HP StoreAll Storage File System User Guide

Abstract
This guide describes how to configure and manage StoreAll software file systems and how to use NFS, SMB, FTP, and HTTP
to access file system data. The guide also describes the following file system features: quotas, remote replication, snapshots,
data retention and validation, data tiering, and file allocation. The guide is intended for system administrators managing 9300
Storage Gateway, 9320 Storage, X9720 Storage, and 9730 Storage. For the latest StoreAll guides, browse to
http://www.hp.com/support/StoreAllManuals.
HP Part Number: TA768-96093
Published: June 2013
Edition: 11
© Copyright 2009, 2013 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
Revision History
Edition  Date            Software  Description
                         Version
1        November 2009   5.3.1     Initial release of HP StoreAll File Serving Software
2        December 2009   5.3.2     Updated license and quotas information
3        April 2010      5.4.0     Added information about file cloning, CIFS, directory tree
                                   quotas, the Statistics tool, and GUI procedures
4        July 2010       5.4.1     Removed information about the Statistics tool
5        December 2010   5.5.0     Added information about authentication, CIFS, FTP, HTTP,
                                   SSL certificates, and remote replication
6        April 2011      5.6       Updated CIFS, FTP, HTTP, and snapshot information
7        September 2011  6.0       Added or updated information about data retention and
                                   validation, software snapshots, block snapshots, remote
                                   replication, HTTP, case insensitivity, and quotas
8        June 2012       6.1       Added or updated information about file systems, file share
                                   creation, rebalancing segments, remote replication, user
                                   authentication, CIFS, LDAP, data retention, data tiering,
                                   file allocation, quotas, and Antivirus software
9        December 2012   6.2       Added or updated information about file systems, physical
                                   volumes, segment rebalancing, remote replication, Antivirus
                                   scans, the REST API, Express Query, auditing, HTTP, quotas,
                                   and data tiering; renamed CIFS to SMB
10       March 2013      6.3       Rebranded product to StoreAll. Added or updated information
                                   about HTTP StoreAll REST API shares in object mode and in
                                   file-compatible mode, virtual hosts, the ibrix_reten_adm and
                                   ibrix_crr commands, WebDAV, reserved space for file systems,
                                   enabling export control, mount options, file space reserved
                                   during mounting, Linux static user mapping, Linux permissions
                                   on files created over SMB, mapping SMB shares, planning
                                   considerations for StoreAll Continuous Replication, the
                                   ibrcfrworker log file, and WORM retention information
11       May 2013        6.3       Updated information about creating StoreAll REST API shares,
                                   snapshots, creating SMB shares, and using the online quota
                                   check. Replaced references to the 9000 with StoreAll
Contents
1 Using StoreAll software file systems.............................................................10
File system operations..............................................................................................................10
File system building blocks.......................................................................................................12
Configuring file systems...........................................................................................................12
Accessing file systems.............................................................................................................13
2 Creating and mounting file systems.............................................................14
Creating a file system..............................................................................................................14
File systems are created in 64-bit mode by default.....................................................14
Using the New Filesystem Wizard........................................................................................14
Configuring additional file system options.............................................................................20
Creating a file system using the CLI......................................................................................21
File limit for directories............................................................................................................22
Managing mountpoints and mount/unmount operations..............................................................22
GUI procedures.................................................................................................................22
CLI procedures..................................................................................................................24
File space reserved during mounting.....................................................................................25
Mounting and unmounting file systems locally on StoreAll clients..............................................26
Limiting file system access for StoreAll clients..............................................................................26
Using Export Control...............................................................................................................27
3 Configuring quotas...................................................................................28
How quotas work...................................................................................................................28
Enabling quotas on a file system and setting grace periods..........................................................28
Setting quotas for users, groups, and directories.........................................................................29
Using a quotas file..................................................................................................................32
Importing quotas from a file................................................................................................32
Exporting quotas to a file....................................................................................................33
Format of the quotas file.....................................................................................................33
Using online quota check........................................................................................................34
Configuring email notifications for quota events..........................................................................35
Deleting quotas......................................................................................................................35
Troubleshooting quotas............................................................................................................36
4 Maintaining file systems............................................................................37
Best practices for file system performance...................................................................................37
Viewing information about file systems and components...............................................................37
Viewing physical volume information....................................................................................38
Viewing volume group information.......................................................................................38
Viewing logical volume information......................................................................................39
Viewing file system information............................................................................................39
Viewing disk space information from a StoreAll Linux client......................................................42
Extending a file system............................................................................................................42
Rebalancing segments in a file system.......................................................................................43
How rebalancing works......................................................................................................44
Rebalancing segments on the GUI........................................................................................44
Rebalancing segments from the CLI......................................................................................46
Tracking the progress of a rebalance task.............................................................................46
Viewing the status of rebalance tasks....................................................................................47
Stopping rebalance tasks....................................................................................................47
Deleting file systems and file system components.........................................................................47
Deleting a file system..........................................................................................................47
Deleting segments, volume groups, and physical volumes........................................................47
Deleting file serving nodes and StoreAll clients.......................................................................48
Checking and repairing file systems..........................................................................................48
Analyzing the integrity of a file system on all segments...........................................................49
Clearing the INFSCK flag on a file system.............................................................................49
Troubleshooting file systems......................................................................................................49
Segment of a file system is accidentally deleted....................................................49
ibrix_pv -a discovers too many or too few devices..................................................................50
Cannot mount on a StoreAll client........................................................................................50
NFS clients cannot access an exported file system..................................................................51
User quota usage data is not being updated.........................................................................51
File system alert is displayed after a segment is evacuated.......................................................51
SegmentNotAvailable is reported.........................................................................................51
SegmentRejected is reported...............................................................................................52
ibrix_fs -c failed with "Bad magic number in super-block"........................................................53
5 Using NFS...............................................................................................55
Exporting a file system............................................................................................................55
Unexporting a file system....................................................................................................58
Using case-insensitive file systems ............................................................................................58
Setting case insensitivity for all users (NFS/Linux/Windows)....................................................58
Viewing the current setting for case insensitivity......................................................................59
Clearing case insensitivity (setting to case sensitive) for all users (NFS/Linux/Windows)..............59
Log files............................................................................................................................59
Case insensitivity and operations affecting directories.............................................................60
6 Configuring authentication for SMB, FTP, and HTTP.......................................61
Using Active Directory with LDAP ID mapping............................................................................61
Using LDAP as the primary authentication method.......................................................................62
Requirements for LDAP users and groups...............................................................................62
Configuring LDAP for StoreAll software.................................................................................62
Configuring authentication from the GUI....................................................................................63
Viewing or changing authentication settings...............................................................................71
Configuring authentication from the CLI.....................................................................................72
Configuring Active Directory ...............................................................................................72
Configuring LDAP..............................................................................................................72
Configuring LDAP ID mapping.............................................................................................73
Configuring Local Users and Groups authentication................................................................74
7 Using SMB..............................................................................................76
Configuring file serving nodes for SMB......................................................................................76
Starting or stopping the SMB service and viewing SMB statistics...................................................76
Monitoring SMB services.........................................................................................................77
SMB shares...........................................................................................................................78
Configuring SMB shares with the GUI...................................................................................79
Configuring SMB signing ...................................................................................................83
Managing SMB shares with the GUI....................................................................................84
Configuring and managing SMB shares with the CLI..............................................................85
Linux permissions on files created over SMB..........................................................................87
Managing SMB shares with Microsoft Management Console...................................................88
Linux static user mapping with Active Directory...........................................................................93
Configuring Active Directory................................................................................................93
Assigning attributes............................................................................................................95
Synchronizing Active Directory 2008 with the NTP server used by the cluster.............................96
Consolidating SMB servers with common share names................................................................96
SMB clients............................................................................................................................98
Viewing quota information..................................................................................................98
Differences in locking behavior............................................................................................98
SMB shadow copy.............................................................................................................98
Permissions in a cross-protocol SMB environment.......................................................................100
How the SMB server handles UIDs and GIDs.......................................................................100
Permissions, UIDs/GIDs, and ACLs.....................................................................................101
Changing the way SMB inherits permissions on files accessed from Linux applications..............102
Troubleshooting SMB............................................................................................................102
8 Using FTP..............................................................................................104
Best practices for configuring FTP............................................................................................104
Managing FTP from the GUI..................................................................................................104
Configuring FTP ..............................................................................................................104
Managing the FTP configuration........................................................................................108
Managing FTP from the CLI....................................................................................................109
Configuring FTP ..............................................................................................................109
Managing the FTP configuration........................................................................................109
The vsftpd service.................................................................................................................110
Starting or stopping the FTP service manually...........................................................................110
Accessing shares..................................................................................................................111
FTP and FTPS commands for anonymous shares...................................................................111
FTP and FTPS commands for non-anonymous shares.............................................................112
FTP and FTPS commands for Fusion Manager......................................................................113
9 Using HTTP............................................................................................114
HTTP share types..................................................................................................................114
Uses for the StoreAll REST API.................................................................................................115
Features for each file share mode...........................................................................................115
Best practices for HTTP REST API shares...................................................................................115
Obtaining the HP StoreAll REST API Sample Client Application...................................................116
Checklist for creating HTTP shares...........................................................................................116
Best practices for configuring HTTP.........................................................................................117
Object mode shares and data retention...................................................................................117
Creating HTTP shares from the GUI.........................................................................................118
Creating standard HTTP shares .........................................................................................118
Creating StoreAll REST API shares......................................................................................123
Managing the HTTP configuration......................................................................................129
Tuning the socket read block size and file write block size .........................................................130
Creating HTTP shares from the CLI..........................................................................................131
Creating HTTP shares.......................................................................................................132
Managing the HTTP configuration......................................................................................132
Starting or stopping the HTTP service manually.........................................................................133
Accessing standard and file-compatible mode HTTP shares........................................................133
Configuring Windows clients to access HTTP WebDAV shares....................................................135
Troubleshooting HTTP............................................................................................................136
10 HTTP-REST API object mode shares..........................................................138
Terminology for StoreAll REST API object mode.........................................................................138
Tutorial for using the HTTP StoreAll REST API object mode..........................................................139
Viewing the list of containers for an account.............................................................................143
Viewing the contents of a container.........................................................................................144
Finding the corresponding object ID from a hash name.............................................................145
Finding the corresponding hash name from an object ID............................................................146
Commands for object mode...................................................................................................148
List Containers.................................................................................................................148
Create Container.............................................................................................................148
List Objects.....................................................................................................................149
Delete Container..............................................................................................................150
Set Container Permission...................................................................................................150
Get Container Permission..................................................................................................150
Create/Update Object.....................................................................................................150
Retrieve Object................................................................................................................151
Delete Object..................................................................................................................151
11 HTTP-REST API file-compatible mode shares...............................................152
Component overview............................................................................................................152
File content transfer...............................................................................................................156
Upload a file (create or replace)........................................................................................156
Download a file...............................................................................................................157
Delete a file....................................................................................................................157
Custom metadata assignment.................................................................................................158
Upload custom metadata (add or replace)..........................................................................158
Delete custom metadata....................................................................................................160
Metadata queries.................................................................................................................160
System and custom metadata.............................................................................................161
System metadata available................................................................................................161
system::onDiskAtime.........................................................................................................165
system::mode..................................................................................................................165
Wildcards.......................................................................................................................166
Pagination......................................................................................................................166
HTTP syntax....................................................................................................................166
JSON response format......................................................................................................168
Example queries..............................................................................................................169
Retention properties assignment..............................................................................................171
HTTP syntax....................................................................................................................172
HTTP Status Codes................................................................................................................173
12 Managing SSL certificates......................................................................174
Creating an SSL certificate.....................................................................................................174
Adding a certificate to the cluster............................................................................................176
Exporting a certificate...........................................................................................................177
Deleting a certificate.............................................................................................................177
13 Using remote replication........................................................................178
Overview............................................................................................................................178
Continuous or run-once replication modes...........................................................................178
Using intercluster replications.............................................................................................179
Using intracluster replications............................................................................................180
File system snapshot replication..........................................................................................180
Configuring the target export for replication to a remote cluster...................................................180
GUI procedure................................................................................................................181
CLI procedure..................................................................................................................183
Configuring and managing replication tasks on the GUI............................................................184
Viewing replication tasks...................................................................................................184
Starting a replication task ................................................................................................187
Pausing or resuming a replication task................................................................................190
Stopping a replication task................................................................................................190
Configuring and managing replication tasks from the CLI...........................................................190
Starting a remote replication task to a remote cluster............................................................190
Starting an intracluster remote replication task.....................................................................191
Starting a run-once directory replication task.......................................................................192
Stopping a remote replication task.....................................................................................192
Pausing a remote replication task.......................................................................................192
Resuming a remote replication task....................................................................................192
Querying remote replication tasks......................................................................................192
Replicating WORM/retained files...........................................................................................193
Configuring remote failover/failback.......................................................................................193
Understanding the ibrcfrworker log file (ibrcfrworker.log) ..........................................................194
Troubleshooting remote replication..........................................................................................195
14 Managing data retention.......................................................................196
Overview............................................................................................................................196
Data retention.................................................................................................................196
Data validation scans.......................................................................................................197
Enabling file systems for data retention....................................................................................198
Viewing the retention profile for a file system.......................................................................201
Changing the retention profile for a file system.....................................................................202
Managing WORM and retained files......................................................................................202
Setting a normal file to WORM or WORM-retained..............................................................202
Viewing the retention information for a file..........................................................................204
File administration............................................................................................................204
Running data validation scans................................................................................................208
Scheduling a validation scan.............................................................................................208
Starting an on-demand validation scan...............................................................................209
Viewing, stopping, or pausing a scan.................................................................................210
Viewing validation scan results..........................................................................................210
Viewing and comparing hash sums for a file.......................................................................211
Handling validation scan errors.........................................................................................211
Creating data retention reports...............................................................................................212
Generating and managing data retention reports.................................................................214
Generating data retention reports from the CLI.....................................................................215
Using hard links with WORM files..........................................................................................215
Using remote replication........................................................................................................215
Backup support for data retention...........................................................................................216
Troubleshooting data retention................................................................................................216
15 Express Query......................................................................................217
Managing the metadata service.............................................................................................217
Backing up and restoring file systems with Express Query data..............................................218
Saving and importing file system metadata..........................................................................219
Metadata and continuous remote replication.......................................................................221
Metadata and synchronized server times.................................................................................221
Managing auditing...............................................................................................................222
Audit log........................................................................................................................222
Audit log reports..............................................................................................................223
16 Configuring Antivirus support..................................................................226
Adding or removing external virus scan engines.......................................................................227
Enabling or disabling Antivirus on StoreAll file systems..............................................................228
Updating Antivirus definitions.................................................................................................228
Configuring Antivirus settings.................................................................................................229
Managing Antivirus scans......................................................................................................233
Starting or scheduling Antivirus scans.................................................................................234
Viewing, pausing, resuming, or stopping Antivirus scan tasks ................................................235
Viewing Antivirus statistics......................................................................................................237
Antivirus quarantines and software snapshots...........................................................................237
17 Creating StoreAll software snapshots.......................................................239
File system limits for snap trees and snapshots..........................................................................239
Configuring snapshot directory trees and schedules...................................................................239
Contents
7
Modifying a snapshot schedule.........................................................................................241
Managing software snapshots................................................................................................241
Taking an on-demand snapshot.........................................................................................241
Determining space used by snapshots................................................................................242
Accessing snapshot directories...........................................................................................242
Restoring files from snapshots............................................................................................244
Deleting snapshots...........................................................................................................244
Moving files between snap trees.............................................................................................248
Backing up snapshots............................................................................................................248
18 Creating block snapshots.......................................................................249
Setting up snapshots.............................................................................................................250
Preparing the snapshot partition........................................................................................250
Registering for snapshots..................................................................................................250
Discovering LUNs in the array............................................................................................250
Reviewing snapshot storage allocation................................................................................250
Automated block snapshots....................................................................................................250
Creating automated snapshots using the Management Console.............................................251
Creating an automated snapshot scheme from the CLI..........................................................255
Other automated snapshot procedures................................................................................255
Managing block snapshots....................................................................................................256
Creating an on-demand snapshot......................................................................................256
Mounting or unmounting a snapshot..................................................................................256
Recovering system resources on snapshot failure...................................................................256
Deleting snapshots...........................................................................................................257
Viewing snapshot information............................................................................................257
Accessing snapshot file systems..............................................................................................258
Troubleshooting block snapshots.............................................................................................260
19 Using data tiering.................................................................................261
Creating and managing data tiers..........................................................................................261
Viewing tier assignments and managing segments....................................................................267
Viewing data tiering rules......................................................................................................268
Running a migration task.......................................................................................................270
Configuring tiers and migrating data using the CLI....................................................................271
Changing the tiering configuration with the CLI.........................................................................274
Writing tiering rules..............................................................................................................275
Rule attributes..................................................................................................................275
Operators and date/time qualifiers....................................................................................275
Rule keywords.................................................................................................................276
Migration rule examples...................................................................................................276
Ambiguous rules..............................................................................................................277
20 Using file allocation..............................................................................279
Overview............................................................................................................................279
File allocation policies......................................................................................................279
How file allocation settings are evaluated...........................................................................280
When file allocation settings take effect on the StoreAll client.................................................281
Using CLI commands for file allocation...............................................................................281
Setting file and directory allocation policies.............................................................................281
Setting file and directory allocation policies from the CLI.......................................................282
Setting segment preferences...................................................................................................282
Creating a pool of preferred segments from the CLI..............................................................283
Restoring the default segment preference.............................................................................284
Tuning allocation policy settings.............................................................................................284
Listing allocation policies.......................................................................................................285
21 Support and other resources...................................................................286
Contacting HP......................................................................................................................286
Related information...............................................................................................................286
HP websites.........................................................................................................................286
Subscription service..............................................................................................................286
22 Documentation feedback.......................................................................287
Glossary..................................................................................................288
Index.......................................................................................................290
1 Using StoreAll software file systems
File system operations
The following diagram highlights the operating principles of the StoreAll file system.
The topology in the diagram reflects the architecture of the HP 9320, which uses a building block
of server pairs (known as couplets) with SAS attached storage. In the diagram:
•	There are four file serving nodes, SS1–SS4. These nodes are also called segment servers.
•	SS1 and SS2 share access to segments 1–4 through SAS connections to a shared storage array.
•	SS3 and SS4 share access to segments 5–8 through SAS connections to a shared storage array.
•	One client is accessing the namespace using NAS protocols.
•	One client is using the proprietary StoreAll client.
The following steps correspond to the numbering in the diagram:
1.	The “namespace” of the file system is a collection of segments. Each segment is simply a repository for files and directories with no implicit namespace relationships among them. (Specifically, a segment need not be a complete, rooted directory tree.) Segments can be any size, and different segments can be different sizes.
2.	The location of files and directories within particular segments in the file space is independent of their respective and relative locations in the namespace. For example, a directory (Dir1) can be located on one segment, while the files contained in that directory (File1 and File2) are resident on other segments. The selection of segments for placing files and directories is done dynamically when the file/directory is created, as determined by an allocation policy. The allocation policy is set by the system administrator in accordance with the anticipated access patterns and specific criteria relevant to the installation (such as performance and manageability). The allocation policy can be changed at any time, even when the file system is mounted and in use. Files can be redistributed across segments using a rebalancing utility. For example, rebalancing can be used when some segments are too full while others have free capacity, or when files need to be distributed across new segments.
3.	Segment servers are responsible for managing individual segments of the file system. Each segment is assigned to one segment server, and each server may own multiple segments, as shown by the color coding in the diagram. Segment ownership can be migrated between servers with direct access to the storage volume while the file system is mounted. For example, Seg1 can be migrated between SS1 and SS2 but not to SS3 or SS4.
	Additional servers can be added to the system dynamically to meet growing performance needs, without adding more capacity, by distributing the ownership of existing segments for proper load balancing and utilization of all servers. Conversely, additional capacity can be added to the file system while in active use without adding more servers—ownership of the new segments is distributed among existing servers. Servers can be configured with failover protection, with other servers being designated as standby servers that automatically take control of a server’s segments if a failure occurs.
4.	Clients run the applications that use the file system. Clients can access the file system either as a locally mounted cluster file system using the StoreAll Client or using standard network attached storage (NAS) protocols such as NFS and Server Message Block (SMB).
5.	Use of the StoreAll Client on a client system has some significant advantages over the NAS approach—specifically, the StoreAll Client driver is aware of the segmented architecture of the file system and, based on the file/directory being accessed, can route requests directly to the correct segment server, yielding balanced resource utilization and high performance. However, the StoreAll Client is available only for a limited range of operating systems.
6.	NAS protocols such as NFS and SMB offer the benefits of multi-platform support and low cost of administration of client software, as the client drivers for these protocols are generally available with the base operating system. When using NAS protocols, a client must mount the file system from one (or more) of the segment servers. As shown in the diagram, all requests are sent to the server from which the share is mounted, which then performs the required routing.
7.	Any segment server in the namespace can access any segment. There are three cases:
	a.	Selected segment is owned by the segment server initiating the operation (for example, SS1 accessing Seg1).
	b.	Selected segment is owned by another segment server but is directly accessible at the block level by the segment server initiating the operation (for example, SS1 accessing Seg3).
	c.	Selected segment is owned by another segment server and is not directly accessible by the segment server initiating the operation (for example, SS1 accessing Seg5).
	Each case is handled differently. The data paths are shown in heavy red broken lines in the diagram:
	a.	The segment server initiating the operation services the read or write request to the local segment.
	b.	In this case, reads and writes take different routes:
		1)	The segment server initiating the operation can read files directly from the segment across the SAN; this is called a SAN READ.
		2)	The segment server initiating the operation routes writes over the IP network to the segment server owning the segment. That server then writes data to the segment.
	c.	All reads and writes must be routed over the IP network between the segment servers.
8.	Step 7 assumed that the server had to go to a segment to read a file. However, every segment server that reads a file keeps a copy of it cached in its memory regardless of which segment it was read from (in the diagram, two servers have cached copies of File 1). The cached copies are used to service local read requests for the file until the copy is made invalid, for example, because the original file has been changed. The file system keeps track of which servers have cached copies of a file and manages cache coherency using delegations, which are StoreAll file system metadata structures used to track cached copies of data and metadata.
File system building blocks
A file system is created from building blocks. The first block comprises an underlying physical
hardware RAID protected volume. Each volume is placed in a volume group (one volume per
volume group) and segments (logical volumes) are created from the volume groups. The system
administrator does not have to manage these low level operations. The built-in volume manager
handles the discovery of physical volumes, the creation of volume groups and the creation of logical
volumes as part of the process of creating a file system. It also assigns the ownership of segments
to servers in the cluster (File Serving Nodes).
Configuring file systems
You can configure your file systems to use the following features:
•	Quotas. This feature allows you to assign quotas to individual users or groups, or to a directory tree. Individual quotas limit the amount of storage or the number of files that a user or group can use in a file system. Directory tree quotas limit the amount of storage and the number of files that can be created on a file system located at a specific directory tree. See “Configuring quotas” (page 28).
•	Remote replication. This feature provides a method to replicate changes in a source file system on one cluster to a target file system on either the same cluster or a second cluster. See “Using remote replication” (page 178).
•	Data retention and validation. Data retention ensures that files cannot be modified or deleted for a specific retention period. Data validation scans can be used to ensure that files remain unchanged. See “Managing data retention” (page 196).
•	Antivirus support. This feature is used with supported Antivirus software, allowing you to scan files on a StoreAll file system. See “Configuring Antivirus support” (page 226).
•	StoreAll software snapshots. This feature allows you to capture a point-in-time copy of a file system or directory for online backup purposes and to simplify recovery of files from accidental deletion. Users can access the file system or directory as it appeared at the instant of the snapshot. See “Creating StoreAll software snapshots” (page 239).
•	Block Snapshots. This feature uses the array capabilities to capture a point-in-time copy of a file system for online backup purposes and to simplify recovery of files from accidental deletion. The snapshot replicates all file system entities at the time of capture and is managed exactly like any other file system. See “Creating block snapshots” (page 249).
•	Data tiering. This feature allows you to set a preferred tier where newly created files will be stored. You can then create a tiering policy to move files from initial storage, based on file attributes such as modification time, access time, file size, or file type. See “Using data tiering” (page 261).
•	File allocation. This feature allocates new files and directories to segments according to the allocation policy and segment preferences that are in effect for a client. An allocation policy is an algorithm that determines the segments that are selected when clients write to a file system. See “Using file allocation” (page 279).
Accessing file systems
Clients can use the following standard NAS protocols to access file system data:
•	NFS. See “Using NFS” (page 55) for more information.
•	SMB. See “Using SMB” (page 76) for more information.
•	FTP. See “Using FTP” (page 104) for more information.
•	HTTP. See “Using HTTP” (page 114) for more information.
You can also use StoreAll clients to access file systems. Typically, these clients are installed during
the initial system setup. See the HP StoreAll Storage Installation Guide for more information.
2 Creating and mounting file systems
This chapter describes how to create file systems and mount or unmount them.
Creating a file system
You can create a file system using the New Filesystem Wizard provided with the GUI, or you can
use CLI commands. The New Filesystem Wizard also allows you to create an NFS export or an
SMB share for the file system.
File systems are created in 64-bit mode by default
Prior to StoreAll 6.1, file systems were created by default with 32-bit compatibility mode enabled. Starting in 6.1, the default behavior has changed and compatibility mode is now disabled: StoreAll creates file systems in 64-bit mode unless you change the compatibility mode when creating the file system. However, if the original file system was created with 32-bit compatibility mode enabled and you later upgrade to 6.1 or later and then extend the StoreAll file system, new segments are formatted into the file system as 64-bit mode segments.
The 32-bit compatibility limits the number of inodes on any newly formatted segments to be 32-bit
addressable based on the segbits (number of bits allocated to inodes and segment numbers). If
you have a 32-bit client operating system and 32-bit applications, the 32-bit compatibility mode
should be enabled.
The following example shows the relevant portion of the output from the ibrix_fs -f fsname -i command, indicating how newer segments (65,863,680) appear on a file system originally created with compatibility mode enabled.
NOTE: The 12th column (“FFREE”) shows the total available inode count per segment: approximately 66 million for the original segments and approximately 1 billion for the newer 64-bit segments. This mix of segments and inode counts does not negatively affect the operation of your file system or any applications.
Using the New Filesystem Wizard
To start the wizard, click New on the Filesystems top panel. The wizard includes several steps and
a summary, starting with selecting the storage for the file system.
NOTE:
For details about the prompts for each step of the wizard, see the GUI online help.
On the Select Storage dialog box, select the storage that will be used for the file system.
Configure Options dialog box. Enter a name for the file system, and specify the appropriate
configuration options.
WORM/Data Retention dialog box. If data retention will be used on the file system, enable it and
set the retention policy. See “Managing data retention” (page 196) for more information.
You can configure the following:
•	Default retention period. This period determines whether you can manage WORM (non-retained) files as well as WORM-retained files. (WORM (non-retained) files can be deleted at any time; WORM-retained files can be deleted only after the file's retention period has expired.)
	To manage only WORM-retained files, set the default retention period to a non-zero value. WORM-retained files then use this period by default; however, you can assign a different retention period if desired.
	To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention Period. The default retention period is then set to 0 seconds. When you make a WORM file retained, you will need to assign a retention period to the file.
•	Autocommit period. When the autocommit period is set, files become WORM or WORM-retained if they are not changed during the period. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) To use this feature, check Set Auto-Commit Period and specify the time period. The minimum value for the autocommit period is five minutes, and the maximum value is one year. If you plan to keep normal files on the file system, do not set the autocommit period.
•	Data validation. Select this option to schedule periodic scans on the file system. Use the default schedule, or click Modify to open the Data Validation Scan Schedule dialog box and configure your own schedule.
•	Report Data Generation. Select this option if you want to create data retention reports. Use the default schedule, or click Modify to open the Report Data Generation Schedule dialog box and configure your own schedule.
•	Express Query. Check this option to enable StoreAll Express Query on the file system. Express Query is a database used to record metadata state changes occurring on the file system.
Auditing Options dialog box. If you enabled Express Query on the WORM/Data Retention dialog
box, you can also enable auditing and select the events that you want to log.
Default File Shares dialog box. Use this dialog box to create an NFS export and/or an SMB share
at the root of the file system. The default settings are used. See “Using NFS” (page 55) and “Using
SMB” (page 76) for more information.
Review the Summary to ensure that the file system is configured properly. If necessary, you can
return to a dialog box and make any corrections.
Configuring additional file system options
The New Filesystem wizard creates the file system with the default settings for several options. You
can change these settings on the Modify Filesystem Properties dialog box, and can also configure
data retention, data tiering, and file allocation. To open the dialog box, select the file system on
the Filesystems panel. Select Summary from the lower Navigator, and then click Modify on the
Summary panel.
The General tab allows you to enable or disable quotas, Export Control, and 32-bit compatibility
mode on the file system.
When Export Control is enabled on a file system, by default, StoreAll clients have no access to the
file system. Instead, the system administrator grants the clients access by executing the
ibrix_mount command. Enabling Export Control does not affect file-system access by file serving
nodes or any NFS/SMB clients attached to the nodes. File serving nodes always have RW access.
By default, file systems are created in 64-bit mode. If clients need to run a 32-bit application, you
can enable 32-bit compatibility mode. This option is applied to the file system at mount time and
can be enabled or disabled as necessary.
The Data Retention tab allows you to change the data retention configuration. The file system must
be unmounted. See “Configuring data retention on existing file systems” (page 201) for more
information.
NOTE: Data retention cannot be enabled on a file system created on StoreAll software 5.6 or
earlier versions until the file system is upgraded.
The Allocation, Segment Preference, and Host Allocation tabs are used to modify file allocation
policies and to specify segment preferences for file serving nodes and StoreAll clients. See “Using
file allocation” (page 279) for more information.
Creating a file system using the CLI
The ibrix_fs command is used to create a file system. It can be used in the following ways:
•	Create a file system with the specified segments (segments are logical volumes):
	ibrix_fs -c -f FSNAME -s LVLIST [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL]
•	Create a file system and assign specific segments to specific file serving nodes:
	ibrix_fs -c -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2,... [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL]
•	Create a file system from physical volumes in a single step:
	ibrix_fs -c -f FSNAME -p PVLIST [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL]
In the commands, the -t option specifies a tier. TIERNAME can be any alphanumeric, case-sensitive,
text string. Tier assignment is not affected by other options that can be set with the ibrix_fs
command.
NOTE: A tier is created whenever a segment is assigned to it. Be careful to spell the tier name correctly when you add segments to an existing tier. If you misspell the name, a new tier is created under the misspelled name, and no error is reported.
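Based on the ibrix_fs syntax above, the following is a minimal sketch of creating a tiered file system. The file system, segment, and tier names (ifs2, ilv_1, ilv_2, TIER1) are examples, and the comma-separated form of LVLIST is an assumption; the command line is printed for review rather than run against a live cluster.

```shell
#!/bin/sh
# Sketch: create file system "ifs2" from segments ilv_1 and ilv_2,
# assigning both segments to tier TIER1 (all names are examples).
# The command is echoed so it can be reviewed before being run on
# an actual StoreAll cluster.
FSNAME=ifs2
TIER=TIER1
CMD="ibrix_fs -c -f $FSNAME -s ilv_1,ilv_2 -t $TIER -a"
echo "$CMD"
```

Note that mistyping the tier name in such a command would silently create a new tier, as described in the note above.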
Features for file system creation
Feature		Option
Data retention	-o "retenMode=<mode>,retenDefPeriod=<period>,retenMinPeriod=<period>,retenMaxPeriod=<period>,retenAutoCommitPeriod=<period>"
Express Query	-T
Auditing	-oa OPTION1=VALUE1[,OPTION2=VALUE2,...]
The following example enables data retention, Express Query, and auditing, with all events being
audited:
ibrix_fs -o
"retenMode=Enterprise,retenDefPeriod=5m,retenMinPeriod=2s,retenMaxPeriod=30y,retenAutoCommitPeriod=1d"
-T -oa audit_mode=on,all=on -c -f ifs1 -s ilv_[1-4] -a
Creating a file system manually from physical volumes
This procedure is equivalent to using ibrix_fs to create a file system from physical volumes in
a single step. Instead of a single command, you build the file system components individually:
1. Discover the physical volumes in the system. Use the ibrix_pv command.
2. Create volume groups from the discovered physical volumes. Use the ibrix_vg command.
3. Create logical volumes (also called segments) from volume groups. Use the ibrix_lv
command.
4. Create the file system from the new logical volumes. Use the ibrix_fs command.
See the HP StoreAll Storage CLI Reference Guide for details about these commands.
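The four steps can be sketched as follows. Only the ibrix_fs syntax shown earlier in this chapter is documented here; the options for ibrix_pv, ibrix_vg, and ibrix_lv are deliberately left as "..." placeholders rather than guessed, and the commands are printed rather than executed. Consult the HP StoreAll Storage CLI Reference Guide for the actual flags.

```shell
#!/bin/sh
# Manual file system build sketch. The "..." placeholders stand for
# flags documented in the HP StoreAll Storage CLI Reference Guide,
# not invented here; "ifs1" and LVLIST are example values.
S1="ibrix_pv ..."                       # 1. discover physical volumes
S2="ibrix_vg ..."                       # 2. create volume groups
S3="ibrix_lv ..."                       # 3. create segments (logical volumes)
S4="ibrix_fs -c -f ifs1 -s LVLIST -a"   # 4. create the file system
printf '%s\n' "$S1" "$S2" "$S3" "$S4"
```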
File limit for directories
The maximum number of files in a directory depends on the length of the file names, and also the
names themselves. The maximum size of a directory is approximately 4 GB (double indirect blocks).
An average file name length of eight characters allows about 12 million entries. However, because directory entries are hashed, it is unlikely that a directory can actually hold this many entries. Files with a similar naming pattern are hashed into the same bucket. If that bucket fills up, another file cannot be created in it, even if free space is available elsewhere in the directory. Creating a file with a different name may still succeed, because the new name may hash into a bucket that has room; whether it does is effectively unpredictable.
Managing mountpoints and mount/unmount operations
GUI procedures
When you use the New Filesystem Wizard to create a file system, you can specify a name for the
mountpoint and indicate whether the file system should be mounted after it is created. The wizard
will create the mountpoint if necessary. The Filesystems panel shows the file systems created on
the cluster.
To view the mountpoint information for a file system, select the file system on the Filesystems panel,
and click Mountpoints in the lower Navigator. The Mountpoints panel shows the hosts that have
mounted the file system, the name of the mountpoint, the access (RW or RO) allowed to the host,
and whether the file system is mounted.
To mount or remount a file system, select it on the Filesystems panel and click Mount. You can
select several mount options on the Mount Filesystem dialog box. To remount the file system, click
remount.
IMPORTANT:
Keep the following in mind:
• Mount options persist only if they are set at the mountpoint. Mount options that are not set at the mountpoint are reset to the mountpoint's mount options when the file system is rebooted or remounted.
• The ibrix_fs -i and ibrix_mountpoint -l commands display only the mount options set for the mountpoint.
• The mount command displays the noatime option. Ignore the noatime option; it is no longer used. If you set the atime option, the mount command displays atime instead of noatime.
The available mount options are:
• atime: Update the inode access time when a file is accessed.
• nodiratime: Do not update the directory inode access time when the directory is accessed.
• nodquotstatfs: Disable file system reporting based on directory tree quota limits.
• path: For StoreAll clients only, mount on the specified subdirectory path of the file system instead of the root.
• remount: Remounts a file system without taking it offline. Use this option to change the current mount options on a file system.
You can also view mountpoint information for a particular server. Select that server on the Servers
panel, and select Mountpoints from the lower Navigator. To delete a mountpoint, select that
mountpoint and click Delete.
CLI procedures
The CLI commands are executed immediately on file serving nodes. For StoreAll clients, the command
intention is stored in the active Fusion Manager. When StoreAll software services start on a client,
the client queries the active Fusion Manager for any commands. If the services are already running,
you can force the client to query the Fusion Manager by executing either ibrix_client or
ibrix_lwmount -a on the client, or by rebooting the client.
If you have configured hostgroups for your StoreAll clients, you can apply a command to a specific
hostgroup. For information about creating hostgroups, see the administration guide for your system.
Creating mountpoints
Mountpoints must exist before a file system can be mounted. To create a mountpoint on file serving
nodes and StoreAll clients, enter the following command:
ibrix_mountpoint -c [-h HOSTLIST] -m MOUNTPOINT
To create a mountpoint on a hostgroup, enter the following command:
ibrix_mountpoint -c -g GROUPLIST -m MOUNTPOINT
For information about mountpoint options, see the "ibrix_mountpoint" section in the HP StoreAll Storage CLI Reference Guide.
Deleting mountpoints
Before deleting mountpoints, verify that no file systems are mounted on them. To delete a mountpoint
from file serving nodes and StoreAll clients, use the following command:
ibrix_mountpoint -d [-h HOSTLIST] -m MOUNTPOINT
To delete a mountpoint from specific hostgroups, use the following command:
ibrix_mountpoint -d -g GROUPLIST -m MOUNTPOINT
Viewing mountpoint information
To view mounted file systems and their mountpoints on all nodes, use the following command:
ibrix_mountpoint -l
Mounting a file system
File system mounts are managed with the ibrix_mount command. The command options and
the default file system access allowed for StoreAll clients depend on whether the optional Export
Control feature has been enabled on the file system (see “Using Export Control” (page 27) for
more information). This section assumes that Export Control is not enabled, which is the default.
NOTE: A file system must be mounted on the file serving node that owns the root segment (by
default, this is segment 1) before it can be mounted on any other host. StoreAll software
automatically mounts a file system on the root segment when you mount it on all file serving nodes
in the cluster. The mountpoints must already exist.
Mount a file system on file serving nodes and StoreAll clients:
ibrix_mount -f FSNAME [-o {RW|RO}] [-O MOUNTOPTIONS] -h HOSTLIST -m MOUNTPOINT
Mount a file system on a hostgroup:
ibrix_mount -f FSNAME [-o {RW|RO}] -g GROUP -m MOUNTPOINT
NOTE: If you do not include the -o parameter, the default access option for the mounted file
system is Read Write.
Unmounting a file system
Use the following commands to unmount a file system.
NOTE: Be sure to unmount the root segment last. Attempting to unmount it while other segments
are still mounted will result in failure. If the file system was exported using NFS, you must unexport
it before you can unmount it (see “Exporting a file system” (page 55)).
To unmount a file system from one or more file serving nodes, StoreAll clients, or hostgroups:
ibrix_umount -f FSNAME [-h HOSTLIST | -g GROUPLIST]
To unmount a file system from a specific mountpoint on a file serving node, StoreAll client, or
hostgroup:
ibrix_umount -m MOUNTPOINT [-h HOSTLIST | -g GROUPLIST]
Enabling or disabling 32-bit compatibility mode
If clients are running 32-bit applications, you can enable 32-bit compatibility mode. This mode is
applied to the file system at mount time. You can enable or disable 32-bit compatibility on the
Modify Filesystems Properties dialog box, and can also use the following commands.
Enable 32-bit compatibility mode:
ibrix_fs_tune -c -e -f FSNAME
Disable 32-bit compatibility mode:
ibrix_fs_tune -c -d -f FSNAME
File space reserved during mounting
When first mounting a file system, approximately 5% of each segment is used for reserved space.
As a result, after mounting the file system, you will see that the overall size of the file system
decreases. Subsequent mounts are not affected.
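The effect of the reservation on reported capacity is simple arithmetic. A back-of-the-envelope sketch, treating the roughly 5% figure above as approximate (the segment sizes below are hypothetical):

```python
# Sketch of the first-mount reservation described above: roughly 5% of each
# segment is set aside, so the reported file system size shrinks accordingly.
# The 5% figure is from this guide; treat it as approximate.

def usable_capacity_gb(segment_sizes_gb, reserved_fraction=0.05):
    """Total file system capacity after the first-mount reservation."""
    return sum(size * (1 - reserved_fraction) for size in segment_sizes_gb)

# Four 1,024 GB segments each lose roughly 51 GB to reserved space:
total = usable_capacity_gb([1024] * 4)
```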
Mounting and unmounting file systems locally on StoreAll clients
On both Linux and Windows StoreAll clients, you can locally override a mount. For example, if
the Fusion Manager configuration database has a file system marked as mounted for a particular
client, that client can locally unmount the file system.
Linux StoreAll clients
To mount a file system locally, use the following command on the StoreAll Linux client. A Fusion
Manager name (fmname) is required only if this StoreAll client is registered with multiple Fusion
Managers.
ibrix_lwmount -f [fmname:]fsname -m mountpoint [-o options]
To unmount a file system locally, use one of the following commands on the StoreAll Linux client.
The first command detaches the specified file system from the client. The second command detaches
the file system that is mounted on the specified mountpoint.
ibrix_lwumount -f [fmname:]FSNAME
ibrix_lwumount -m MOUNTPOINT
Windows StoreAll clients
Use the StoreAll Windows client GUI to mount file systems locally. Click the Mount tab on the GUI
and select the cluster name from the list (the cluster name is the Fusion Manager name). Then, enter
the name of the file system, select a drive, and click Mount.
If you are using Remote Desktop to access the client and the drive letter is not displayed, log out
and log back in. This is a known limitation of Windows Terminal Services when exposing new
drives.
To unmount a file system on the StoreAll Windows client GUI, click the Umount tab, select the file
system, and then click Umount.
Limiting file system access for StoreAll clients
By default, all StoreAll clients can mount a file system after a mountpoint has been created. To limit
access to specific StoreAll clients, create an access entry. When an access entry is in place for a
file system (or a subdirectory of the file system), it enters secure mode, and mount access is restricted
to clients specified in the access entry. All other clients are denied mount access.
Select the file system on the Filesystems top panel, and then select Client Exports in the lower
navigator. On the Create Client Export(s) dialog box, select the clients or hostgroups that will be
allowed access to the file system or a subdirectory of the file system.
To remove a client access entry, select the affected file system on the GUI, and then select Client
Exports from the lower Navigator. Select the access entry from the Client Exports display, and click
Delete.
On the CLI, use the ibrix_exportfs command to create an access entry:
ibrix_exportfs -c -f FSNAME -p CLIENT:/PATHNAME,CLIENT2:/PATHNAME,...
To see all access entries that have been created, use the following command:
ibrix_exportfs -c -l
To remove an access entry, use the following command:
ibrix_exportfs -c -U -f FSNAME -p CLIENT:/PATHNAME,CLIENT2:/PATHNAME,...
To mount a restricted file system or a subdirectory of the restricted file system on a StoreAll client
using the CLI, specify the exported path as the option for the ibrix_lwmount command:
ibrix_lwmount -f FSNAME -m MOUNTPOINT -o mountpath=/PATHNAME
Using Export Control
When Export Control is enabled on a file system, by default, StoreAll clients have no access to the
file system. Instead, the system administrator grants the clients access by executing the
ibrix_mount command. Export Control affects only NFS access for StoreAll clients. Enabling
Export Control does not affect access from a file serving node to a file system. File serving nodes
always have RW access.
To determine whether Export Control is enabled, run ibrix_fs -i or ibrix_fs -l. The output
indicates whether Export Control is enabled.
To enable Export Control, include the -C option in the ibrix_fs command:
ibrix_fs -C -E -f FSNAME
To disable Export Control, execute the ibrix_fs command with the -C and -D options:
ibrix_fs -C -D -f FSNAME
To mount a file system that has Export Control enabled, include the ibrix_mount -o {RW|RO}
option to specify that all clients have either RO or RW access to the file system. The default is RO.
In addition, when specifying a hostgroup, the root user can be limited to RO access by adding
the root_ro parameter.
3 Configuring quotas
Quotas can be assigned to individual users or groups, or to a directory tree. Individual quotas limit the amount of storage or the number of files that a user or group can use in a file system. Directory tree quotas limit the amount of storage and the number of files that can be created in a specific directory tree of a file system. Note the following:
• You can assign quotas to a user, group, or directory on the GUI or from the CLI. You can also import quota information from a file.
• If a user has a user quota and a group quota for the same file system, the first quota reached takes precedence.
• Nested directory quotas are not supported. You cannot configure quotas on a subdirectory differently than on the parent directory.
• The existing quota configuration can be exported to a file at any time.
NOTE: HP recommends that you export the quota configuration and save the resulting file
whenever you update quotas on your cluster.
How quotas work
Quotas can be set for users, groups, or directories in a file system. A quota is specified by hard and soft limits on both the megabytes of storage and the number of files allotted to the user, group, or directory. The hard limit is the maximum storage (in megabytes and number of files) allotted to the user, group, or directory. The soft limit specifies the number of megabytes or files that, when reached, starts a countdown timer.
If the storage usage or number of files is not reduced below the soft limit, the timer runs until either the hard limit is reached or the grace period for the timer elapses. (The default grace period is seven days.) When the timer stops, the user, group, or directory for which the quota was set cannot store any more data, and the system issues quota exceeded messages at each write attempt.
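The soft/hard limit interaction above can be modeled in a few lines. This is a minimal sketch, not part of the StoreAll software; only the seven-day default grace period comes from this guide.

```python
# Minimal model of the soft/hard quota behavior described above. The
# seven-day default grace period is from this guide; the class itself is
# illustrative, not a StoreAll component.
import time

GRACE_PERIOD = 7 * 24 * 3600   # default grace period: seven days, in seconds

class Quota:
    def __init__(self, soft_mb, hard_mb):
        self.soft_mb, self.hard_mb = soft_mb, hard_mb
        self.timer_started = None     # set when usage first exceeds the soft limit

    def check_write(self, usage_mb, now=None):
        now = time.time() if now is None else now
        if usage_mb >= self.hard_mb:
            return "quota exceeded"               # hard limit is absolute
        if usage_mb >= self.soft_mb:
            if self.timer_started is None:
                self.timer_started = now          # soft limit reached: start countdown
            elif now - self.timer_started > GRACE_PERIOD:
                return "quota exceeded"           # grace period elapsed
        else:
            self.timer_started = None             # usage fell below the soft limit
        return "ok"

q = Quota(soft_mb=100, hard_mb=200)
status = q.check_write(150, now=0)    # soft limit exceeded: write allowed, timer starts
```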
NOTE: Quota statistics are updated on a regular basis (at one-minute intervals). At each update,
the file and storage usage for each quota-enabled user, group, or directory tree is queried, and
the result is distributed to all file serving nodes. Users or groups can temporarily exceed their quota
if the allocation policy in effect for a file system causes their data to be written to different file
serving nodes during the statistics update interval. In this situation, it is possible for the storage
usage visible to each file serving node to be below or at the quota limit while the aggregate storage
use exceeds the limit.
There is a delay of several minutes between the time a command to update quotas is executed
and when the results are displayed by the ibrix_edquota -l command. This is normal behavior.
Enabling quotas on a file system and setting grace periods
Before you can set quota limits, quotas must be enabled on the file system. You can enable quotas
on a file system at any time.
To view the current quotas configuration on the GUI, select the file system and then select Quotas
from the lower Navigator. The Quotas Summary panel specifies whether quotas are enabled and
lists the grace periods for blocks and inodes.
To change the quotas configuration, click Modify on the Quota Summary panel.
On the CLI, run the following command to enable quotas on an existing file system:
ibrix_fs -q -E -f FSNAME
Setting quotas for users, groups, and directories
Before configuring quotas, the quota feature must be enabled on the file system and the file system
must be mounted.
NOTE:
For the purpose of setting quotas, no UID or GID can exceed 2,147,483,647.
Setting user quotas to zero removes the quotas.
The Quota Management Wizard can be used to create, modify, or delete quotas for users, groups,
and directories in the selected file system. Click Quotas Wizard on the Quota Summary panel to
open the wizard. The Welcome dialog box describes the options available in the wizard.
The User Quotas dialog box is used to create, modify, or delete quotas for users. To add a user
quota, enter the required information and click Add. Users having quotas are listed in the table at
the bottom of the dialog box. To modify quotas for a user, check the box preceding that user. You
can then adjust the quotas as needed. To delete quotas for a user, check the box and click Delete.
The Group Quotas dialog box is used to create, modify, or delete quotas for groups. To add a
group quota, enter the required information and click Add. The new quota applies to all users in
the group. Groups having quotas are listed in the table at the bottom of the dialog box. To modify
quotas for a group, check the box preceding that group. You can then adjust the quotas as needed.
To delete quotas for a group, check the box and click Delete.
The Directory Quotas dialog box is used to create, modify, or delete quotas for directories. To add
a directory quota, enter the required information and click Add. The Name (Alias) is a unique
identifier for the quota, and cannot include commas. The new quota applies to all users and groups
storing data in the directory. Directories having quotas are listed in the table at the bottom of the
dialog box. To modify quotas for a directory, check the box preceding that directory. You can
then adjust the quotas as needed. To delete quotas for a directory, check the box and click Delete.
Configuring quotas from the CLI
In the commands, use -M SOFT_MEGABYTES and -m HARD_MEGABYTES to specify soft and hard
limits for the megabytes of storage. Use -I SOFT_FILES and -i HARD_FILES to specify soft
and hard limits for the number of files allowed.
Create a user or group quota:
User quota:
ibrix_edquota -s -u "USER" -f FSNAME [-M SOFT_MEGABYTES] [-m HARD_MEGABYTES] [-I SOFT_FILES] [-i HARD_FILES]
Group quota:
ibrix_edquota -s -g "GROUP" -f FSNAME [-M SOFT_MEGABYTES] [-m HARD_MEGABYTES] [-I SOFT_FILES] [-i HARD_FILES]
Enclose the user or group name in single or double quotation marks.
Create a directory quota:
ibrix_edquota -s -d NAME -p PATH -f FSNAME -M SOFT_MEGABYTES -m HARD_MEGABYTES -I SOFT_FILES -i HARD_FILES
The -p PATH option specifies the pathname of the directory tree. If the pathname includes a space,
enclose the portion of the pathname that includes the space in single quotation marks, and enclose
the entire pathname in double quotation marks. For example:
-p "/fs48/data/'QUOTA 4'"
The -d NAME option specifies a unique name for the directory tree quota. The name cannot contain a comma (,) or colon (:) character.
NOTE: When you create a directory quota, the system also runs the ibrix_onlinequotacheck command in DTREE_CREATE mode. If you are creating multiple directory quotas, you can import the quotas from a file; the system then uses batch processing to create the quotas. If you add the quotas individually, you must wait for ibrix_onlinequotacheck to finish after creating each quota.
Using a quotas file
Quota limits can be imported into the cluster from the quotas file, and existing quotas can be
exported to the file. See “Format of the quotas file” (page 33) for the format of the file.
Importing quotas from a file
From the GUI, select the file system, select Quotas from the lower Navigator, and then click Import
on the Quota Summary panel.
From the CLI, use the following command to import quotas from a file, where PATH is the path to
the quotas file:
ibrix_edquota -t -p PATH -f FSNAME
See “Format of the quotas file” (page 33) for information about the file format.
Exporting quotas to a file
From the GUI, select the file system, select Quotas from the lower Navigator, and then click Export
on the Quota Summary panel.
From the CLI, use the following command to export the existing quotas information to a file, where
PATH is the pathname of the quotas file:
ibrix_edquota -e -p PATH -f FSNAME
Format of the quotas file
The quotas file contains a line for each user, group, or directory tree assigned a quota. When you
add quota entries, the lines must use one of the following formats. The “A” format specifies a user
or group ID. The “B” format specifies a user or group name, or a directory tree that has already
been assigned an identifier name. The “C” format specifies a directory tree, where the path exists,
but the identifier name for the directory tree will not be created until the quotas are imported.
A,{type},{block_hardlimit},{block_softlimit},{inode_hardlimit},{inode_softlimit},{id}
B,{type},{block_hardlimit},{block_softlimit},{inode_hardlimit},{inode_softlimit},"{name}"
C,{type},{block_hardlimit},{block_softlimit},{inode_hardlimit},{inode_softlimit},"{name}","{path}"
The fields in each line are:
{type}
0 for a user quota, 1 for a group quota, or 2 for a directory tree quota.
{block_hardlimit}
The maximum number of 1K blocks allowed for the user, group, or directory tree. (1 MB =
1024 blocks).
{block_softlimit}
The number of 1K blocks that, when reached, starts the countdown timer.
{inode_hardlimit}
The maximum number of files allowed for the user, group, or directory tree.
{inode_softlimit}
The number of files that, when reached, starts the countdown timer.
{id}
The UID for a user quota or the GID for a group quota.
{name}
A user name, group name, or directory tree identifier.
{path}
The full path to the directory tree. The path must already exist.
NOTE: When a quotas file is imported, the quotas are stored in a different, internal format.
When a quotas file is exported, it contains lines using the internal format. However, when adding
entries, you must use the A, B, or C format.
The following is an example of the syntax for a file to import a directory tree quota (2048=2 MB):
C,2,2048,1024,0,0,"ba","/fs1/a/aa"
C,2,2048,1024,0,0,"bb","/fs1/a/ab"
C,2,2048,1024,0,0,"bc","/fs1/a/ac"
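Lines in this format can also be generated programmatically. A minimal sketch, assuming the "C" line format documented above; the helper function is hypothetical, not a StoreAll tool. Note that block limits are in 1K blocks, so megabytes are multiplied by 1024.

```python
# Sketch: build a quotas-file line in the "C" format shown above (directory
# tree quota whose identifier name is created at import time). The helper
# function is hypothetical; only the line format comes from this guide.

def dtree_quota_line(name, path, block_hard_mb, block_soft_mb,
                     inode_hard=0, inode_soft=0):
    if "," in name or ":" in name:
        raise ValueError("directory tree quota name cannot contain ',' or ':'")
    # Block limits are expressed in 1K blocks: 1 MB = 1024 blocks.
    return (f'C,2,{block_hard_mb * 1024},{block_soft_mb * 1024},'
            f'{inode_hard},{inode_soft},"{name}","{path}"')

# Reproduces the first example line above (2 MB hard limit, 1 MB soft limit):
line = dtree_quota_line("ba", "/fs1/a/aa", block_hard_mb=2, block_soft_mb=1)
```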
Using online quota check
Online quota checks are used to rescan quota usage, initialize directory tree quotas, and remove
directory tree quotas. The available modes are:
• FILESYSTEM_SCAN mode. Use this mode in the following scenarios:
  ◦ You turned quotas off for a user, the user continued to store data in a file system, and you now want to turn quotas back on for this user.
  ◦ You are setting up quotas for the first time for a user who has previously stored data in a file system.
  ◦ You renamed a directory on which quotas are set.
  ◦ You moved a subdirectory into another parent directory that is outside of the directory having the directory tree quota.
• DTREE_CREATE mode. After setting quotas on a directory tree, use this mode to take into account the data already stored under the directory tree.
• DTREE_DELETE mode. After deleting a directory tree quota, use this mode to unset quota IDs on all files and folders in that directory.
CAUTION: When ibrix_onlinequotacheck is started in DTREE_DELETE mode, it removes
quotas for the specified directory. Be sure not to use this mode on directories that should retain
quota information.
To run an online quota check from the GUI, select the file system and then select Online quota
check from the lower Navigator.
On the Task Summary panel, select Start to open the Start Online quota check dialog box and
select the appropriate mode.
The Task Summary panel displays the progress of the scan. If necessary, select Stop to stop the
scan.
To run an online quota check in FILESYSTEM_SCAN mode from the CLI, use the following command:
ibrix_onlinequotacheck -s -S -f FSNAME
To run an online quota check in DTREE_CREATE mode, use this command:
ibrix_onlinequotacheck -s -c -f FSNAME -p PATH
To run an online quota check in DTREE_DELETE mode, use this command:
ibrix_onlinequotacheck -s -d -f FSNAME -p PATH
The command must be run from a file serving node that has the file system mounted.
Configuring email notifications for quota events
If you would like to be notified when certain quota events occur, you can set up email notification
for those events. On the GUI, select Email Configuration. On the Events Notified by Email panel,
select the appropriate events and specify the email addresses to be notified.
Deleting quotas
To delete quotas from the GUI, select the quota from the appropriate Quota Usage Limits panel
and then click Delete. To delete quotas from the CLI, use the following commands.
To delete quotas for a user, use the following command:
ibrix_edquota -D -u UID [-f FSNAME]
To delete quotas for a group, use the following command:
ibrix_edquota -D -g GID [-f FSNAME]
To delete the entry and quota limits for a directory tree quota, use the following command:
ibrix_edquota -D -d NAME -f FSNAME
The -d NAME option specifies the name of the directory tree quota.
Troubleshooting quotas
Recreated directory does not appear in directory tree quota
If you create a directory tree quota on a specific directory, delete the directory (for example, with rmdir or rm -rf), and then recreate it on the same path, the recreated directory does not count as part of the directory tree quota, even though the path is the same. Consequently, the ibrix_onlinequotacheck command does not report on the directory.
Moving directories
After moving a directory into or out of a directory containing quotas, run the
ibrix_onlinequotacheck command as follows:
• After moving a directory from a directory tree with quotas (the source) to a directory without quotas (the destination), take these steps:
  1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree to remove the usage information for the moved directory.
  2. Run ibrix_onlinequotacheck in DTREE_DELETE mode on the directory that was moved to delete residual quota information.
• After moving a directory from a directory without quotas (the source) to a directory tree with quotas (the destination), take this step:
  1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the destination directory tree to add the usage for the moved directory.
• After moving a directory from one directory tree with quotas (the source) to another directory tree with quotas (the destination), take these steps:
  1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree to remove the usage information for the moved directory.
  2. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the destination directory tree to add the usage for the moved directory.
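The cases above amount to a small decision table, which can be sketched as follows. Illustrative only; the actual ibrix_onlinequotacheck command must be run on a node that has the file system mounted, and the paths below are hypothetical.

```python
# Sketch of the decision table above: given whether the source and destination
# of a directory move carry directory tree quotas, list the
# ibrix_onlinequotacheck runs (mode, target) that the guide calls for.

def quota_check_steps(source_has_quota, dest_has_quota,
                      source_tree, dest_tree, moved_dir):
    steps = []
    if source_has_quota:
        # Re-scan the source tree so it drops usage for the moved directory.
        steps.append(("DTREE_CREATE", source_tree))
        if not dest_has_quota:
            # Clear residual quota IDs on the moved directory itself.
            steps.append(("DTREE_DELETE", moved_dir))
    if dest_has_quota:
        # Re-scan the destination tree so it picks up the moved directory.
        steps.append(("DTREE_CREATE", dest_tree))
    return steps

steps = quota_check_steps(True, False, "/fs1/src", "/fs1/dst", "/fs1/dst/moved")
```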
4 Maintaining file systems
This chapter describes how to extend a file system, rebalance segments, delete a file system or file
system component, and check or repair a file system. The chapter also includes file system
troubleshooting information.
Best practices for file system performance
It is important to monitor the space used in the segments making up the file system. If segments are
filled to 90% or greater and the segments are actively being used based on the file system allocation
policy, performance degradation is likely because of extra housekeeping tasks incurred in the file
system. Also, at this point, automatic write behavior changes can cause all new creates to go to
the segment with the most available capacity, causing a slowdown.
To maintain file system performance, follow these recommendations:
• If segments are approaching 85% full, either expand the file system with new segments or clean up the file system.
• If only a few segments are between 85% and 90% full and the other segments are much lower, run a rebalance task. However, if those few segments are at 90% or higher, it is best to adjust the file system allocation policy to exclude the full segments from being used, and then initiate a rebalance task to move data from the full segments onto other segments with more available space. When the rebalance task is complete and all segments are below the 85% threshold, you can reapply the original file system allocation policy.
The GUI displays the space used in each segment. Select the file system, and then select Segments
from the lower Navigator.
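The thresholds above can be expressed as a simple classification. A sketch, using the 85%/90% figures from this guide; the function and the segment names/values below are illustrative, not a StoreAll tool.

```python
# Sketch of the 85%/90% guidance above: classify each segment's utilization
# and suggest the recommended action. Segment names and percentages here are
# hypothetical example data.

def segment_action(used_percent):
    if used_percent >= 90:
        return "exclude from allocation policy, then rebalance"
    if used_percent >= 85:
        return "rebalance, or expand / clean up the file system"
    return "ok"

actions = {seg: segment_action(pct)
           for seg, pct in {"ilv1": 91, "ilv2": 86, "ilv3": 60}.items()}
```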
Viewing information about file systems and components
The Filesystems top panel on the GUI displays comprehensive information about a file system and
its components. This section describes how to view the same information from the command line.
Viewing physical volume information
The following command lists detailed information about physical volumes:
ibrix_pv -i
For each physical volume, the output includes the following information:
# ibrix_pv -i
PV_NAME  SIZE(MB)  VG_NAME  LUN_GROUP  LV_NAME  FILESYSTEM  SEGNUM  USED%  SEGOWNER  DEVICE ON SEGOWNER
-------  --------  -------  ---------  -------  ----------  ------  -----  --------  ------------------
d1       3,072     ivg1                ilv1     ifs1        1       99     vm3       /dev/sdb
d2       3,072     ivg2                ilv2     ifs1        2       99     vm2       /dev/sdc
The following command provides host-specific information about physical volumes:
ibrix_pv -l [-h HOSTLIST]
The following table lists the fields included in the -i and -l output.

Field               Description
PV_NAME             Physical volume name. Regular physical volume names begin with the letter d. The names of physical volumes that are part of a mirror device begin with the letter m. Both are numbered sequentially.
SIZE (MB)           Physical volume size, in MB.
VG_NAME             Name of the volume group created on this physical volume, if any.
LUN_GROUP           The LUN group, if any.
LV_NAME             Logical volume name.
FILESYSTEM          File system to which the logical volume belongs.
SEGNUM              Number of this segment (logical volume) in the file system.
USED%               Percentage of total space in the volume group allocated to logical volumes.
SEGOWNER            The owner of the segment.
DEVICE ON SEGOWNER  The device on which this physical volume is located.
RAID type           Not applicable for this release.
RAID host           Not applicable for this release.
RAID device         Not applicable for this release.
Network host        Not applicable for this release.
Network port        Not applicable for this release.
Viewing volume group information
To display summary information about all volume groups, use the ibrix_vg -l command:
ibrix_vg -l
The VG_FREE field indicates the amount of group space that is not allocated to any logical volume.
The VG_USED field reports the percentage of available space that is allocated to a logical volume.
To display detailed information about volume groups, use the ibrix_vg -i command. The -g
VGLIST option restricts the output to the specified volume groups.
ibrix_vg -i [-g VGLIST]
The following table lists the output fields for ibrix_vg -i.

Field      Description
VG_NAME    Volume group name.
SIZE(MB)   Volume group size, in MB.
FREE(MB)   Free (unallocated) space, in MB, available on this volume group.
USED%      Percentage of total space in the volume group allocated to logical volumes.
FS_NAME    File system to which this logical volume belongs.
PV_NAME    Name of the physical volume used to create this volume group.
SIZE (MB)  Size, in MB, of the physical volume used to create this volume group.
LV_NAME    Names of logical volumes created from this volume group.
LV_SIZE    Size, in MB, of each logical volume created from this volume group.
GEN        Number of times the structure of the file system has changed (for example, new segments were added).
SEGNUM     Number of this segment (logical volume) in the file system.
HOSTNAME   File serving node that owns this logical volume.
STATE      Operational state of the file serving node. See the administration guide for your system for a list of the states.
Viewing logical volume information
To view information about logical volumes, use the ibrix_lv -l command. The following table lists the output fields for this command.

Field     Description
LV_NAME   Logical volume name.
LV_SIZE   Logical volume size, in MB.
FS_NAME   File system to which this logical volume belongs.
SEG_NUM   Number of this segment (logical volume) in the file system.
VG_NAME   Name of the volume group created on this physical volume, if any.
OPTIONS   Linux lvcreate options that have been set on the volume group.
Viewing file system information
To view information about all file systems, use the ibrix_fs -l command. This command also
displays information about any file system snapshots.
The following table lists the output fields for ibrix_fs -l.

Field          Description
FS_NAME        File system name.
STATE          State of the file system (for example, Mounted).
CAPACITY (GB)  Total space available in the file system, in GB.
USED%          Amount of space used in the file system.
Files          Number of files that can be created in this file system.
FilesUsed%     Percentage of total storage used by files and directories.
GEN            Number of times the structure of the file system has changed (for example, new segments were added).
NUM_SEGS       Number of file system segments.
To view detailed information about file systems, use the ibrix_fs -i command. To view
information for all file systems, omit the -f FSLIST argument.
ibrix_fs -i [-f FSLIST]
The following table lists the file system output fields reported by ibrix_fs -i.
Field
Description
Total Segments
Number of segments.
STATE
State of the file system (for example, Mounted).
Mirrored?
Not applicable for this release.
Compatible?
Yes indicates that the file system is 32-bit compatible; the maximum number of segments
(maxsegs) allowed in the file system is also specified. No indicates a 64-bit file system.
Generation
Number of times the structure of the file system has changed (for example, new segments
were added).
FS_ID
File system ID for NFS access.
FS_NUM
Unique StoreAll software internal file system identifier.
EXPORT_CONTROL_ENABLED Yes if enabled; No if not.
40
QUOTA_ENABLED
Yes if enabled; No if not.
RETENTION
If data retention is enabled, the retention policy is displayed.
DEFAULT_BLOCKSIZE
Default block size, in KB.
CAPACITY
Capacity of the file system.
FREE
Amount of free space on the file system.
AVAIL
Space available for user files.
USED PERCENT
Percentage of total storage occupied by user files.
FILES
Number of files that can be created in this file system.
FFREE
Number of unused file inodes available in this file system.
Prealloc
Number of KB a file system preallocates to a file; default: 1,024 KB.
Readahead
Number of KB that StoreAll software will pre-fetch; default: 512 KB.
NFS Readahead
Number of KB that StoreAll software pre-fetches under NFS; default: 256 KB.
Default policy
Allocation policy assigned on this file system. Defined policies are: ROUNDROBIN,
STICKY, DIRECTORY, LOCAL, RANDOM, and NONE. See “File allocation policies”
(page 279) for information on these policies.
Default start segment
The first segment to which an allocation policy is applied in a file system. If a segment
is not specified, allocation starts on the segment with the most storage space available.
File replicas
NA.
Dir replicas
NA.
Mount Options
Possible root segment inodes. This value is used internally.
Root Segment Hint
Current root segment number, if known. This value is used internally.
Root Segment Replica(s) Hint
Possible segment numbers for root segment replicas. This value is used internally.
Snap FileSystem Policy
Snapshot strategy, if defined.
The following table lists the per-segment output fields reported by ibrix_fs -i.
Field
Description
SEGMENT
Segment number.
OWNER
The host that owns the segment.
LV_NAME
Logical volume name.
STATE
The current state of the segment (for example, OK or UsageStale).
BLOCK_SIZE
Default block size, in KB.
CAPACITY (GB)
Size of the segment, in GB.
FREE (GB)
Free space on this segment, in GB.
AVAIL (GB)
Space available for user files, in GB.
FILES
Inodes available on this segment.
FFREE
Free inodes available on this segment.
USED%
Percentage of total storage occupied by user files.
BACKUP
Backup host name.
TYPE
Segment type. MIXED means the segment can contain both files and directories.
TIER
Tier to which the segment was assigned.
LAST_REPORTED
Last time the segment state was reported.
HOST_NAME
Host on which the file system is mounted.
MOUNTPOINT
Host mountpoint.
PERMISSION
File system access privileges: RO or RW.
Root_RO
Specifies whether the root user is limited to read-only access, regardless of the access setting.
Lost+found directory
When browsing the contents of StoreAll software file systems, you will see a directory named
lost+found. This directory is required for file system integrity and must not be deleted. The
lost+found directory exists only at the top-level directory of a file system, which is also the
mountpoint. In addition, several other directories that you can see at the top level (mountpoint)
of a file system are for internal use only and should not be deleted or edited. They are:
• lost+found
• .archiving
• .audit
• .webdav
NOTE: There are a few exceptions in the .archiving directory. Some files in certain subdirectories
of .archiving are created for user consumption (they are described in various places in this user
guide; examples include validation summary outputs such as 1-0.sum and audit log reports).
Those specific files can be deleted if desired, but other files should not be deleted.
Viewing disk space information from a StoreAll Linux client
Because file systems are distributed among segments on many file serving nodes, disk space utilities
such as df must be provided with collated disk space information about those nodes. The Fusion
Manager collects this information periodically and collates it for df.
StoreAll software includes a disk space utility, ibrix_df, that enables Linux StoreAll clients to
obtain utilization data for a file system. Execute the following command on any StoreAll Linux
client:
ibrix_df
The following table lists the output fields for ibrix_df.
Field
Description
Name
File system name.
CAPACITY
Number of blocks in the file system.
FREE
Number of unused blocks of storage.
AVAIL
Number of blocks available for user files.
USED PERCENT
Percentage of total storage occupied by user files.
FILES
Number of files that can be created in the file system.
FFREE
Number of unused file inodes in the file system.
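For scripted monitoring, the `ibrix_df` output can be post-processed with standard text tools. The sketch below flags file systems whose usage exceeds a threshold. The sample lines in the heredoc are hypothetical, and the column position of USED PERCENT is an assumption that should be confirmed against the actual output on your system.

```shell
# Flag file systems above a usage threshold by parsing ibrix_df-style
# output. Column layout is assumed (USED PERCENT in column 5); verify
# against your system before relying on it.
check_usage() {
  # $1 = threshold percentage; reads ibrix_df output on stdin
  awk -v limit="$1" 'NR > 1 && $5 + 0 > limit { print $1 " is " $5 "% full" }'
}

# Hypothetical sample; in practice, pipe the real command:
#   ibrix_df | check_usage 85
check_usage 85 <<'EOF'
Name CAPACITY FREE AVAIL USED_PERCENT FILES FFREE
ifs1 52428800 4194304 4063232 92 6553600 6552536
ifs2 52428800 41943040 41000000 20 6553600 6400000
EOF
```

With the sample data, only ifs1 is reported, since its usage (92%) exceeds the 85% threshold.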
Extending a file system
You can extend a file system from the GUI or the CLI.
NOTE: If a continuous remote replication (CRR) task is running on a file system, the file system
cannot be extended until the CRR task is complete.
If the file system uses tiers, verify that no tiering task is running before executing the file system
expansion commands. If a tiering task is running, the expansion takes priority and the tiering task
is terminated.
Select the file system on the Filesystems top panel, and then select Extend on the Summary bottom
panel. The Extend Filesystem dialog box allows you to select the storage to be added to the file
system. If data tiering is used on the file system, you can also enter the name of the appropriate
tier.
On the CLI, use the ibrix_fs command to extend a file system. Segments are added to the file
serving nodes in a round-robin manner. If tiering rules are defined for the file system, the -t option
is required. Avoid expanding a file system while a tiering job is running. The expansion takes
priority and the tiering job is terminated.
Extend a file system with the logical volumes (segments) specified in LVLIST:
ibrix_fs -e -f FSNAME -s LVLIST [-t TIERNAME]
Extend a file system with segments created from the physical volumes in PVLIST:
ibrix_fs -e -f FSNAME -p PVLIST [-t TIERNAME]
Extend a file system with specific logical volumes on specific file serving nodes:
ibrix_fs -e -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2...
Extend a file system with the listed tiered segment/owner pairs:
ibrix_fs -e -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2,... -t TIERNAME
Rebalancing segments in a file system
Segment rebalancing involves redistributing files among segments in a file system to balance
segment utilization and server workload. For example, after adding new segments to a file system,
you can rebalance all segments to redistribute files evenly among the segments. Usually, you will
want to rebalance all segments, possibly as a cron job. In special situations, you might want to
rebalance specific segments. Segments marked as bad (that is, segments that cannot be activated
for some reason) are not candidates for rebalancing.
A file system must be mounted when you rebalance its segments.
If necessary, you can evacuate segments (or logical volumes) located on storage that will be
removed from the cluster, moving the data on the segments to other segments in the file system.
You can evacuate a segment with the GUI or the ibrix_evacuate command. For more
information, see the HP StoreAll Storage CLI Reference Guide or the administrator guide for your
system.
IMPORTANT: Rebalancing is a storage-intensive and file-system-intensive process that, in some
circumstances, can take days to complete. Run rebalance tasks at a time when clients are not
generating significant load.
How rebalancing works
During a rebalance operation on a file system, files are moved from source segments to destination
segments. StoreAll software calculates the average aggregate utilization of the selected source
segments, and then moves files from sources to destinations to bring each candidate source segment
as close as possible to the calculated utilization threshold. The final absolute percent usage in the
segments depends on the average file size for the target file system. If you do not specify any
sources or destinations for a rebalance task, candidate segments are sorted into sources and
destinations and then rebalanced as evenly as possible.
If you specify sources, all other candidate segments in the file system are tagged as destinations,
and vice versa if you specify destinations. Following the general rule, StoreAll software will calculate
the utilization threshold from the sources, and then bring the sources as close as possible to this
value by evenly distributing their excess files among all destinations. If you specify sources, only
those segments are rebalanced, and the overflow is distributed among all remaining candidate
segments. If you specify destinations, all segments except the specified destinations are rebalanced,
and the overflow is distributed only to the destinations. If you specify both sources and destinations,
only the specified sources are rebalanced, and the overflow is distributed only among the specified
destinations.
If there is not enough aggregate room in destination segments to hold the files that must be moved
from source segments in order to balance the sources, StoreAll software issues an error message
and does not move any files. The more restricted the number of destinations, the higher the likelihood
of this error.
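The threshold calculation described above can be illustrated with ordinary shell tools. The sketch below computes the average aggregate utilization of a set of source segments, which is the value the rebalancer tries to bring each source down to. The segment sizes are invented for illustration.

```shell
# Average aggregate utilization across source segments:
#   threshold = (total used across sources) / (total capacity across sources)
# Input lines are "used_GB capacity_GB", one segment per line
# (hypothetical values, not real ibrix output).
aggregate_threshold() {
  awk '{ used += $1; cap += $2 }
       END { printf "%.1f\n", 100 * used / cap }'
}

aggregate_threshold <<'EOF'
45 50
10 50
35 100
EOF
```

Here the sources hold 90 GB of 200 GB total, so the rebalancer would aim to bring each source segment as close as possible to 45% utilization.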
When rebalancing segments, note the following:
• To move files out of certain overused segments, specify source segments.
• To move files into certain underused segments, specify destination segments.
• To move files out of certain segments and place them in certain destinations, specify both
source and destination segments.
Rebalancing segments on the GUI
Select the file system on the GUI and then select Segments from the lower Navigator. The Segments
panel shows information for all segments in the file system.
Click Rebalance/Evacuate on the Segments panel to open the Segment Rebalance and Evacuation
Wizard. The wizard can rebalance all files in the selected tier or in the file system, or you can
select the segments for the operation. Choose the appropriate rebalance option on the Select Mode
dialog box.
The Rebalance All dialog box allows you to rebalance all segments in the file system or in the
selected tier.
The Rebalance Advanced dialog box allows you to select the source and destination segments for
the rebalance operation.
Rebalancing segments from the CLI
To rebalance all segments, use the following command. Include the -a option to run the rebalance
operation in analytical mode.
ibrix_rebalance -r -f FSNAME
To rebalance by specifying specific source segments, use the following command:
ibrix_rebalance -r -f FSNAME [[-s SRCSEGMENTLIST] [-S SRCLVLIST]]
For example, to rebalance segments 2 and 3 only and to specify them by segment name:
ibrix_rebalance -r -f ifs1 -s 2,3
To rebalance segments 1 and 2 only and to specify them by their logical volume names:
ibrix_rebalance -r -f ifs1 -S ilv1,ilv2
To rebalance by specifying specific destination segments, use the following command:
ibrix_rebalance -r -f FSNAME [[-d DESTSEGMENTLIST] [-D DESTLVLIST]]
For example, to rebalance segments 3 and 4 only and to specify them by segment name:
ibrix_rebalance -r -f ifs1 -d 3,4
To rebalance segments 3 and 4 only and to specify them by their logical volume names:
ibrix_rebalance -r -f ifs1 -D ilv3,ilv4
Tracking the progress of a rebalance task
You can use the GUI or CLI to track the progress of a rebalance task. As a rebalance task
progresses, usage approaches an average value across segments, excluding bad segments that
are not candidates for rebalancing or segments containing files that are in heavy use during the
operation.
To track the progress of a rebalance task on the GUI, select the file system, and then select
Rebalancer from the lower Navigator. The Task Summary displays details about the rebalance
task. Also examine Used (%) on the Segments panel for the file system.
To track rebalance job progress from the CLI, use the ibrix_fs -i command. The output lists
detailed information about the file system. The USED% field shows usage per segment.
Viewing the status of rebalance tasks
Use the following commands to view status for jobs on all file systems or only on the file systems
specified in FSLIST:
ibrix_rebalance -l [-f FSLIST]
ibrix_rebalance -i [-f FSLIST]
The first command reports summary information. The second command lists jobs by task ID and
file system and indicates whether the job is running or stopped. Jobs that are in the analysis
(Coordinator) phase are listed separately from those in the implementation (Worker) phase.
Stopping rebalance tasks
You can stop running or stalled rebalance tasks. If Fusion Manager cannot stop the task for some
reason, you can force the task to stop. Stopping a task poses no risks for the file system. The system
completes any file migrations that are in process when you issue the stop command. Depending
on when you stop a task, segments might contain more or fewer files than before the operation
started.
To stop a rebalance task on the GUI, select the file system, and then select Rebalancer from the
lower Navigator. Click Stop on the Task Summary to stop the task.
To stop a task from the CLI, first execute ibrix_rebalance -i to obtain the TASKID, and then
execute the following command:
ibrix_rebalance -k -t TASKID [-F]
To force the task to stop, include the -F option.
Deleting file systems and file system components
Deleting a file system
Before deleting a file system, unmount it from all file serving nodes and clients. (See “Unmounting
a file system” (page 25).) Also delete any exports.
CAUTION: When a file system is deleted from the configuration database, its data becomes
inaccessible. To avoid unintended service interruptions, be sure you have specified the correct file
system.
To delete a file system, use the following command:
ibrix_fs -d [-R] -f FSLIST
For example, to delete file systems ifs1 and ifs2:
ibrix_fs -d -f ifs1,ifs2
If data retention is enabled on the file system, include the -R option in the command. For example:
ibrix_fs -d -R -f ifs2
Deleting segments, volume groups, and physical volumes
When deleting segments, volume groups, or physical volumes, you should be aware of the following:
• A segment cannot be deleted until the file system to which it belongs is deleted.
• A volume group cannot be deleted until all segments that were created on it are deleted.
• A physical volume cannot be deleted until all volume groups created on it are deleted.
If you delete physical volumes but do not remove the physical storage from the network, the volumes
might be rediscovered when you next perform a discovery scan on the cluster.
To delete segments:
ibrix_lv -d -s LVLIST
For example, to delete segments ilv1 and ilv2:
ibrix_lv -d -s ilv1,ilv2
To delete volume groups:
ibrix_vg -d -g VGLIST
For example, to delete volume groups ivg1 and ivg2:
ibrix_vg -d -g ivg1,ivg2
To delete physical volumes:
ibrix_pv -d -p PVLIST [-h HOSTLIST]
For example, to delete physical volumes d1, d2, and d3:
ibrix_pv -d -p d[1-3]
Deleting file serving nodes and StoreAll clients
Before deleting a file serving node, unmount all file systems from it and migrate any segments that
it owns to a different server. Ensure that the file serving node is not serving as a failover standby
and is not involved in network interface monitoring. To delete a file serving node, use the following
command:
ibrix_server -d -h HOSTLIST
For example, to delete file serving nodes s1.hp.com and s2.hp.com:
ibrix_server -d -h s1.hp.com,s2.hp.com
To delete StoreAll clients, use the following command:
ibrix_client -d -h HOSTLIST
Checking and repairing file systems
The ibrix_fsck command analyzes inconsistencies in a file system.
CAUTION: Do not run ibrix_fsck in corrective mode without the direct guidance of HP Support.
If run improperly, the command can cause data loss and file system damage.
CAUTION: Do not run e2fsck (or any other off-the-shelf fsck program) on any part of a file
system. Doing this can damage the file system.
The ibrix_fsck command can detect and repair file system inconsistencies. File system
inconsistencies can occur for many reasons, including hardware failure, power failure, switching
off the system without proper shutdown, and failed migration.
The command runs in four phases and has two running modes: analytical and corrective. You must
run the phases in order and you must run all of them:
• Phase 0 checks host connectivity and the consistency of segment byte blocks and repairs them
in corrective mode.
• Phase 1 checks segments and repairs them in corrective mode. Results are stored locally.
• Phase 2 checks the file system and repairs it in corrective mode. Results are stored locally.
• Phase 3 moves files from lost+found on each segment to the global lost+found directory
on the root segment of the file system.
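Because the phases must be run in order and all of them must be run, a small wrapper that emits the analytic-mode commands for review can help avoid mistakes. The sketch below only prints the commands rather than executing them; the file system name ifs1 is an example.

```shell
# Print the four ibrix_fsck analytic-mode invocations in the required
# phase order (0 through 3). This is a dry run: it echoes the commands
# for review instead of executing them. Remember that phases 0 and 1
# need the file system unmounted, and phases 2 and 3 need it mounted.
print_fsck_phases() {
  fsname=$1
  for phase in 0 1 2 3; do
    echo "ibrix_fsck -p $phase -f $fsname"
  done
}

print_fsck_phases ifs1
```

This prints the four commands from phase 0 through phase 3; an administrator would run each one in turn, checking results between phases under HP Support guidance.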
If a file system shows evidence of inconsistencies, contact HP Support. A representative will ask
you to run ibrix_fsck in analytical mode and, based on the output, will recommend a course
of action and assist in running the command in corrective mode. HP strongly recommends that you
use corrective mode only with the direct guidance of HP Support. Corrective mode is complex and
difficult to run safely. Using it improperly can damage both data and the file system. Analytical
mode, by contrast, is completely safe.
NOTE: During an ibrix_fsck run, an INFSCK flag is set on the file system to protect it. If an
error occurs during the job, you must explicitly clear the INFSCK flag (see “Clearing the INFSCK
flag on a file system” (page 49)), or you will be unable to mount the file system.
Analyzing the integrity of a file system on all segments
Observe the following requirements when executing ibrix_fsck:
• Unmount the file system for phases 0 and 1, and mount the file system for phases 2 and 3.
• Turn off automated failover by executing ibrix_host -m -U -h SERVERNAME.
• Unmount all NFS clients and stop NFS on the servers.
Use the following procedure to analyze file system integrity:
Run phase 0 in analytic mode:
ibrix_fsck -p 0 -f FSNAME [-s LVNAME] [-c]
The command can be run on the specified file system or optionally only on the specified segment
LVNAME.
Run phase 1 in analytic mode:
ibrix_fsck -p 1 -f FSNAME [-s LVNAME] [-c] [-B BLOCKSIZE] [-b
ALTSUPERBLOCK]
The command can be run on the file system FSNAME or optionally only on segment LVNAME. This
phase can be run with a specified block size and an alternate superblock number. For example:
ibrix_fsck -p 1 -f ifs1 -B 4096 -b 12250
NOTE: If phase 1 is run in analytic mode on a mounted file system, false errors can be reported.
Run phase 2:
ibrix_fsck -p 2 -f FSNAME [-s LVNAME] [-c] [-o "options"]
The command can be run on the specified file system or optionally only on segment LVNAME. Use
-o to specify any options.
Run phase 3:
ibrix_fsck -p 3 -f FSNAME [-c]
Clearing the INFSCK flag on a file system
To clear the INFSCK flag, use the following command:
ibrix_fsck -f FSNAME -C
Troubleshooting file systems
Segment of a file system is accidentally deleted
When a segment of a file system is accidentally deleted without first being evacuated, all files on
that segment are lost. This includes the Express Query metadata database files stored on that
segment, even though they are unrelated to the other files on the segment. You must recreate the
metadata database.
To recreate the metadata database:
1. Disable the Express Query and auditing feature for the file system, including the removal of
any StoreAll REST API shares. Disable the auditing feature before you disable the Express
Query feature.
a. To disable auditing, enter the following command:
ibrix_fs -A [-f FSNAME] -oa audit_mode=off
b. Remove all StoreAll REST API shares created in the file system by entering the following
command:
ibrix_httpshare -d -f <fs_name>
c. To disable the Express Query settings on a file system, enter the following command:
ibrix_fs -T -D -f FSNAME
2. To re-enable the Express Query settings on the file system, enter the following command:
ibrix_fs -T -E -f FSNAME
3. (Optional) To re-enable auditing, enter the following command:
ibrix_fs -A [-f FSNAME] -oa audit_mode=on
4. To recreate your REST API HTTP shares, enter the ibrix_httpshare -a command with the
appropriate parameters. See “Using HTTP” (page 114).
Express Query re-synchronizes the file system and the database by using the restored database
information. This process might take some time.
5. Wait for the metadata resync process to finish. Enter the following command to monitor the
resync process for a file system:
ibrix_archiving -l
The status should be OK for the file system before you proceed. Refer to the ibrix_archiving
section in the HP StoreAll Storage CLI Reference Guide for information about the other states.
6. Import your previously exported custom metadata and audit logs according to “Importing
metadata to a file system” (page 220).
ibrix_pv -a discovers too many or too few devices
This situation occurs when file serving nodes see the same devices multiple times. To prevent this,
modify the LVM2 filter in /etc/lvm/lvm.conf so that it matches only the devices used by StoreAll
software. This change also affects the output of lvmdiskscan.
By default, the following filter finds all devices:
filter = [ "a/.*/" ]
The following filter finds all sd devices:
filter = [ "a|^/dev/sd.*|", "r|^.*|" ]
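The effect of the sd-only filter can be checked outside LVM, because the a|…| and r|…| entries are ordinary regular expressions applied in order: accept anything matching ^/dev/sd.*, then reject everything else. The sketch below classifies a few sample device paths the same way; the device names are examples, not output from a real system.

```shell
# Emulate the sd-only LVM filter shown above:
#   accept paths matching ^/dev/sd.* , reject all other paths.
lvm_filter_sd() {
  while read -r dev; do
    case "$dev" in
      /dev/sd*) echo "accept $dev" ;;   # matched by "a|^/dev/sd.*|"
      *)        echo "reject $dev" ;;   # caught by "r|^.*|"
    esac
  done
}

# Hypothetical device paths for illustration:
printf '%s\n' /dev/sda /dev/sdb1 /dev/mapper/mpatha | lvm_filter_sd
```

The multipath device is rejected while both sd devices are accepted, which is exactly why this filter prevents the same storage from being discovered through multiple paths.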
Contact HP Support if you need assistance.
Cannot mount on a StoreAll client
Verify the following:
• The file system is mounted and functioning on the file serving nodes.
• The mountpoint exists on the StoreAll client. If not, create the mountpoint locally on the client.
• Software management services have been started on the StoreAll client (see “Starting and
stopping processes” in the administrator guide for your system).
NFS clients cannot access an exported file system
An exported file system has been unmounted from one or more file serving nodes, causing StoreAll
software to automatically disable NFS on those servers. Fix the issue causing the unmount and
then remount the file system.
User quota usage data is not being updated
Restart the quota monitor service to force a read of all quota usage data and update usage counts
to the file serving nodes in your cluster. Use the following command:
ibrix_qm restart
File system alert is displayed after a segment is evacuated
When a segment is successfully evacuated, a segment unavailable alert is displayed in the GUI
and attempts to mount the file system will fail. There are several options at this point:
• Mark the evacuated segment as bad (retired) using the following command. The file system
state changes to okay and the file system can then be mounted. However, marking the segment
as bad cannot be reversed.
ibrix_fs -B -f FSNAME {-n RETIRED_SEGNUMLIST | -s RETIRED_LVLIST}
• Keep the evacuated segment in the file system. Take one of the following steps to enable
mounting the file system:
◦ Use the force option (-X) when mounting the file system:
ibrix_mount -f myFilesystem -m /myMountpoint -X
◦ Clear the “unavailable segment” flag on the file system with the ibrix_fsck command,
and then mount the file system normally:
ibrix_fsck -f FSNAME -C -s LVNAME_OF_EVACUATED_SEG
SegmentNotAvailable is reported
When IAS heartbeats to segments (a disk heartbeat every 15 seconds for each segment) or writes
to a segment do not succeed, the segment status may change to SegmentNotAvailable on the
Management Console and an alert message might be generated. If there is no underlying hardware
storage event related to the affected segments, and no storage firmware update failed while the
file system was mounted, complete the following steps to resolve the issue:
NOTE: If a storage hardware event was generated and the reason for segmentNotAvailable
is due to a file system journal abort on write error or a storage controller failure resulting in the
segment going unavailable, HP recommends that you contact HP Support for analysis of the segment
health and to run the ibrix_fsck command to validate data integrity. If you clear the unavailable
segment status without knowing why the segment became unavailable, be aware that your data
could be at further risk of damage or corruption.
1. Identify the file serving node that owns the segment. This information is reported on the
Filesystem Segments panel on the Management Console.
2. Run phase 0 and phase 1 of the ibrix_fsck command to verify the issue with the segment.
You can run the command on the file system or specify the segment name using the -s LVNAME
parameter:
ibrix_fsck -p 0 -f FSNAME [-s LVNAME] [-c]
ibrix_fsck -p 1 -f FSNAME [-s LVNAME] [-c]
3. If you have set Fusion Manager to fail over when a segment becomes unavailable, failover
occurs automatically. For more information, see the ibrix_fm_tune command in the HP
StoreAll Command Line Reference Guide. You can manually fail over the file serving node;
see the administration guide for your system for more information about this procedure.
4. If you have set Fusion Manager to make the segment available after failover, the segment
automatically becomes available after failover. For more information, see the ibrix_fm_tune
command in the HP StoreAll Command Line Reference Guide. To manually make the segment
available:
a. Enter the following command to clear the in_fsck flag:
ibrix_fsck -f FSNAME -C
b. Enter the following command to clear the unavailable flag on the specified segment and
file system:
ibrix_fsck -f FSNAME -C -s LVNAME
c. When the following message is displayed, enter yes to continue:
Warning: Clearing UNAVAILABLE/RO Flag on a Segment can result in
further damage to your filesystem. Type 'yes' to continue :
d. If the segment is still not available, contact HP Support.
5. Fail over the file serving node to its standby.
6. Reboot the file serving node.
7. When the file serving node is up, verify that the segment, or LUN, is available.
If the segment is still not available, contact HP Support.
SegmentRejected is reported
This alert is generated by a client call for a segment that is no longer accessible through the
segment owner or file serving node specified in the client's segment map. The alert is logged to
the StoreAll.log and messages files. It usually indicates an out-of-date or stale segment
map for the affected file system and is caused by a network condition. Other possible causes are
rebooting the node, unmounting the file system on the node, segment migrations, and, in a failover
scenario, a stale StoreAll IAD, an unresponsive kernel, or a network RPC condition.
To troubleshoot this alert, check network connectivity among the nodes, ensuring that the network
is optimal and any recent network conditions have been resolved. From the file system perspective,
verify segment maps by comparing the file system generation numbers and the ownership for those
segments being rejected by the clients.
Use the following commands to compare the file system generation number on the local file serving
nodes and the clients logging the error.
/usr/local/ibrix/bin/rtool enumseg <FSNAME> <SEGNUMBER>
For example:
rtool enumseg ibfs1 3
segnum=3 of 4 ----------
fsid ........................... 7b3ea891-5518-4a5e-9b08-daf9f9f4c027
fsname ......................... ibfs1
device_name .................... /dev/ivg3/ilv3
host_id ........................ 1e9e3a6e-74e4-4509-a843-c0abb6fec3a6
host_name ...................... ib50-87 <-- Verify owner of segment
ref_counter .................... 1038
state_flags .................... SEGMENT_LOCAL SEGMENT_PREFERED SEGMENT_DHB SEGMENT_ORPHAN_LIST_CREATED (0x00100061)
write_WM ....................... 99129 4K-blocks (387 Mbytes)
create_WM ...................... 793033 4K-blocks (3097 Mbytes)
spillover_WM ................... 892162 4K-blocks (3485 Mbytes)
generation ..................... 26
quota .......................... usr,grp,dir
f_blocks ....................... 0011895510 4K-blocks (==0047582040 1K-blocks, 46466 M)
f_bfree ........................ 0011785098 4K-blocks (==0047140392 1K-blocks, 46035 M)
f_bused ........................ 0000110412 4K-blocks (==0000441648 1K-blocks, 431 M)
f_bavail ....................... 0011753237 4K-blocks (==0047012948 1K-blocks, 45911 M)
f_files ........................ 6553600
f_ffree ........................ 6552536
used files (f_files - f_ffree).. 1064
Segment statistics for 690812.89 seconds :
n_reads=0, kb_read=0, n_writes=0, kb_written=0, n_creates=2, n_removes=0
Also run the following command:
/usr/local/ibrix/bin/rtool enumfs <FSNAME>
For example:
rtool enumfs ibfs1
1:---------------
fsname ......................... ibfs1
fsid ........................... 7b3ea891-5518-4a5e-9b08-daf9f9f4c027
fsnum .......................... 1
fs_flags........................ operational
total_number_of_segments ....... 4
mounted ........................ TRUE
ref_counter .................... 6
generation ..................... 26 <–– FS generation number for comparison
alloc_policy.................... RANDOM
dir_alloc_policy................ NONE
cur_segment..................... 0
sup_ap_on....................... NONE
local_segments ................. 3
quota .......................... usr,grp,dir
f_blocks ....................... 0047582040 4K-blocks (==0190328160 1K-blocks)
f_bfree ........................ 0044000311 4K-blocks (==0176001244 1K-blocks)
f_bused ........................ 0003581729 4K-blocks (==0014326916 1K-blocks)
f_bavail ....................... 0043872867 4K-blocks (==0175491468 1K-blocks)
f_files ........................ 26214400
f_ffree ........................ 26212193
used files (f_files - f_free)... 2207
FS statistics for 0.0 seconds :
n_reads=0, kb_read=0, n_writes=0, kb_written=0, n_creates=0, n_removes=0
Use the output to determine whether the FS generation number is in sync and whether the file
serving nodes agree on the ownership of the rejected segments. In the rtool enumseg output,
check the state_flags field for SEGMENT_IN_MIGRATION, which indicates that the segment
is stuck in migration because of a failover.
Typically, if the segment has a healthy state flag on the file serving node that owns the segment
and all file serving nodes agree on the owner of the segment, this is not a file system or file serving
node issue. If a state flag is stale or indicates that a segment is in migration, call HP Support for
a recovery procedure.
Otherwise, the alert indicates a file system generation mismatch. Take the following steps to resolve
this situation:
1. From the active Fusion Manager, run the following command to propagate a new file system
segment map throughout the cluster. This step takes a few minutes.
ibrix_dbck -I -f <FSNAME>
2. If problems persist, try restarting the client's IAD:
/usr/local/ibrix/init/ibrix_iad restart
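Extracting just the generation number makes the comparison between nodes straightforward. The sketch below pulls the generation field from rtool output captured to a file or pipe; run it against the captures from each file serving node and client, then compare the values. The sample text in the heredoc mirrors the rtool enumfs output shown above.

```shell
# Extract the file system generation number from captured
# "rtool enumfs" or "rtool enumseg" output. In practice:
#   /usr/local/ibrix/bin/rtool enumfs ibfs1 | get_generation
get_generation() {
  awk '$1 == "generation" { print $NF; exit }'
}

# Sample lines mimicking the rtool enumfs output above:
get_generation <<'EOF'
fsname ......................... ibfs1
generation ..................... 26
alloc_policy.................... RANDOM
EOF
```

If the number printed on a client differs from the number on the file serving nodes, the client is working from a stale segment map, which matches the generation-mismatch case described above.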
ibrix_fs -c failed with "Bad magic number in super-block"
If a file system creation command fails with an error such as the following, the command may have
failed to preformat the LUN.
# ibrix_fs -c -f fs1 -s seg1_4
Calculated owner for seg1_4 : glory22
failed command (/usr/local/ibrix/bin/tuneibfs -F
3e2a9657-fc8b-46b2-96b0-1dc27e8002f3 -H glory2 -G 1 -N 1 -S fs1 -R 1
/dev/vg1_4/seg1_4 2>&1) status (1) output: (/usr/local/ibrix/bin/tuneibfs: Bad
magic number in super-block while trying to open /dev/vg1_4/seg1_4 Couldn't
find valid filesystem superblock. /usr/local/ibrix/bin/tuneibfs 5.3.461 Rpc
Version:5 Rpc Ports base=IBRIX_PORTS_BASE (Using EXT2FS Library version 1.32.1)
[ipfs1_open] reading superblock from blk 1 )
Iad error on host glory2
To work around the problem, recreate the segment on the failing LUN. To identify the LUN associated
with the failure, run a command such as the following on the first server in the system:
# ibrix_pv -l -h glory2
PV_NAME  SIZE(MB)  VG_NAME  DEVICE           RAIDTYPE  RAIDHOST  RAIDDEVICE
-------  --------  -------  ---------------  --------  --------  ----------
d1       131070    vg1_1    /dev/mxso/dev4a
d2       131070    vg1_2    /dev/mxso/dev5a
d3       131070    vg1_3    /dev/mxso/dev6a
d5       23551     vg1_5    /dev/mxso/dev8a
d6       131070    vg1_4    /dev/mxso/dev7a
The Device column identifies the LUN number. In this example, the volume group vg1_4 is created
from LUN 7. Recreate the segment and then run the file system creation command again.
5 Using NFS
To allow NFS clients to access a StoreAll file system, the file system must be exported. You can
export a file system using the GUI or CLI. By default, StoreAll file systems and directories follow
POSIX semantics and file names are case-sensitive for Linux/NFS users. If you prefer to use Windows
semantics for Linux/NFS users, you can make a file system or subdirectory case-insensitive.
NOTE: The latest version of NFS supported by the current version of the StoreAll software is NFS
version 3.
Exporting a file system
Exporting a file system makes local directories available for NFS clients to mount. The Fusion
Manager manages the table of exported file systems and distributes the information to the /etc/
exports files on the file serving nodes. All entries are automatically re-exported to NFS clients
and to the file serving node standbys unless you specify otherwise.
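As an illustration of that distribution, a share exported with read-only access might appear on each file serving node as a standard Linux exports entry similar to the following. This is a sketch only; the actual entry, including the automatically generated FSID value, is written by the Fusion Manager.

```
# /etc/exports on a file serving node (illustrative)
/usr/src *.hp.com(ro,sync,fsid=<generated>)
```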
A file system must be mounted before it can be exported.
NOTE: When configuring options for an NFS export, do not use the no_subtree_check
option. This option is not compatible with the StoreAll software.
Export a file system using the GUI
Use the Add a New File Share Wizard to export a file system. Select File Shares from the Navigator,
and click Add on the File Shares panel to open the wizard. (You can also open the wizard by first
selecting a file system on the Filesystems panel, selecting NFS Exports from the lower Navigator,
and then clicking Add on the NFS Exports panel.)
On the File Share window, select the file system to be exported, select NFS as the file sharing
protocol, and enter the export path.
Use the Settings window to specify the clients allowed to access the share. Also select the permission
and privilege levels for the clients, and specify whether the export should be available from a
backup server.
The Advanced Settings window allows you to set NFS options on the share.
On the Host Servers window, select the servers that will host the NFS share. By default, the share
is hosted by all servers that have mounted the file system.
The Summary window shows the configuration of the share. You can go back and revise the
configuration if necessary. When you click Finish, the export is created and appears on the File
Shares panel.
Export a file system using the CLI
To export a file system from the CLI, use the ibrix_exportfs command:
ibrix_exportfs -f FSNAME -h HOSTNAME -p CLIENT1:PATHNAME1,CLIENT2:PATHNAME2,..
[-o "OPTIONS"] [-b]
The options are as follows:
-f FSNAME
    The file system to be exported.
-h HOSTNAME
    The file serving node containing the file system to be exported.
-p CLIENT1:PATHNAME1,CLIENT2:PATHNAME2,..
    The clients that will access the file system. A client can be a single file serving node, file
    serving nodes represented by a wildcard, or the world (:/PATHNAME). Note that world access
    omits the client specification but not the colon (for example, :/usr/src).
-o "OPTIONS"
    The default Linux exportfs mount options are used unless specific options are provided. The
    standard NFS export options are supported. Options must be enclosed in double quotation
    marks (for example, -o "ro"). Do not enter an FSID= or sync option; they are provided
    automatically.
-b
    By default, the file system is exported to the NFS client's standby. This option excludes the
    standby for the file serving node from the export.
For example, to provide NFS clients *.hp.com with read-only access to file system ifs1 at the
directory /usr/src on file serving node s1.hp.com:
ibrix_exportfs -f ifs1 -h s1.hp.com -p *.hp.com:/usr/src -o "ro"
To provide world read-only access to file system ifs1 located at /usr/src on file serving node
s1.hp.com:
ibrix_exportfs -f ifs1 -h s1.hp.com -p :/usr/src -o "ro"
Unexporting a file system
A file system should be unexported before it is unmounted.
To unexport a file system:
•   On the GUI, select the file system, select NFS Exports from the lower Navigator, and then
    select Unexport.
•   On the CLI, enter the following command:
    ibrix_exportfs -U -h HOSTNAME -p CLIENT:PATHNAME [-b]
Using case-insensitive file systems
By default, StoreAll file systems and directories follow POSIX semantics and file names are
case-sensitive for Linux/NFS users. (File names are always case-insensitive for Windows clients.)
If you prefer to use Windows semantics for Linux/NFS users, you can make a file system or
subdirectory case-insensitive. Doing this prevents a Linux/NFS user from creating two files that
differ only in case (such as file1 and FILE1). If Windows users are accessing the directory, two
files with the same name but different case might be confusing, and the Windows users may be
able to access only one of the files.
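The default POSIX behavior is easy to see on any case-sensitive Linux file system. A minimal illustration (not StoreAll-specific code); on a case-insensitive StoreAll directory, the second name would resolve to the same file:

```shell
# On a case-sensitive (POSIX) file system these create two distinct files;
# on a case-insensitive directory they would refer to the same file.
touch file1 FILE1
ls file1 FILE1 | wc -l   # → 2 on a case-sensitive file system
```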
CAUTION: Be careful when applying the case-insensitivity setting to an existing directory populated
with files. Make sure the directory does not contain files whose names differ only in case, for
example File1 and file1. If you edit one of the files (File1) after enabling case insensitivity,
the other file (file1) is changed to match the contents of the edited file. Both files will then
contain the same data, and the original contents of the other file are lost.
CAUTION: This feature breaks POSIX semantics and can cause problems for Linux utilities and
applications.
Before enabling the case-insensitive feature, be sure the following requirements are met:
•
The file system or directory must be created under the StoreAll File Serving Software 6.0 or
later release.
•
The file system must be mounted.
Setting case insensitivity for all users (NFS/Linux/Windows)
The case-insensitive setting applies to all users of the file system or directory.
Select the file system on the GUI, expand Active Tasks in the lower Navigator, and select Case
Insensitivity. On the Task Summary bottom panel, click New to open the New Case Insensitivity
Task dialog box. Select the appropriate action to change case insensitivity.
NOTE: When specifying a directory path, the best practice is to change case insensitivity at the
root of an SMB share and to avoid mixed case insensitivity in a given share.
To set case insensitivity from the CLI, use the following command:
ibrix_caseinsensitive -s -f FSNAME -c [ON|OFF] -p PATH
Viewing the current setting for case insensitivity
Select Report Current Case Insensitivity Setting on the New Case Insensitivity Task dialog box to
view the current setting for a file system or directory.
Click Perform Recursively to see the status for all descendent directories of the specified file system
or directory.
From the CLI, use the following command to determine whether case-insensitivity is set on a file
system or directory:
ibrix_caseinsensitive -i -f FSNAME -p PATH [-r]
The -r option includes all descendent directories of the specified path.
Clearing case insensitivity (setting to case sensitive) for all users (NFS/Linux/Windows)
When you set case insensitivity to OFF, the directory and all of its subdirectories, recursively,
become case sensitive again, restoring POSIX semantics for Linux users.
Log files
A new task is created when you change case insensitivity or query its status recursively. A log file
is created for each task and an ID is assigned to the task. The log file is placed in the directory
/usr/local/ibrix/log/case_insensitive on the server specified as the coordinating
server for the task. Check that server for the log file.
NOTE: To verify the coordinating server, select File System > Inactive Tasks. Then select the task
ID from the display and select Details.
The log file names have the format IDtask.log, such as ID26.log.
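Combining the directory and naming rule above, the full path to a task's log file can be derived from its task ID (a sketch; task ID 26 is taken from the example name):

```shell
# Build the expected log file path for a given task ID
task_id=26
log_path="/usr/local/ibrix/log/case_insensitive/ID${task_id}.log"
echo "$log_path"   # → /usr/local/ibrix/log/case_insensitive/ID26.log
```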
The following sample log file is for a query reporting case insensitivity:
0:0:26275:Reporting Case Insensitive status for the following directories
1:0:/fs_test1/samename-T: TRUE
2:0:/fs_test1/samename-T/samename: TRUE
2:0:DONE
The next sample log file is for a change in case insensitivity:
0:0:31849:Case Insensitivity is turned ON for the following directories
1:0:/fs_test2/samename-true
2:0:/fs_test2/samename-true/samename
3:0:/fs_test2/samename-true/samename/samename-snap
3:0:DONE
The first line of the output contains the PID for the process and reports the action taken. The first
column specifies the number of directories visited. The second column specifies the number of errors
found. The third column reports either the results of the query or the directories where case
insensitivity was turned on or off.
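Given that format, the counts can be pulled from a log with standard text tools. A minimal sketch, assuming the colon-delimited layout shown in the samples above (the log text is copied from the second sample):

```shell
# Extract directories-visited and error counts from the final line of a
# case-insensitivity task log (columns: visited:errors:result).
log='0:0:31849:Case Insensitivity is turned ON for the following directories
1:0:/fs_test2/samename-true
2:0:/fs_test2/samename-true/samename
3:0:/fs_test2/samename-true/samename/samename-snap
3:0:DONE'
printf '%s\n' "$log" | awk -F: 'END { print "visited=" $1, "errors=" $2 }'
```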
Displaying and terminating a case insensitivity task
To display a task, use the following command:
# ibrix_task -l
For example:
# ibrix_task -l
TASK ID      TYPE     FILE SYSTEM  SUBMITTED BY          TASK STATUS  IS COMPLETED?  EXIT STATUS  STARTED AT             ENDED AT
-----------  -------  -----------  --------------------  -----------  -------------  -----------  ---------------------  --------
caseins_237  caseins  fs_test1     root from Local Host  STARTING     No                          Jun 17, 2011 11:31:38
To terminate a task, run the following command and specify the task ID:
# ibrix_task -k -n <task ID>
For example:
# ibrix_task -k -n caseins_237
Case insensitivity and operations affecting directories
A newly created directory inherits the case-insensitive setting of its parent directory. This
applies to directories created by the following commands and operations:
•   Windows or Mac copy and paste
•   tar/untar
•   compress/uncompress
•   cp -R
•   rsync
•   Remote replication
•   xcopy
•   robocopy
•   Restoring directories and folders from snapshots
The case-insensitive setting of the source directories is not retained on the destination directories.
Instead, the setting for the destination file system is applied. However, if you use a command such
as the Linux mv command, a Windows drag and drop operation, or a Mac uncompress operation,
a new directory is not created, and the affected directory retains its original case-insensitive setting.
6 Configuring authentication for SMB, FTP, and HTTP
StoreAll software supports several services for authenticating users accessing shares on StoreAll
file systems:
•   Active Directory (supported for SMB, FTP, and HTTP)
•   Active Directory with LDAP ID mapping as a secondary lookup source (supported for SMB)
•   LDAP (supported for SMB)
•   Local Users and Groups (supported for SMB, FTP, and HTTP)
Local Users and Groups can be used with Active Directory or LDAP.
NOTE: Active Directory and LDAP cannot be used together.
You can configure authentication from the GUI or CLI. When you configure authentication with
the GUI, the selected authentication services are configured on all servers. The CLI commands
allow you to configure authentication differently on different servers.
Using Active Directory with LDAP ID mapping
When LDAP ID mapping is a secondary lookup method, the system reads SMB client UIDs and
GIDs from LDAP if it cannot locate the needed ID in an AD entry. The name in LDAP must match
the name in AD, ignoring case and any prepended domain.
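The match rule can be pictured with a small normalization sketch (illustrative shell only, not StoreAll code; the sample names are hypothetical):

```shell
# Compare an AD name and an LDAP name ignoring case and any "DOMAIN\" prefix.
norm() { printf '%s' "$1" | sed 's/.*\\//' | tr 'A-Z' 'a-z'; }
ad_name='ENTX\User1'      # hypothetical AD entry
ldap_name='user1'         # hypothetical LDAP entry
[ "$(norm "$ad_name")" = "$(norm "$ldap_name")" ] && echo match
```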
If the user configuration differs in LDAP and Windows AD, the LDAP ID mapping feature uses the
AD configuration. For example, the following AD configuration specifies that the primary group
for user1 is Domain Users, but in LDAP, the primary group is group1.
AD configuration                LDAP configuration
user:          user1            uid:        user1
primary group: Domain Users     uidNumber:  1010
UNIX uid:      not specified    gidNumber:  1001 (group1)
UNIX gid:      not specified    cn:         Domain Users
                                gidNumber:  1111
The Linux id command returns the primary group specified in LDAP:
user:          user1
primary group: group1 (1001)
LDAP ID mapping uses AD as the primary source for identifying the primary group and all
supplemental groups. If AD does not specify a UNIX GID for a user, LDAP ID mapping looks up
the GID for the primary group assigned in AD. In the example, the primary group assigned in AD
is Domain Users, and LDAP ID mapping looks up the GID of that group in LDAP. The lookup
operation returns:
user:          user1
primary group: Domain Users (1111)
AD does not force the supplied primary group to match the supplied UNIX GID.
The supplemental groups assigned in AD do not need to match the members assigned in LDAP.
LDAP ID mapping uses the members list assigned in AD and ignores the members list configured
in LDAP.
IMPORTANT: If the user’s primary group in AD is not resolved to a GID number from either Active
Directory or LDAP, the user will be denied access to StoreAll.
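The precedence described in this section can be summarized in a small sketch (illustrative shell only, not StoreAll code; the values are taken from the example above):

```shell
# GID resolution order: the AD UNIX gid is used first; if it is unset, the
# LDAP gidNumber of the primary group assigned in AD is used; if neither
# resolves, the user is denied access.
ad_unix_gid=""                 # AD: UNIX gid not specified
ad_primary_group="Domain Users"
ldap_group_gid="1111"          # LDAP gidNumber for cn=Domain Users

if [ -n "$ad_unix_gid" ]; then
    gid="$ad_unix_gid"
elif [ -n "$ldap_group_gid" ]; then
    gid="$ldap_group_gid"
else
    gid=""                     # unresolved: access to StoreAll is denied
fi
echo "${gid:-denied}"          # → 1111
```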
Using LDAP as the primary authentication method
Requirements for LDAP users and groups
StoreAll supports only OpenLDAP.
Configuring LDAP for StoreAll software
To configure LDAP, complete the following steps:
1. Update a configuration file template that ships as part of the StoreAll LDAP software.
   This updated configuration file is then passed to a configuration utility, which uses LDAP
   commands to modify the remote enterprise's OpenLDAP server.
2. Update the appropriate configuration template with information specific to the OpenLDAP
   server being configured.
3. Configure LDAP authentication on all the cluster nodes by using Fusion Manager.
Update the template on the remote LDAP server
The StoreAll LDAP client ships with three configuration templates, each corresponding to a supported
OpenLDAP server schema:
•   customized-schema-template.conf
•   samba-schema-template.conf
•   posix-schema-template.conf
Pick the template for the schema your server supports. If your server supports both the POSIX and
Samba schemas, pick the one most appropriate for your environment.
Make a copy of the template corresponding to the schema your LDAP server supports, and update
the copy with your configuration information.
Make a copy of the template corresponding to the schema your LDAP server supports, and update
the copy with your configuration information.
Customized template. If the OpenLDAP server has a customized or special schema, you must
provide information that maps the standard schema attribute and class names to the names used
on your OpenLDAP server. This situation is not common. Use this template only if your OpenLDAP
server has overridden the standardized POSIX or Samba schema with customized extensions.
Provide values (equivalent names) for all virtual attributes in the configuration. For example:
mandatory; virtual; uid; your-schema-equivalent-of-uid
optional; virtual; homeDirectory; your-schema-equivalent-of-homeDirectory
Samba template. Enter the required attributes for Samba/POSIX templates. You can use the default
values specified in the “Map (mandatory) variables” and “Map (Optional) variables” sections of
the template.
POSIX template. Enter the required attributes for Samba/POSIX templates. Also remove or comment
out the following virtual attributes:
# mandatory; virtual; SID;sambaSID
# mandatory; virtual; PrimaryGroupSID;sambaPrimaryGroupSID
# mandatory; virtual; sambaGroupMapping;sambaGroupMapping
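The edit above can also be scripted. A sketch that comments out the three Samba-only virtual attributes in a template copy with sed (the file content below is an illustrative excerpt, not the full template; the copy name UserConf.conf matches the examples later in this chapter):

```shell
# Create a toy excerpt of the POSIX template copy
cat > UserConf.conf <<'EOF'
mandatory; virtual; uid; uid
mandatory; virtual; SID;sambaSID
mandatory; virtual; PrimaryGroupSID;sambaPrimaryGroupSID
mandatory; virtual; sambaGroupMapping;sambaGroupMapping
EOF
# Comment out the Samba-only virtual attributes
sed -i -e 's/^mandatory; virtual; SID;/# &/' \
       -e 's/^mandatory; virtual; PrimaryGroupSID;/# &/' \
       -e 's/^mandatory; virtual; sambaGroupMapping;/# &/' UserConf.conf
grep -c '^# mandatory' UserConf.conf   # → 3
```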
Required attributes for Samba/POSIX templates
VERSION
    Value: Any arbitrary string.
    Helps identify the configuration version uploaded. Potentially used for reports, audit
    history, and troubleshooting.
LDAPServerHost
    Value: IP address string.
    An FQDN or IP address. Typically, it is a front-ended switch or an IP LDAP proxy/balancer
    name/address for multiple backend high-availability LDAP servers.
LdapConfigurationOU
    Value: Writable OU name string.
    The LDAP OU (organizational unit) to which configuration entries can be written. This OU
    must exist on the server and must be readable and writable using LdapWriteDN.
LdapWriteDN
    Value: DN name string.
    Limited write DN credentials. HP recommends that you do not use cn=Manager credentials.
    Instead, use an account DN with very restricted write permissions to the
    LdapConfigurationOU and beneath.
LDAPWritePassword
    Value: Unencrypted password string (LDAP encrypts the string on storage).
    Password for the LdapWriteDN account.
schematype
    Value: samba, posix, or user-defined schema.
    Supported schema for the OpenLDAP server.
Run the configuration script on the remote LDAP server
The StoreAll gen_ldap-lwtools.sh script performs the configuration based on the copy of the
chosen schema template (UserConf.conf in the examples). Run the following command to
validate your changes:
sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf -v
If the configuration is correct, run the command again with the -rm option, which removes all
temporary files for added security:
sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf -rm
If you need to troubleshoot the configuration, run the command as follows:
sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf
Configure LDAP authentication on the cluster nodes
You can configure LDAP authentication from the GUI, as described in “Configuring authentication
from the GUI” (page 63) (recommended), or by using the ibrix_ldapconfig command (see
“Configuring LDAP” (page 72)).
Configuring authentication from the GUI
Use the Authentication Wizard to perform the initial configuration or to modify it at a later time.
Select Cluster Configuration > File Sharing Authentication from the Navigator to open the File
Sharing Authentication Settings panel. This panel shows the current authentication configuration
on each server.
Click Authentication Wizard to start the wizard. On the Configure Options page, select the
authentication service to be applied to the servers in the cluster.
NOTE: CIFS in the GUI has not been rebranded to SMB yet. CIFS is just a different name for
SMB.
The wizard displays the configuration pages corresponding to the option you selected:
•   Active Directory. See “Active Directory” (page 65).
•   LDAP. See “LDAP” (page 67).
•   LDAP ID Mapping. See “LDAP ID mapping” (page 66).
•   Local Groups. See “Local Groups” (page 68).
•   Local Users. See “Local Users” (page 69).
•   Share Administrators. See “Windows Share Administrators” (page 71).
•   Summary. See “Summary” (page 71).
Active Directory
Enter your domain name, the Auth Proxy username (an AD domain user with privileges to join the
specified domain; typically a Domain Administrator), and the password for that user. These
credentials are used only to join the domain and do not persist on the cluster nodes.
NOTE: When you successfully configure Active Directory authentication, the machine is part of
the domain until you remove it from the domain, either with the ibrix_auth -n command or
with the Management Console. Because Active Directory authentication is a one-time event, it is
not necessary to update authentication if you change the proxy user information.
IMPORTANT: See “Linux static user mapping with Active Directory” (page 93) for information
about enabling Linux Static User Mapping. You can return to the wizard to modify settings if it is
not enabled at the first pass and is later required.
Linux static user mapping is optional. If you do not want to enable Linux Static User Mapping, leave
it set to the default value of None.
If you want to enable Linux Static User Mapping using Active Directory-based ID mapping, set it
to Enabled with Active Directory.
If you want to use LDAP ID mapping as a secondary lookup for Active Directory, select Enabled
with LDAP ID Mapping and AD. When you click Next, the LDAP ID Mapping dialog box appears.
LDAP ID mapping
If LDAP ID mapping is enabled and the system cannot locate a UID/GID in Active Directory, it
searches for the UID/GID in LDAP. On the LDAP ID Mapping dialog box, specify the appropriate
search parameters.
Enter the following information on the dialog box:
LDAP Server Host
Enter the server name or IP address of the LDAP server host.
Port
Enter the LDAP server port (TCP port 389 for unencrypted or TLS encrypted; 636 for SSL encrypted).
Base of Search
Enter the LDAP base for searches. This is normally the root suffix of the directory, but you can
provide a base lower down the tree for business rules enforcement, ACLs, or performance reasons.
For example, ou=people,dc=entx,dc=net.
Bind DN
Enter the LDAP user account used to authenticate to the LDAP server to read data. This account
must have privileges to read the entire directory. Write credentials are not required. For example,
cn=hp9000-readonly-user,dc=entx,dc=net.
Password
Enter the password for the LDAP user account.
Max Entries
Enter the maximum number of entries to return from the search (the default is 10). Enter 0 (zero)
for no limit.
Max Wait Time
Enter the local maximum search time-out value in seconds. This value determines how long the
client will wait for search results.
LDAP Scope
Select the level of entries to search:
• base: search the base level entry only
• sub: search the base level entry and all entries in sub-levels below the base entry
• one: search all entries in the first level below the base entry, excluding the base entry
Namesearch Case Sensitivity
If LDAP searches should be case sensitive, check this box.
LDAP
To configure LDAP as the primary authentication mechanism for SMB shares, enter the server name
or IP address of the LDAP server host and the password for the LDAP user account.
NOTE: LDAP cannot be used with Active Directory.
Enter the following information in the remaining fields:
Bind DN
Enter the LDAP user account used to authenticate to the LDAP server to read data, such as
cn=hp9000-readonly-user,dc=entx,dc=net. This account must have privileges to read the
entire directory. Write credentials are not required.
Write OU
Enter the OU (organizational unit) on the LDAP server to which configuration entries can be written.
This OU must be pre-provisioned on the remote LDAP server. The previous schema configuration
step would have seeded this OU with values that will now be read. The LDAPBindDN credentials
must be able to read (but not write) from the LDAPWriteOU. For example,
ou=9000Config,ou=configuration,dc=entx,dc=net.
Base of Search
This is normally the root suffix of the directory, but you can provide a base lower down the tree for
business rules enforcement, ACLs, or performance reasons. For example,
ou=people,dc=entx,dc=net.
NetBIOS Name
Enter any string that identifies the StoreAll host, such as StoreAll.
If your LDAP configuration requires a certificate for secure access, click Edit to open the LDAP
dialog box. You can enter a TLS or SSL certificate. When no certificate is used, the Enable SSL
field shows Neither TLS or SSL.
NOTE: If LDAP is the primary authentication service, Windows clients such as Explorer or MMC
plug-ins cannot be used to add new users.
Local Groups
Specify local groups allowed to access shares. On the Local Groups page, enter the group name
and, optionally, the GID and RID. If you do not assign a GID and RID, they are generated
automatically. Click Add to add the group to the list of local groups. Repeat this process to add
other local groups.
When naming local groups, you should be aware of the following:
•   Group names must be unique. The new name cannot already be used by another user or
    group.
•   The following names cannot be used: administrator, guest, root.
NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as
Explorer or MMC plug-ins cannot be used to add new users.
Local Users
Specify local users allowed to access shares. On the Local Users page, enter a user name and
password. Click Add to add the user to the Local Users list.
When naming local users, you should be aware of the following:
•   User names must be unique. The new name cannot already be used by another user or group.
•   The following names cannot be used: administrator, guest, root.
To provide account information for the user, click Advanced. The default home directory is /home/
<username> and the default shell program is /bin/false.
NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as
Explorer or MMC plug-ins cannot be used to add new users.
Windows Share Administrators
If you will be using the Windows Share Management MMC plug-in to manage SMB shares, enter
your share administrators on this page. You can skip this page if you will be managing shares
entirely from the StoreAll Management Console.
To add an Active Directory or LDAP share administrator, enter the administrator name (such as
domain\user1 or domain\group1) and click Add to add the administrator to the Windows
Share Administrators list.
To add an existing Local User as a share administrator, select the user and click Add.
Summary
The Summary page shows the authentication configuration. You can go back and revise the
configuration if necessary. When you click Finish, authentication is configured, and the details
appear on the File Sharing Authentication panel.
Viewing or changing authentication settings
Expand File Sharing Authentication in the lower Navigator, and then select an authentication
service to display the current configuration for that service. On each panel, you can start the
Authentication Wizard and modify the configuration if necessary.
You cannot change the UID or RID for a Local User account. If it is necessary to change a UID or
RID, first delete the account and then recreate it with the new UID or RID. The Local Users and
Local Groups panels allow you to delete the selected user or group.
Configuring authentication from the CLI
You can configure Active Directory, LDAP, LDAP ID mapping, or Local Users and Groups.
Configuring Active Directory
To configure Active Directory authentication, use the following command:
ibrix_auth -n DOMAIN_NAME -A AUTH_PROXY_USER_NAME@domain_name [-P AUTH_PROXY_PASSWORD]
[-S SETTINGLIST] [-h HOSTLIST]
RFC2307 defines extensions to the Active Directory schema that store UNIX attributes for users
and groups. These extensions are present in all Windows Server versions since Windows Server
2003 R2. Enabling RFC2307
support enables Linux static user mapping with Active Directory. To enable RFC2307 support, use
the following command:
ibrix_cifsconfig -t [-S SETTINGLIST] [-h HOSTLIST]
Enable RFC2307 in the SETTINGLIST as follows:
rfc2307_support=rfc2307
For example:
ibrix_cifsconfig -t -S "rfc2307_support=rfc2307"
To disable RFC2307, set rfc2307_support to unprovisioned. For example:
ibrix_cifsconfig -t -S "rfc2307_support=unprovisioned"
IMPORTANT: After making configuration changes with the ibrix_cifsconfig -t -S
command, use the following command to restart the SMB services on all nodes affected by the
change.
ibrix_server -s -t cifs -c restart [-h SERVERLIST]
Clients will experience a temporary interruption in service during the restart.
Configuring LDAP
Use the ibrix_ldapconfig command to configure LDAP as the primary authentication service
for SMB shares.
IMPORTANT: Before using ibrix_ldapconfig to configure LDAP on the cluster nodes, you
must configure the remote LDAP server. For more information, see “Configuring LDAP for StoreAll
software” (page 62).
IMPORTANT: Linux Static User mapping is not supported if LDAP is configured as the primary
authentication service.
Add an LDAP configuration and enable LDAP:
ibrix_ldapconfig -a -h LDAPSERVERHOST [-P LDAPSERVERPORT] -b LDAPBINDDN
-p LDAPBINDDNPASSWORD -w LDAPWRITEOU -B LDAPBASEOFSEARCH -n NETBIOS -E
ENABLESSL [-f CERTFILEPATH] [-c CERTFILECONTENTS]
The options are:
-h LDAPSERVERHOST
The LDAP server host (server name or IP address).
-P LDAPSERVERPORT
The LDAP server port.
-b LDAPBINDDN
The LDAP bind Distinguished Name. For example:
cn=hp9000-readonly-user,dc=entx,dc=net.
-p LDAPBINDDNPASSWORD
The LDAP bind password.
-w LDAPWRITEOU
The LDAP write Organizational Unit, or OU (for example,
ou=9000Config,ou=configuration,dc=entx,dc=net).
-B LDAPBASEOFSEARCH
The LDAP base for searches (for example, ou=people,dc=entx,dc=net).
-n NETBIOS
The NetBIOS name, such as StoreAll.
-E ENABLESSL
The type of certificate required. Enter 0 for no certificate, 1 for TLS, or 2 for SSL.
-f CERTFILEPATH
The path to the TLS or SSL certificate file, such as /usr/local/ibrix/ldap/
key.pem.
-c CERTFILECONTENTS
The contents of the certificate file. Copy the contents and paste them between quotes.
Modify an LDAP configuration:
ibrix_ldapconfig -m -h LDAPSERVERHOST [-P LDAPSERVERPORT] [-e|-D] [-b
LDAPBINDDN] [-p LDAPBINDDNPASSWORD] [-w LDAPWRITEOU] [-B
LDAPBASEOFSEARCH] [-n NETBIOS] [-E ENABLESSL] [-f CERTFILEPATH]|[-c
CERTFILECONTENTS]
The -f and -c arguments are mutually exclusive. Provide one or the other but not both.
View the LDAP configuration:
ibrix_ldapconfig -i
Enable LDAP:
ibrix_ldapconfig -e LDAPSERVERHOST
Disable LDAP:
ibrix_ldapconfig -D LDAPSERVERHOST
Configuring LDAP ID mapping
Use the ibrix_ldapidmapping command to configure LDAP ID mapping as a secondary lookup
source for Active Directory. LDAP ID mapping can be used only for SMB shares.
Add an LDAP ID mapping:
ibrix_ldapidmapping -a -h LDAPSERVERHOST -B LDAPBASEOFSEARCH [-P
LDAPSERVERPORT] [-b LDAPBINDDN] [-p LDAPBINDDNPASSWORD] [-m MAXWAITTIME]
[-M MAXENTRIES] [-n] [-s] [-o] [-u]
This command automatically enables LDAP RFC 2307 ID Mapping. The options are:
-h LDAPSERVERHOST
The LDAP server host (server name or IP address).
-B LDAPBASEOFSEARCH
The LDAP base for searches (for example, ou=people,dc=entx,dc=net).
-P LDAPSERVERPORT
The LDAP server port (TCP port 389).
-b LDAPBINDDN
The LDAP bind Distinguished Name (the default is anonymous). For example:
cn=hp9000-readonly-user,dc=entx,dc=net.
-p LDAPBINDDNPASSWORD
The LDAP bind password.
-m MAXWAITTIME
The maximum amount of time to allow the search to run.
-M MAXENTRIES
The maximum number of entries (the default is 10).
-n
Case sensitivity for name searches (the default is false, or case-insensitive).
-s
Search the LDAP scope base (search the base level entry only).
-o
LDAP scope one (search all entries in the first level below the base entry, excluding
the base entry).
-u
LDAP scope sub (search the base-level entries and all entries below the base level).
Display information for LDAP ID mapping:
ibrix_ldapidmapping -i
Enable an existing LDAP ID mapping:
ibrix_ldapidmapping -e -h LDAPSERVERHOST
Disable an existing LDAP ID mapping:
ibrix_ldapidmapping -d -h LDAPSERVERHOST
Configuring Local Users and Groups authentication
Use ibrix_auth to configure Local Users authentication. Use ibrix_localusers and
ibrix_localgroups to manage user and group accounts.
Configure Local Users authentication:
ibrix_auth -N [-h HOSTLIST]
Be sure to create a local user account for each user that will be accessing SMB, FTP, or HTTP
shares, and create at least one local group account for the users. The account information is stored
internally in the cluster.
Configure Active Directory authentication:
ibrix_auth -n DOMAIN_NAME -A AUTH_PROXY_USER_NAME@domain_name [-P
AUTH_PROXY_PASSWORD] [-S SETTINGLIST] [-h HOSTLIST]
In the command, DOMAIN_NAME is your Active Directory domain,
AUTH_PROXY_USER_NAME@domain_name is the name and domain of an AD domain user
(typically a Domain Administrator) with privileges to join the specified domain, and
AUTH_PROXY_PASSWORD is the password for that account.
To configure Active Directory authentication on specific nodes, specify those nodes in HOSTLIST.
For the -S option, enter the settings as settingname=value. Use commas to separate the
settings, and enclose the list in quotation marks. If there are multiple values for a setting, enclose
the values in square brackets. The users you specify must already exist. For example:
ibrix_auth -t -S 'share admins=[domain\user1, domain\user2,
domain\user3]'
To remove a setting, enter settingname=.
All servers, or only the servers specified in HOSTLIST, will be joined to the specified Active
Directory domain.
Add a Local User account:
ibrix_localusers -a -u USERNAME -g DEFAULTGROUP -p PASSWORD [-h HOMEDIR]
[-s SHELL] [-i USERINFO] [-U USERID] [-S RID] [-G GROUPLIST]
Modify a Local User account:
ibrix_localusers -m -u USERNAME [-g DEFAULTGROUP] [-p PASSWORD] [-h
HOMEDIR] [-s SHELL] [-i USERINFO] [-G GROUPLIST]
View information for all Local User accounts:
ibrix_localusers -L
View information for a specific Local User account:
ibrix_localusers -l -u USERNAME
Delete a Local User account:
ibrix_localusers -d -u USERNAME
Add a Local Group account:
ibrix_localgroups -a -g GROUPNAME [-G GROUPID] [-S RID]
Modify a Local Group account:
ibrix_localgroups -m -g GROUPNAME [-G GROUPID] [-S RID]
View information about all Local Group accounts:
ibrix_localgroups -L
View information for a specific Local Group account:
ibrix_localgroups -l -g GROUPNAME
Delete a Local Group account:
ibrix_localgroups -d -g GROUPNAME
7 Using SMB
The SMB server implementation allows you to create file shares for data stored on the cluster. The
SMB server provides a true Windows experience for Windows clients. A user accessing a file
share on a StoreAll system will see the same behavior as on a Windows server.
IMPORTANT: SMB and StoreAll Windows clients cannot be used together because of incompatible
AD user to UID mapping. You can use either SMB or StoreAll Windows clients, but not both at the
same time.
IMPORTANT: Before configuring SMB, select an authentication method. See “Configuring
authentication for SMB, FTP, and HTTP” (page 61) for more information.
Configuring file serving nodes for SMB
To enable file serving nodes to provide SMB services, you will need to configure the resolv.conf
file. On each node, the /etc/resolv.conf file must include a DNS server that can resolve SRV
records for your domain. For example:
# cat /etc/resolv.conf
search mycompany.com
nameserver 192.168.100.132
To verify that a file serving node can resolve SRV records for your AD domain, run the Linux dig
command. (In the following example, the Active Directory domain name is mydomain.com.)
% dig SRV _ldap._tcp.mydomain.com
In the output, verify that the ANSWER SECTION contains a line with the name of a domain controller
in the Active Directory domain. Following is some sample output:
; <<>> DiG 9.3.4-P1 <<>> SRV _ldap._tcp.mydomain.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56968
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2

;; QUESTION SECTION:
;_ldap._tcp.mydomain.com.             IN      SRV

;; ANSWER SECTION:
_ldap._tcp.mydomain.com. 600    IN    SRV     0 100 389 adctrlr.mydomain.com.

;; ADDITIONAL SECTION:
adctrlr.mydomain.com.    3600   IN    A       192.168.11.11

;; Query time: 0 msec
;; SERVER: 192.168.100.132#53(192.168.100.132)
;; WHEN: Tue Mar 16 09:56:02 2010
;; MSG SIZE  rcvd: 113
For more information, see the Linux resolv.conf(5) man page.
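The resolv.conf check described above can be scripted as a simple pre-flight test. The sketch below is illustrative only: it works on a sample copy of the file, and the domain and nameserver address are placeholder values, not values from a real cluster.

```shell
# Build a sample resolv.conf like the one shown above (placeholder values).
cat > /tmp/resolv.conf.sample <<'EOF'
search mycompany.com
nameserver 192.168.100.132
EOF

# A node can only resolve SRV records if at least one nameserver is configured.
if grep -q '^nameserver' /tmp/resolv.conf.sample; then
  echo "nameserver configured"
fi

# If dig is installed, the SRV lookup from the text can be scripted the same
# way; "|| true" keeps the check from aborting when no DNS is reachable.
if command -v dig >/dev/null 2>&1; then
  dig SRV _ldap._tcp.mydomain.com +short || true
fi
```

In practice you would run the grep against /etc/resolv.conf on each file serving node rather than a sample copy.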
Starting or stopping the SMB service and viewing SMB statistics
IMPORTANT: You will need to start the SMB service initially on the file serving nodes.
Subsequently, the service is started automatically when a node is rebooted.
NOTE: CIFS in the GUI has not been rebranded to SMB yet. CIFS is just a different name for SMB.
Use the SMB panel on the GUI to start, stop, or restart the SMB service on a particular server, or
to view SMB activity statistics for the server. Select Servers from the Navigator and then select the
appropriate server. Select CIFS in the lower Navigator to display the CIFS panel, which shows
SMB activity statistics on the server. You can start, stop, or restart the SMB service by clicking the
appropriate button.
NOTE: Click CIFS Settings to configure SMB signing on this server. See “Configuring SMB signing”
(page 83) for more information.
To start, stop, or restart the SMB service from the CLI, use the following command:
ibrix_server -s -t cifs -c {start|stop|restart}
Monitoring SMB services
The ibrix_cifsmonitor command configures monitoring for the following SMB services:
•   lwreg
•   dcerpc
•   eventlog
•   lsass
•   lwio
•   netlogon
•   srvsvc
If the monitor finds that a service is not running, it attempts to restart the service. If the service
cannot be restarted, that particular service is not monitored.
The command can be used for the following tasks.
Start the SMB monitoring daemon and enable monitoring:
ibrix_cifsmonitor -m [-h HOSTLIST]
Display the health status of the SMB services:
ibrix_cifsmonitor -l
The command output reports status as follows:
Health Status    Condition
Up               All monitored SMB services are up and running
Degraded         The lwio service is running but one or more of the other services are down
Down             The lwio service is down and one or more of the other services are down
Not Monitored    Monitoring is disabled
N/A              The active Fusion Manager could not communicate with other file serving
                 nodes in the cluster
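The status table can be read as a small decision rule on two inputs: whether lwio is running, and whether any other monitored service is down. The sketch below is illustrative only; the real monitor derives these states itself, and the function name is hypothetical.

```shell
# Illustrative mapping of the status table above to a shell function.
# Arguments: lwio_up (yes|no), other_down (yes|no).
# Note: when lwio is down, this sketch always reports Down, whether or not
# another service is also down.
health_status() {
  lwio_up=$1
  other_down=$2
  if [ "$lwio_up" = yes ] && [ "$other_down" = no ]; then
    echo "Up"
  elif [ "$lwio_up" = yes ]; then
    echo "Degraded"
  else
    echo "Down"
  fi
}

health_status yes no   # prints: Up
health_status yes yes  # prints: Degraded
health_status no yes   # prints: Down
```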
Disable monitoring and stop the SMB monitoring daemon:
ibrix_cifsmonitor -u [-h HOSTLIST]
Restart SMB service monitoring:
ibrix_cifsmonitor -c [-h HOSTLIST]
SMB shares
Windows clients access file systems through SMB shares. You can use the StoreAll GUI or CLI to
manage shares, or you can use the Microsoft Management Console interface. The SMB service
must be running when you add shares.
When working with SMB shares, you should be aware of the following:
•   The permissions on the directory exporting an SMB share govern the access rights that are
    given to the Everyone user as well as to the owner and group of the share. Consequently, the
    Everyone user may have more access rights than necessary. The administrator should set ACLs
    on the SMB share to ensure that users have only the appropriate access rights. Alternatively,
    permissions can be set more restrictively on the directory exporting the SMB share.
•   When the cluster and Windows clients are not joined in a domain, local users are not visible
    when you attempt to add ACLs on files and folders in an SMB share.
•   A directory tree on an SMB share cannot be copied if there are more than 50 ACLs on the
    share. Also, because of technical constraints in the SMB service, you cannot create subfolders
    in a directory on an SMB share having more than 50 ACLs.
•   When configuring an SMB share, you can specify IP addresses or ranges that should be
    allowed or denied access to the share. However, if your network includes packet filters, a
    NAT gateway, or routers, this feature cannot be used because the client IP addresses are
    modified while in transit.
•   You can use an SMB share as a DFS target. However, the SMB share does not support DFS
    load balancing or DFS replication.
•   With the release of version 6.2, SMB shares support Large MTU, which provides a 1 MB
    buffer for reads and writes. On the client, you must enable Large MTU in the registry to enable
    support for Large MTU on the SMB server.
•   SMB shares support alternate data streams. SMB clients can write files containing the
    '$DATA' alternate data stream type to SMB shares. The files are stored on the StoreAll
    file system in a special format and should only be handled by SMB clients.
IMPORTANT:
•   Keep in mind the following:
    ◦   If files are handled over a different protocol or directly on the StoreAll server via
        PowerShell, the alternate data streams could be lost.
    ◦   If you rename the master file table while archiving and auto commit is enabled, the
        alternate data streams associated with the Master File Table are missing after the rename.
•   HP-SMB supports the following subset of Windows LSASS Local Authentication Provider
    privileges:
    ◦   SE_BACKUP_PRIVILEGE
    ◦   SE_CHANGE_NOTIFY_PRIVILEGE (Bypass traverse checking)
    ◦   SE_MACHINE_ACCOUNT_PRIVILEGE
    ◦   SE_MACHINE_VOLUME_PRIVILEGE
    ◦   SE_RESTORE_PRIVILEGE
    ◦   SE_TAKE_OWNERSHIP_PRIVILEGE
    See the Microsoft documentation for more information about these privileges.
Configuring SMB shares with the GUI
Use the Add New File Share Wizard to configure SMB shares. You can then view or modify the
configuration as necessary.
On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click
Add to start the Add New File Share Wizard.
On the File Share page, select CIFS as the File Sharing Protocol. Select the file system, which must
be mounted, and enter a name, directory path, and description for the share. Note the following:
•   Do not include any of the following special characters in a share name. If the name contains
    any of these special characters, the share might not be set up properly on all nodes in the
    cluster.
    ' & ( [ { $ ` , / \
•   Do not include any of the following special characters in the share description. If a description
    contains any of these special characters, the description might not propagate correctly to all
    nodes in the cluster.
    * % + & `
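Before creating a share, the name can be screened for the special characters listed above. The helper below is hypothetical (it is not part of the StoreAll CLI) and simply checks a proposed name against that character list.

```shell
# Hypothetical pre-check, not a StoreAll command: reject share names that
# contain any of ' & ( [ { $ ` , / \ (the characters listed above).
check_share_name() {
  # Two greps: one bracket expression for most characters, a second for the
  # apostrophe (awkward to quote inside the first pattern).
  if printf '%s' "$1" | grep -q '[&([{$`,/\\]' || printf '%s' "$1" | grep -q "'"; then
    echo "invalid"
  else
    echo "ok"
  fi
}

check_share_name projects      # prints: ok
check_share_name 'sales/data'  # prints: invalid
```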
On the Permissions page, specify permissions for users and groups allowed to access the share.
Click Add to open the New User/Group Permission Entry dialog box, where you can configure
permissions for a specific user or group. The completed entries appear in the User/Group Entries
list on the Permissions page.
On the Client Filtering page, specify IP addresses or ranges that should be allowed or denied
access to the share.
NOTE: This feature cannot be used if your network includes packet filters, a NAT gateway, or
routers.
Click Add to open the New Client IP Address Entry dialog box, where you can allow or deny
access to a specific IP address or a range of addresses. Enter a single IP address, or include a
bitmask to specify entire subnets of IP addresses, such as 10.10.3.2/25. The valid range for the
bitmask is 1-32. The completed entry appears on the Client IP Filters list on the Client Filtering
page.
On the Advanced Settings page, enable or disable Access Based Enumeration and specify the
default create mode for files and directories created in the share. The Access Based Enumeration
option allows users to see only the files and folders to which they have access on the file share.
On the Host Servers page, select the servers that will host the share.
Configuring SMB signing
The SMB signing feature specifies whether clients must support SMB signing to access SMB shares.
You can apply the setting to all servers, or to a specific server. To apply the same setting to all
servers, select File Shares from the Navigator and click Settings on the File Shares panel. To apply
a setting to a specific server, select that server on the GUI, select CIFS from the lower Navigator,
and click Settings. The dialog is the same for both selection methods.
When configuring SMB signing, note the following:
•   SMB2 is always enabled.
•   Use the Required check box to specify whether SMB signing (with either SMB1 or SMB2) is
    required.
•   The Disabled check box applies only to SMB1. Use this check box to enable or disable SMB
    signing with SMB1.
You should also be aware of the following:
•   The File Share Settings dialog box does not display whether SMB signing is currently enabled
    or disabled. Use the following command to view the current setting for SMB signing:
    ibrix_cifsconfig -i
•   SMB signing must not be required if you need to support connections from Mac OS X 10.5
    and 10.6 clients.
•   It is possible to configure SMB signing differently on individual servers. Backup SMB servers
    should have the same settings to ensure that clients can connect after a failover.
•   The SMB signing settings specified here are not affected by Windows domain group policy
    settings when joined to a Windows domain.
Configuring SMB signing from the CLI
To configure SMB signing from the command line, use the following command:
ibrix_cifsconfig -t -S SETTINGLIST
You can specify the following values in the SETTINGLIST:
smb signing enabled
smb signing required
Use commas to separate the settings, and enclose the list in quotation marks. For example, the
following command sets SMB signing to enabled and required:
ibrix_cifsconfig -t -S "smb signing enabled=1,smb signing required=1"
To disable SMB signing, enter settingname= with no value. For example:
ibrix_cifsconfig -t -S "smb signing enabled=,smb signing required="
IMPORTANT: After making configuration changes with the ibrix_cifsconfig -t -S
command, use the following command to restart the SMB services on all nodes affected by the
change.
ibrix_server -s -t cifs -c restart [-h SERVERLIST]
Clients will experience a temporary interruption in service during the restart.
Managing SMB shares with the GUI
To view existing SMB shares on the GUI, select File Shares > CIFS from the Navigator. The CIFS
Shares panel shows the file system being shared, the hosts (or servers) providing access, the name
of the share, the export path, and the options applied to the share.
NOTE: When externally managed appears in the option list for a share, that share is being
managed with the Microsoft Management Console interface. The GUI or CLI cannot be used to
change the permissions for the share.
On the CIFS Shares panel, click Add or Modify to open the File Shares wizard, where you can
create a new share or modify the selected share. Click Delete to remove the selected share. Click
CIFS Settings to configure global file share settings; see “Configuring SMB signing” (page 83)
for more information.
You can also view SMB shares for a specific file system. Select that file system on the GUI, and
then select CIFS Shares from the lower Navigator.
Configuring and managing SMB shares with the CLI
Adding, modifying, or deleting shares
Use the ibrix_cifs command to add, modify, or delete shares. For detailed information, see
the HP StoreAll Storage CLI Reference Guide.
NOTE: Be sure to use the ibrix_cifs command located in <installdirectory>/bin.
The ibrix_cifs command located in /usr/local/bin/init is used internally by StoreAll
software and should not be run directly.
Add an SMB share:
ibrix_cifs -a -f FSNAME -s SHARENAME -p SHAREPATH [-D SHAREDESCRIPTION]
[-S SETTINGLIST] [-A ALLOWCLIENTIPSLIST] [-E DENYCLIENTIPSLIST] [-F
FILEMODE] [-M DIRMODE] [-h HOSTLIST]
NOTE: You cannot create an SMB share with a name containing an exclamation point (!) or a
number sign (#) or both.
Use the -A ALLOWCLIENTIPSLIST or -E DENYCLIENTIPSLIST options to list client IP addresses
allowed or denied access to the share. Use commas to separate the IP addresses, and enclose the
list in quotes. You can include an optional bitmask to specify entire subnets of IP addresses (for
example, ibrix_cifs -A "192.186.0.1,102.186.0.2/16"). The default is "", which
allows all IP addresses when it is used with the -A option or it denies all IP addresses when it is
used with the -E option.
The -F FILEMODE and -M DIRMODE options specify the default mode for newly created files or
directories, in the same manner as the Linux chmod command. The range of values is 0000–0777.
The default is 0700.
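The create modes use standard Linux octal notation. The following local illustration uses ordinary Linux tools (not the StoreAll CLI) to show what the default mode 0700 grants: read, write, and execute for the owner and nothing for group or others.

```shell
# Create a scratch file and apply the default create mode from the text.
f=$(mktemp)
chmod 0700 "$f"

# stat -c '%a' (GNU coreutils) prints the octal permission bits.
stat -c '%a' "$f"   # prints: 700

rm -f "$f"
```

A share created with -F 0770 would instead grant read/write/execute to both the owner and the group.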
To see the valid settings for the -S option, use the following command:
ibrix_cifs -L
View share information:
ibrix_cifs -i [-h HOSTLIST]
Modify a share:
ibrix_cifs -m -s SHARENAME [-D SHAREDESCRIPTION] [-S SETTINGLIST] [-A
ALLOWCLIENTIPSLIST] [-E DENYCLIENTIPSLIST] [-F FILEMODE] [-M DIRMODE]
[-h HOSTLIST]
Delete a share:
ibrix_cifs -d -s SHARENAME [-h HOSTLIST]
Managing user and group permissions
Use the ibrix_cifsperms command to manage share-level permissions for users and groups.
Add a user or group to a share and assign share-level permissions:
ibrix_cifsperms -a -s SHARENAME -u USERNAME -t TYPE -p PERMISSION [-h
HOSTLIST]
For -t TYPE, specify either allow or deny.
For -p PERMISSION, specify one of the following:
•   fullcontrol
•   change
•   read
For example, the following command gives everyone read permission on share1:
ibrix_cifsperms -a -s share1 -u Everyone -t allow -p read
Modify share-level permissions for a user or group:
ibrix_cifsperms -m -s SHARENAME -u USERNAME -t TYPE -p PERMISSION [-h
HOSTLIST]
Delete share-level permissions for a user or group:
ibrix_cifsperms -d -s SHARENAME [-u USERNAME] [-t TYPE] [-h HOSTLIST]
Display share-level permissions:
ibrix_cifsperms -i -s SHARENAME [-t TYPE] [-h HOSTLIST]
Linux permissions on files created over SMB
The Linux permissions on files and folders created over SMB are not generally of interest to Windows
users and administrators, but some insight is useful when considering multiprotocol access and for
StoreAll system administrators. See “Permissions in a cross-protocol SMB environment” (page 100).
The HP SMB server maps Windows user and group credentials to Linux UIDs and GIDs. The UIDs
and GIDs are generated automatically by the CIFS server unless Linux static user mapping has
been enabled, in which case the UIDs and GIDs are looked up in Active Directory or LDAP.
Use lw-find-user-by-name to find the Linux UID for a Windows user, and use
lw-find-group-by-name to find the Linux GID for a Windows group. You can also do reverse
lookups with UIDs and GIDs to find the equivalent Windows names using lw-find-user-by-id
and lw-find-group-by-id.
For example, look up the Windows user IB\testuser1:
[root@ibrix01a ~]# /opt/likewise/bin/lw-find-user-by-name IB\\testuser1
The command displays the following output:
User info (Level-0):
====================
Name: IB\testuser1
SID: S-1-5-21-3681183244-3700010909-334885885-27276
Uid: 1060661900
Gid: 1060635137
Gecos: testuser1
Shell: /bin/sh
Home dir: /home/local/IB/testuser1
Logon restriction: NO
Do a reverse lookup with the UID by entering the following command:
[root@ibrix01a ~]# /opt/likewise/bin/lw-find-user-by-id 1060661900
The command displays the following output:
User info (Level-0):
====================
Name: IB\testuser1
SID: S-1-5-21-3681183244-3700010909-334885885-27276
Uid: 1060661900
Gid: 1060635137
Gecos: testuser1
Shell: /bin/sh
Home dir: /home/local/IB/testuser1
Logon restriction: NO
The GID is the GID for the user’s primary group as set in Active Directory. Do a reverse lookup
and find out the name of that group by entering the following command:
[root@ibrix01a ~]# /opt/likewise/bin/lw-find-group-by-id 1060635137
The command displays the following output:
Group info (Level 0):
====================
Name: IB\domain^users
Gid: 1060635137
SID: S-1-5-21-3681183244-3700010909-334885885-513
You can find the GID assigned by the StoreAll CIFS server for any Active Directory group. The
following command looks up the GID for Domain Admins:
[root@ibrix01a ~]# /opt/likewise/bin/lw-find-group-by-name IB\\Domain\
Admins
The command displays the following output:
Group info (Level 0):
====================
Name: IB\domain^admins
Gid: 1060635136
SID: S-1-5-21-3681183244-3700010909-334885885-512
NOTE: Backslashes have been used to escape special characters in the group name.
The SMB server’s file and folder create modes control the Linux permissions on files and directories
created over CIFS. These default to 0700 (read/write/execute to the owner).
The create modes can be managed with the ibrix_cifs command and with the GUI Wizard
(in Advanced Settings) when creating or modifying shares. Below is a usage example changing
a share’s default create mode for files:
[root@ibrix01a ~]# ibrix_cifs -m -s cifs1 -F 0770
Command succeeded!
In this command:
•   The -m option specifies that we are modifying the share.
•   The -s option specifies the name of the share.
•   The -F option specifies that we are changing the create mode for files; use -M to change the
    directory mode.
Managing SMB shares with Microsoft Management Console
The Microsoft Management Console (MMC) can be used to add, view, or delete SMB shares.
Administrators running MMC must have StoreAll software share management privileges.
NOTE: To use MMC to manage SMB shares, you must be authenticated as a user with share
modification permissions.
NOTE: If you will be adding users with the MMC, the primary authentication method must be
Active Directory.
NOTE: The permissions for SMB shares managed with the MMC cannot be changed with the
StoreAll Management Console GUI or CLI.
Connecting to cluster nodes
When connecting to cluster nodes, use the procedure corresponding to the Windows operating
system on your machine.
Windows XP, Windows 2003 R2:
Complete the following steps:
1. Open the Start menu, select Run, and specify mmc as the program to open.
2. On the Console Root window, select File > Add/Remove Snap-in.
3. On the Add/Remove Snap-in window, click Add.
4. On the Add Standalone Snap-in window, select Shared Folders and click Add.
5. On the Shared Folders window, select Another computer as the computer to be managed,
   enter or browse to the computer name, and click Finish.
6. Click Close > OK to exit the dialogs.
7. Expand Shared Folders (\\<address>).
8. Select Shares and manage the shares as needed.
Windows Vista, Windows 2008, Windows 7:
Complete the following steps:
1. Open the Start menu and enter mmc in the Start Search box. You can also enter mmc in an
MS-DOS window.
2. On the User Account Control window, click Continue.
3. On the Console 1 window, select File > Add/Remove Snap-in.
4. On the Add or Remove Snap-ins window, select Shared Folders and click Add.
5. On the Shared Folders window, select Another computer as the computer to be managed,
enter or browse to the computer name, and click Finish.
6. Click OK to exit the Add or Remove Snap-ins window.
7. Expand Shared Folders (\\<address>).
8. Select Shares and manage the shares as needed.
Saving MMC settings
You can save your MMC settings to use when managing shares on this server in later sessions.
Complete these steps:
1. On the MMC, select File > Save As.
2. Enter a name for the file. The name must have the suffix .msc.
3. Select Desktop as the location to save the file, and click Save.
4. Select File > Exit.
Granting share management privileges
Use the following command to grant administrators StoreAll software share management privileges.
The users you specify must already exist. Be sure to enclose the user names in square brackets.
ibrix_auth -t -S 'share admins=[domainname\username,domainname\username]'
The following example gives share management privileges to a single user:
ibrix_auth -t -S 'share admins=[domain\user1]'
If you specify multiple administrators, use commas to separate the users. For example:
ibrix_auth -t -S 'share admins=[domain\user1, domain\user2,
domain\user3]'
Adding SMB shares
SMB shares can be added with the MMC, using the share management plug-in. When adding
shares, you should be aware of the following:
•   The share path must include the StoreAll file system name. For example, if the file system is
    named data, you could specify C:\data1\folder1.
    NOTE: The Browse button cannot be used to locate the file system.
•   The directory to be shared will be created if it does not already exist.
•   The permissions on the shared directory will be set to 777. It is not possible to change the
    permissions on the share.
•   Do not include any of the following special characters in a share name. If the name contains
    any of these special characters, the share might not be set up properly on all nodes in the
    cluster.
    ' & ( [ { $ ` , / \
•   Do not include any of the following special characters in the share description. If a description
    contains any of these special characters, the description might not propagate correctly to all
    nodes in the cluster.
    * % + & `
•   The management console GUI or CLI cannot be used to alter the permissions for shares created
    or managed with Windows Share Management. The permissions for these shares are marked
    as “externally managed” on the GUI and CLI.
Open the MMC with the Shared Folders snap-in that you created earlier. On the Select Computer
dialog box, enter the IP address of a server that will host the share.
The Computer Management window shows the shares currently available from the server.
To add a new share, select Shares > New Share and run the Create A Shared Folder Wizard. On
the Folder Path panel, enter the path to the share, being sure to include the file system name.
When you complete the wizard, the new share appears on the Computer Management window.
Deleting SMB shares
To delete an SMB share, select the share on the Computer Management window, right-click, and
select Delete.
Mapping SMB shares
Before mapping a share, it is important to understand the following:
•   By default, a share is made available from all of the file serving nodes in a cluster.
•   A share is always available on all of a file serving node’s network interfaces.
•   An SMB client will map the share from only one file serving node. All network traffic between
    the client and the cluster will go via that node.
Best practices when mapping shares:
•   Always map a share using the User Virtual Interface (User VIF) of a file serving node, because
    that interface will be migrated to the node’s HA partner if the node fails.
•   Never map a share using the Admin IP address of a node, because that interface cannot
    migrate to the node’s HA partner.
•   Never map a share using the StoreAll Virtual Management Interface.
Where many clients will be mapping shares, the most common method of directing mapping
requests to file serving nodes is to set up a round-robin DNS entry for all of the cluster’s User VIFs.
Linux static user mapping with Active Directory
Linux static user mapping can be enabled when you configure Active Directory for user
authentication (see “Configuring authentication for SMB, FTP, and HTTP” (page 61)).
If you configure LDAP ID mapping as the secondary authentication service, authentication uses the
IDs assigned in AD if they exist. If an ID is not found in an AD entry, authentication looks in LDAP
for a user or group of the same name and uses the corresponding ID assigned in LDAP. The primary
group and all supplemental groups are still determined by the AD configuration.
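The AD-first, LDAP-fallback order described above can be sketched as a tiny selection rule. This is illustrative only; the real resolution happens inside the authentication stack, and the function name is hypothetical.

```shell
# Illustrative lookup order: prefer the UID stored in the AD entry; when the
# AD entry carries no uidNumber (empty string here), fall back to the UID
# assigned in LDAP for the same user name.
resolve_uid() {
  ad_uid=$1    # empty string means the AD entry has no uidNumber
  ldap_uid=$2
  if [ -n "$ad_uid" ]; then
    echo "$ad_uid"
  else
    echo "$ldap_uid"
  fi
}

resolve_uid 1001 2002  # prints: 1001 (the AD value wins)
resolve_uid ""   2002  # prints: 2002 (no AD ID, LDAP value is used)
```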
You can also assign UIDs, GIDs, and other POSIX attributes such as the home directory, primary
group and shell to users and groups in Active Directory. To add static entries to Active Directory,
complete these steps:
•   Configure Active Directory.
•   Assign POSIX attributes to users and groups in Active Directory.
NOTE: Mapping UID 0 and GID 0 to any AD user or group is not compatible with SMB static
mapping.
Configuring Active Directory
Your Windows Domain Controller machines must be running Windows Server 2003 R2 or Windows
Server 2008 R2. Configure the Active Directory domain as follows:
•   Install Identity Management for UNIX.
•   Activate the Active Directory Schema MMC snap-in.
•   Add the uidNumber and gidNumber attributes to the partial-attribute-set of the AD global
    catalog.
You can perform these procedures from any domain controller. However, the account used to add
attributes to the partial-attribute-set must be a member of the Schema Admins group.
Installing Identity Management for UNIX
To install Identity Management for UNIX on a domain controller running Windows Server 2003
R2, see the following Microsoft TechNet Article:
http://technet.microsoft.com/en-us/library/cc778455(WS.10).aspx
To install Identity Management for UNIX on a domain controller running Windows Server 2008
R2, see the following Microsoft TechNet article:
http://technet.microsoft.com/en-us/library/cc731178.aspx
Activating the Active Directory Schema MMC snap-in
Use the Active Directory Schema MMC snap-in to add the attributes. To activate the snap-in,
complete the following steps:
1. Click Start, click Run, type mmc, and then click OK.
2. On the MMC Console menu, click Add/Remove Snap-in.
3. Click Add, and then click Active Directory Schema.
4. Click Add, click Close, and then click OK.
Adding uidNumber and gidNumber attributes to the partial-attribute-set
To make modifications using the Active Directory Schema MMC snap-in, complete these steps:
1. Click the Attributes folder in the snap-in.
2. In the right panel, scroll to the desired attribute, right-click the attribute, and then click
   Properties.
3. Select Replicate this attribute to the Global Catalog, and click OK.
Set this property for both the uidNumber and gidNumber attributes.
The following article provides more information about modifying attributes in the Active Directory
global catalog:
http://support.microsoft.com/kb/248717
Assigning attributes
To set POSIX attributes for users and groups, start the Active Directory Users and Computers GUI
on the Domain Controller. Open the Administrator Properties dialog box, and go to the UNIX
Attributes tab. For users, you can set the UID, login shell, home directory, and primary group. For
groups, set the GID.
Synchronizing Active Directory 2008 with the NTP server used by the cluster
It is important to synchronize Active Directory with the NTP server used by the StoreAll cluster. Run
the following commands on the PDC:
net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:"<NTP server>"
w32tm /config /reliable:yes
net start w32time
To check the configuration, run the following command:
w32tm /query /configuration
Consolidating SMB servers with common share names
If your SMB servers previously used the same share names, you can consolidate the servers without
changing the share name requested on the client side. For example, you might have three SMB
servers, SRV1, SRV2, and SRV3, that each have a share named DATA. SRV3 points to a shared
drive that has the same path as \\SRV1\DATA; however, users accessing SRV3 have different
permissions on the share.
To consolidate the three servers, we will take these steps:
1. Assign Vhost names SRV1, SRV2, and SRV3.
2. Create virtual interfaces (VIF) for the IP addresses used by the servers. For example, Vhost
SRV1 has VIF 99.10.10.101 and Vhost SRV2 has VIF 99.10.10.102.
3. Map the old share names to new share names. For example, map \\SRV1\DATA to new
share srv1-DATA, map \\SRV2\DATA to new share srv2-DATA, and map \\SRV3\DATA
to srv3-DATA.
4. Create the new shares on the cluster storage and assign each share the appropriate path. For
   example, assign srv1-DATA to /srv1/data, and assign srv2-DATA to /srv2/data.
   Because SRV3 originally pointed to the same share as SRV1, we will assign the share
   srv3-DATA the same path as srv1-DATA, but set the permissions differently.
5. Optionally, create a share having the original share name, DATA in our example. Assign a
   path such as /ERROR/DATA and place a file in it named SHARE_MAP_FAILED. Doing this
   ensures that if a user configuration error occurs or the map fails, clients will not gain access
   to the wrong shares. The file name notifies the user that their access has failed.
When this configuration is in place, a client request to access share \\srv1\data will be translated
to share srv1-DATA at /srv1/data on the file system. Client requests for \\srv3\data will
also be translated to /srv1/data, but the clients will have different permissions. The client requests
for \\srv2\data will be translated to share srv2-DATA at /srv2/data.
Client utilities such as net use will report the requested share name, not the new share name.
Mapping old share names to new share names
Mappings are defined in the /etc/likewise/vhostmap file. Use a text editor to create and
update the file. Each line in the file contains a mapping in the following format:
VIF (or VhostName)|oldShareName|newShareName
If you enter a VhostName, it will be changed to a VIF internally.
The oldShareName is the user-requested share name from the client that needs to be translated
into a unique name. This unique name (the newShareName) is used when establishing a mount
point for the share.
Following are some entries from a vhostmap file:
99.30.8.23|salesd|q1salesd
99.30.8.24|salesd|q2salesd
salesSrv|salesq|q3salesd
When editing the /etc/likewise/vhostmap file, note the following:
• All VIF|oldShareName pairs must be unique.
• The following characters cannot be used in a share name: “ / \ | [ ] < > + : ; , ? * =
• Share names are case insensitive; two names that differ only in case are considered duplicates.
• The oldShareName and newShareName do not need to exist when creating the file; however,
they must exist for a connection to be established to the share.
• If a client specifies a share name that is not in the file, the share name will not be translated.
• Use care when assigning share names longer than 12 characters; some clients impose a
12-character limit on share names.
• Verify that the IP addresses specified in the file are valid and that Vhost names can be resolved
to an IP address. IP addresses must be in IPv4 format, which limits an address to 15 characters.
IMPORTANT: When you update the vhostmap file, the changes take effect a few minutes after
the map is saved. If a client attempts a connection before the changes are in effect, the previous
map settings will be used. To avoid any delays, make your changes to the file when the SMB
service is down.
After creating or updating the vhostmap file, copy the file manually to the other servers in the
cluster.
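Before the copied file goes live, the entries can be checked against the editing rules above. The following is an illustrative sketch only (it writes a sample file under /tmp; the real file is /etc/likewise/vhostmap, and these checks are not an HP-supplied tool):

```shell
# Sketch: validate vhostmap entries against the rules in this section.
# A sample file is written to /tmp for illustration only.
cat > /tmp/vhostmap <<'EOF'
99.30.8.23|salesd|q1salesd
99.30.8.24|salesd|q2salesd
salesSrv|salesq|q3salesd
EOF

# Rule: all VIF|oldShareName pairs must be unique
# (share names are compared case-insensitively).
dups=$(awk -F'|' '{print tolower($1 "|" $2)}' /tmp/vhostmap | sort | uniq -d)
[ -z "$dups" ] && echo "OK: no duplicate VIF|oldShareName pairs"

# Rule: some clients reject share names longer than 12 characters.
awk -F'|' 'length($3) > 12 {print "WARN: long share name: " $3}' /tmp/vhostmap

# Rule: every mapping line must have exactly three |-separated fields.
awk -F'|' 'NF != 3 {print "ERROR: malformed line " NR}' /tmp/vhostmap
```

A script like this can be run on each server after the file is copied, before restarting the SMB service.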
SMB clients
SMB clients access shares on the StoreAll software cluster in the same way they access shares on
a Windows server.
Viewing quota information
When user or group quotas are set on a file system exported as an SMB share, users accessing
the share can see the quota information on the Quotas tab of the Properties dialog box. Users
cannot modify quota settings from the client end.
SMB users cannot view directory tree quotas.
Differences in locking behavior
When SMB clients access a share from different servers, as in the StoreAll software environment,
the behavior of byte-range locks differs from the standard Windows behavior, where clients access
a share from the same server. You should be aware of the following:
• Zero-length byte-range locks acquired on one file serving node are not observed on other file
serving nodes.
• Byte-range locks acquired on one file serving node are not enforced as mandatory on other
file serving nodes.
• If a shared byte-range lock is acquired on a file opened with write-only access on one file
serving node, that byte-range lock will not be observed on other file serving nodes. ("Write-only
access" means the file was opened with GENERIC_WRITE but not GENERIC_READ access.)
• If an exclusive byte-range lock is acquired on a file opened with read-only access on one file
serving node, that byte-range lock will not be observed on other file serving nodes. ("Read-only
access" means the file was opened with GENERIC_READ but not GENERIC_WRITE access.)
SMB shadow copy
Users who have accidentally lost or changed a file can use the SMB shadow copy feature to retrieve
or copy the previous version of the file from a file system snapshot. StoreAll software supports SMB
shadow copy operations as follows.
Access Control Lists (ACLs)
StoreAll SMB shadow copy behaves in the same manner as Windows shadow copy with respect
to ACL restoration. When a user restores a deleted file or folder using SMB shadow copy, the
ACLs applied on the individual files or folders are not restored. Instead, the files and folders inherit
the permissions from the root of the share or from the parent directory where they were restored.
When a user restores an existing file or folder with SMB shadow copy, the ACLs applied on the
individual file or folder are likewise not restored; they remain as they were before the restore.
Restore operations
If a file has been deleted from a directory that has Previous Versions, the user can recover a previous
version of the file by performing a Restore of the parent directory. However, the Properties of the
restored file will no longer list those Previous Versions. This condition is due to the StoreAll snapshot
infrastructure; after a file is deleted, a new file in the same location is a new inode and will not
have snapshots until a new snapshot is subsequently created. However, all pre-existing previous
versions of the file continue to be available from the Previous Versions of the parent directory.
For example, folder Fold1 contains files f1 and f2. There are two snapshots of the folder at
timestamps T1 and T2, and the Properties of Fold1 show Previous Versions T1 and T2. The
Properties of files f1 and f2 also show Previous Versions T1 and T2 as long as these files have
never been deleted.
If the file f1 is now deleted, you can restore its latest saved version from Previous Version T2 on
Fold1.
From that point on, the Previous Versions of \Fold1\f1 no longer show timestamps T1 and T2.
However, the Previous Versions of \Fold1 continue to show T1 and T2, and the T1 and T2 versions
of file f1 continue to be available from the folder.
Windows client behavior
Users must have full access on files and folders to restore them with SMB shadow copy. If the
user does not have adequate permission, an error appears and the user is prompted to skip that
file or folder.
After the user skips the file or folder, the restore operation may or may not continue depending on
the Windows client being used. For Windows Vista, the restore operation continues by skipping
the folder or file. For other Windows clients (Windows 2003, XP, 2008), the operation stops
abruptly or gives an error message. Testing has shown that Windows Vista is an ideal client for
SMB shadow copy support. StoreAll software does not have any control over the behavior of other
clients.
NOTE: HP recommends that the share root is not at the same level as the file system root, and is
instead a subdirectory of the file system root. This configuration reduces access and other
permissions-related issues, as there are many system files (such as lost+found, quota subsystem
files, and so on) at the root of the file system.
SMB shadow copy restore during node failover
If a node fails over while an SMB shadow copy restore is in progress, the user may see a disruption
in the restore operation.
After the failover is complete, the user must skip the file that could not be accessed. The restore
operation then proceeds. The file will not be restored and can be manually copied later, or the
user can cancel the restore operation and then restart it.
Permissions in a cross-protocol SMB environment
The manner in which the SMB server handles permissions affects the use of files by both Windows
and Linux clients. Following are some considerations.
How the SMB server handles UIDs and GIDs
The SMB server provides a true Windows experience for Windows users. Consequently, it must
be closely aligned with Windows in the way it handles permissions and ownership on files.
Windows uses ACLs to control permissions on files. The SMB server puts a bit-for-bit copy of the
ACLs on the Linux server (in the files on the StoreAll file system), and validates file access through
these permissions.
ACLs are tied to Security Identifiers (SIDs) that uniquely identify users in the Windows environment,
and which are also stored on the file in the Linux server as a part of the ACLs. SIDs are obtained
from the authenticating authority for the Windows client (in StoreAll software, an Active Directory
server). However, Linux does not understand Windows-style SIDs; instead, it has its own permissions
control scheme based on UID/GID and permissions bits (mode bits, sticky bits). Since this is the
native permissions scheme for Linux, the SMB server must make use of it to access files on behalf
of a Windows client; it does this by mapping the SID to a UID/GID and impersonating that UID/GID
when accessing files on the Linux file system.
From a Windows standpoint, all of the security for the StoreAll software-resident files is self-consistent;
Windows clients understand ACLs and SIDs, and understand how they work together to control
access to and security for Windows clients. The SMB server maintains the ACLs as requested by
the Windows clients, and emulates the inheritance of ACLs identically to the way Windows servers
maintain inheritance. This creates a true Windows experience around accessing files from a
Windows client.
This mechanism works well in a pure Windows environment, but (unlike the SMB server) Linux
applications do not understand any permissions mechanisms other than their own. Note that a Linux application
can also use POSIX ACLs to control access to a file; POSIX ACLs are honored by the SMB server,
but will not be inherited or propagated. The SMB server also does not map POSIX ACLs to be
compatible with Windows ACLs on a file.
These permission mechanisms have some ramifications for setting up shares, and for cross-protocol
access to files on a StoreAll system. The details of these ramifications follow.
Permissions, UIDs/GIDs, and ACLs
The SMB server does not attempt to maintain two permission/access schemes on the same file.
The SMB server is concerned with maintaining ACLs, so it performs ACL inheritance and honors
ACLs. The UID/GIDs and permission bits for files on a directory tree are peripheral to this activity,
and are used only as much as necessary to obtain access to files on behalf of a Windows client.
The various cases the SMB server can encounter while accessing files and directories, and what
it does with UID/GID and permission bits in that access, are considered in the following sections.
Pre-existing directories and files
A pre-existing Linux directory will not have ACLs associated with it. In this case, the SMB server
will use the permission bits and the mapped UID/GID of the SMB user to determine whether it has
access to the directory contents. If the directory is written by the SMB server, the inherited ACLs
from the directory tree above that directory (if there are any) will be written into the directory so
future SMB access will have the ACLs to guide it.
Pre-existing files are treated like pre-existing directories. The SMB server uses the UID/GID of the
SMB user and the permission bits to determine the access to the file. If the file is written to, the
ACLs inherited from the containing directory for the file are applied to the file using the standard
Windows ACL inheritance rules.
Working with pre-existing files and directories
Pre-existing file treatment has ramifications for cross-protocol environments. If, for example, files
are deposited into a directory tree using NFS and then accessed using SMB clients, the directory
tree will not have ACLs associated with it, and access to the files will be moderated by the NFS
UID/GID and permissions bits. If those files are then modified by an SMB client, they will take on
the UID/GID of the SMB client (the new owner) and the NFS clients may lose access to those files.
New directories and files
New directories created in a tree by the Windows client inherit the ACLs of the parent directory.
The ACLs are created with the UID/GID of the Windows user (the UID/GID that the SID for the
Windows user is mapped to), and the directories are given a Linux permission bit mask of 700.
To Linux applications (which do not understand the Windows ACLs), this means that the owner
has read, write, and execute permissions, and the group and everyone else have no access.
New files are handled the same way as directories. The files inherit the ACLs of the parent directory
according to the Windows rules for ACL inheritance, and they are created with a UID/GID of the
Windows user as mapped from the SID. They are assigned a permissions mask of 700.
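The effect of that 700 mask on the Linux side can be inspected with standard tools. This is a generic illustration on a temporary file, not a StoreAll-specific procedure:

```shell
# Illustration: how a 700 permission mask appears to Linux applications.
touch /tmp/acl_demo_file
chmod 700 /tmp/acl_demo_file
# Owner has read/write/execute; group and others have no access.
stat -c '%A %a' /tmp/acl_demo_file   # prints: -rwx------ 700
```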
Working with new files and directories
The inheritance rules of Windows assume that all directories are created on a Windows machine,
where they inherit ACLs from their parent; the top level of a directory tree (the root of the file system)
is assigned ACLs by the file system formatting process from the defaults for the system.
This process is not in place on file serving nodes. Instead, when you create a share on a node,
the share does not have any inherited ACLs from the root of the file system in which it is created.
This leads to strange behavior when a Windows client attempts to use permissions to control access
to a file in such a directory. The usual CREATOR/OWNER and EVERYBODY ACLs (which are
part of the typical inherited Windows ACL set) do not exist on the containing directory for
the share, and are not inherited downward into the share directory tree. For true Windows-like
behavior, the creator of a share must access the root of the share and set the desired ACLs on it
manually (using Windows Explorer or a command line tool such as ICACLS). This process is
somewhat unnatural for Linux administrators, but should be fairly normal for Windows administrators.
Generally, the administrator will need to create a CREATOR/OWNER ACL that is inheritable on
the share directory, and then create an inheritable ACL that controls default access to the files in
the directory tree.
Changing the way SMB inherits permissions on files accessed from Linux applications
To prevent the SMB server from modifying file permissions on directory trees that a user wants to
access from Linux applications (so keeping permissions other than 700 on a file in the directory
tree), a user can set the setgid bit in the Linux permissions mask on the directory tree. When the
setgid bit is set, the SMB server honors that bit, and any new files in the directory inherit the
parent directory's permission bits and group ownership. This maintains group access
for new files created in that directory tree until setgid is turned off in the tree. That is, Linux-style
permissions semantics are kept on the files in that tree, allowing SMB users to modify files in the
directory while NFS users maintain their access through their normal group permissions.
For example, if a user wants all files in a particular tree to be accessible by a set of Linux users
(say, through NFS), the user should set the setgid bit (through local Linux mechanisms) on the
top level directory for a share (in addition to setting the desired group permissions, for example
770). Once that is done, new files in the directory will be accessible to the group that creates the
directory and the permission bits on files in that directory tree will not be modified by the SMB
server. Files that existed in the directory before the setgid bit was set are not affected by the
change in the containing directory; the user must manually set the group and permissions on files
that already existed in the directory tree.
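The setup described above can be reproduced with standard Linux commands. The directory path below is a placeholder used for illustration; on a real system it would be the top-level directory of the share:

```shell
# Sketch: apply group permissions (770) plus the setgid bit to a
# share's top-level directory (placeholder path).
mkdir -p /tmp/demo_share_root
chmod 2770 /tmp/demo_share_root
# The 's' in the group position of the mode string confirms setgid is set.
stat -c '%A %a' /tmp/demo_share_root   # prints: drwxrws--- 2770
```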
This capability can be used to facilitate cross-protocol sharing of files. Note that this does not affect
the permissions inheritance and settings on the SMB client side. Using this mechanism, a Windows
user can set the files to be inaccessible to the SMB users of the directory tree while opening them
up to the Linux users of the directory tree.
Troubleshooting SMB
Changes to user permissions do not take effect immediately
The SMB implementation maintains an authentication cache that is set to four hours. If a user is
authenticated to a share, and the user's permissions are then changed, the old permissions will
remain in effect until the cache expires, at four hours after the authentication. The next time the
user is encountered, the new, correct value will be read and written to the cache for the next four
hours.
This is not a common occurrence. However, to avoid the situation, use the following guidelines
when changing user permissions:
• After a user is authenticated to a share, wait four hours before modifying the user's permissions.
• Conversely, it is safe to modify the permissions of a user who has not been authenticated in
the previous four hours.
Robocopy errors occur during node failover or failback
If Robocopy is in use on a client while a file serving node is failed over or failed back, the
application repeatedly retries to access the file and reports the error The process cannot
access the file because it is being used by another process. These errors
occur for 15 to 20 minutes. The client's copy will then continue without error if the retry timeout
has not expired. To work around this situation, take one of these steps:
• Stop and restart processes on the affected file serving node:
# /opt/likewise/bin/lwsm stop lwreg && /etc/init.d/lwsmd stop
# /etc/init.d/lwsmd start && /opt/likewise/bin/lwsm start srvsvc
• Power down the file serving node before failing it over, and do failback operations only during
off hours.
The following xcopy and robocopy options are recommended for copying files from a client to
a highly available SMB server:
xcopy: include the option /C; in general, /S /I /Y /C are good baseline options.
robocopy: include the option /ZB; in general, /S /E /COPYALL /ZB are good baseline
options.
Copy operations interrupted by node failback
If a node failback occurs while xcopy or robocopy is copying files to an SMB share, the copy
operation might be interrupted and need to be restarted.
Active Directory users cannot access SMB shares
If any AD user is set to UID 0 in Active Directory, you will not be able to connect to SMB shares
and errors will be reported. Be sure to assign a UID other than 0 to your AD users.
UID for SMB Guest account conflicts with another user
If the UID for the Guest account conflicts with another user, you can delete the Guest account and
recreate it with another UID.
Use the following command to delete the Guest account, and enter yes when you are prompted
to confirm the operation:
/opt/likewise/bin/lw-del-user Guest
Recreate the Guest account, specifying a new UID:
/opt/likewise/bin/lw-add-user --force --uid <UID_number> Guest
To have the system generate the UID, omit the --uid <UID_number> option.
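Before recreating the account, it can help to confirm whether a candidate UID is already taken. This is a generic sketch using the standard getent tool; the UID value is a placeholder, not one from this guide:

```shell
# Sketch: check whether a candidate UID is free before assigning it to Guest.
candidate_uid=10123   # placeholder value
if getent passwd "$candidate_uid" >/dev/null; then
    echo "UID $candidate_uid is already in use"
else
    echo "UID $candidate_uid is free"
fi
```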
8 Using FTP
The FTP feature allows you to create FTP file shares for data stored on the cluster. Clients access
the FTP shares using standard FTP and FTPS protocol services.
IMPORTANT: Before configuring FTP, select an authentication method (either Local Users or Active
Directory). See “Configuring authentication for SMB, FTP, and HTTP” (page 61) for more information.
An FTP configuration consists of one or more configuration profiles and one or more FTP shares.
A configuration profile defines global FTP parameters and specifies the file serving nodes on which
the parameters are applied. The vsftpd service starts on these nodes when the cluster services
start. Only one configuration profile can be in effect on a particular node.
An FTP share defines parameters such as access permissions and lists the file system to be accessed
through the share. Each share is associated with a specific configuration profile. The share
parameters are added to the profile's global parameters on the file serving nodes specified in the
configuration profile.
You can create multiple shares having the same physical path, but with different sets of properties,
and then assign users to the appropriate share. Be sure to use a different IP address or port for
each share.
You can configure and manage FTP from the GUI or CLI.
Best practices for configuring FTP
When configuring FTP, follow these best practices:
• If an SSL certificate will be required for FTPS access, add the SSL certificate to the cluster
before creating the shares. See “Managing SSL certificates” (page 174) for information about
creating certificates in the format required by StoreAll software and then adding them to the
cluster.
• When configuring a share on a file system, the file system must be mounted.
• If the directory path to the share includes a subdirectory, be sure to create the subdirectory
on the file system and assign read/write/execute permissions to it. (StoreAll software does
not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the
share path.)
• For High Availability, when specifying IP addresses for accessing a share, use IP addresses
for VIFs having VIF backups. See the administrator guide for your system for information about
creating VIFs.
• The allowed ports are 21 (FTP) and 990 (FTPS).
• Uploads and downloads to an anonymous share (anonymous=true) can only be done by
nonusers or by the ftp user, which is the default user of an anonymous share. For information
about commands for anonymous shares, see “FTP and FTPS commands for anonymous shares”
(page 111).
Managing FTP from the GUI
Use the Add New File Share Wizard to configure FTP. You can then view or modify the configuration
as necessary.
Configuring FTP
On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click
Add to start the Add New File Share Wizard.
On the File Share page, select FTP as the File Sharing Protocol. Select the file system, which must
be mounted, and enter the default directory path for the share. If the directory path includes a
subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute
permissions to it.
NOTE: StoreAll software does not create the subdirectory if it does not exist, and for anonymous
shares only, adds a /pub/ directory to the share path instead. All files uploaded through the
anonymous user will then be placed in that directory. The /pub/ directory is not created for a
non-anonymous share.
On the Config Profile page, select an existing configuration profile or create a new profile,
specifying a name and defining the appropriate parameters.
On the Host Servers page, select the servers that will host the configuration profile.
On the Settings page, configure the FTP parameters that apply to the share. The parameters are
added to the file serving nodes hosting the configuration profile. Also enter the IP addresses and
ports that clients will use to access the share. For High Availability, specify the IP address of a VIF
having a VIF backup.
NOTE: The allowed ports are 21 (FTP) and 990 (FTPS).
NOTE: If you need to allow NAT connections to the share, use the Modify FTP Share dialog box
after the share is created.
On the Users page, specify the users to be given access to the share. If no users are specified on
this page, then any user who can be authenticated according to your StoreAll authentication settings
for the cluster can access the share as read-write. Users must also have access permissions at the
file system level to read or write. If any users are specified on this page, only those users may
access the share and all other users are denied regardless of their file system permissions.
IMPORTANT: Ensure that all users who are given read or write access to shares have sufficient
access permissions at the file system level for the directories exposed as shares.
To define permissions for a user, click Add to open the Add User to Share dialog box.
Managing the FTP configuration
Select File Shares > FTP from the Navigator to display the current FTP configuration. The FTP Config
Profiles panel lists the profiles that have been created. The Shares panel shows the FTP shares
associated with the selected profile.
Use the buttons on the panels to modify or delete the selected configuration profile or share. You
can also add another FTP share to the selected configuration profile. Use the Modify FTP Share
dialog box if you need to allow NAT connections on the share.
Managing FTP from the CLI
FTP is managed with the ibrix_ftpconfig and ibrix_ftpshare commands. For detailed
information, see the HP StoreAll Storage CLI Reference Guide.
Configuring FTP
To configure FTP, first add a configuration profile, and then add an FTP share:
Add a configuration profile:
ibrix_ftpconfig -a PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation
marks, such as "passive_enable=TRUE,maxclients=200". To see a list of available settings
for the profile, use the following command:
ibrix_ftpconfig -L
Add an FTP share:
ibrix_ftpshare -a SHARENAME -c PROFILENAME -f FSNAME -p dirpath -I
IP-Address:port [-u USERLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation
marks, such as "browseable=true,readonly=true". For the -I option, use a semicolon to
separate the IP address:port settings and enclose the settings in quotation marks, such as
"ip1:port1;ip2:port2;...". To list the available settings for the share, use the following
command:
ibrix_ftpshare -L
Managing the FTP configuration
Use the following commands to view, modify, or delete the FTP configuration. In the commands,
use -v 1 to display detailed information.
View configuration profiles:
ibrix_ftpconfig -i -h HOSTLIST [-v level]
Modify a configuration profile:
ibrix_ftpconfig -m PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
Delete a configuration profile:
ibrix_ftpconfig -d PROFILENAME
View an FTP share:
ibrix_ftpshare -i SHARENAME -c PROFILENAME [-v level]
List FTP shares associated with a specific profile:
ibrix_ftpshare -l -c PROFILENAME [-v level]
List FTP shares associated with a specific file system:
ibrix_ftpshare -l -f FSNAME [-v level]
Modify an FTP share:
ibrix_ftpshare -m SHARENAME -c PROFILENAME [-f FSNAME -p dirpath] -I
IP-Address:port [-u USERLIST] [-S SETTINGLIST]
Delete an FTP share:
ibrix_ftpshare -d SHARENAME -c PROFILENAME
The vsftpd service
When the cluster services are started on a file serving node, the vsftpd service starts automatically
if the node is included in a configuration profile. Similarly, when the cluster services are stopped,
the vsftpd service also stops. If necessary, use the Linux command ps -ef | grep vsftpd
to determine whether the service is running.
If you do not want vsftpd to run on a particular node, remove the node from the configuration
profile.
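The ps-based check suggested above can be wrapped in a short script, sketched here for illustration (not an HP-supplied tool):

```shell
# Sketch: report whether the vsftpd service is running on this node,
# using the same ps-based check the guide suggests.
if ps -ef | grep -v grep | grep -q vsftpd; then
    echo "vsftpd is running"
else
    echo "vsftpd is not running"
fi
```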
IMPORTANT: For FTP share access to work properly, the vsftpd service must be started by
StoreAll software. Ensure that the chkconfig of vsftpd is set to OFF (chkconfig vsftpd
off).
Starting or stopping the FTP service manually
Start the FTP service:
/usr/local/ibrix/ftpd/etc/vsftpd start /usr/local/ibrix/ftpd/hpconf/
Stop the FTP service:
/usr/local/ibrix/ftpd/etc/vsftpd stop /usr/local/ibrix/ftpd/hpconf/
Restart the FTP service:
/usr/local/ibrix/ftpd/etc/vsftpd restart /usr/local/ibrix/ftpd/hpconf/
NOTE: When the FTP configuration is changed with the GUI or CLI, the FTP daemon is restarted
automatically.
Accessing shares
Clients can access an FTP share by specifying a URL in their Web browser, such as Internet Explorer.
In the following URLs, IP_address:port is the IP (or virtual IP) and port configured for the share.
• For a share configured with the anonymous parameter set to true, use the following URL:
ftp://IP_address:port/
• For a share configured with a userlist and having the anonymous parameter set to false,
use the following URL:
ftp://<ADDomain\username>@IP_address:port/
NOTE: When a file is uploaded into an FTP share, the file is owned by the user who uploaded
the file to the share.
If a user uploads a file to an FTP share and specifies a subdirectory that does not already exist,
the subdirectory will not be created automatically. Instead, the user must explicitly use the ftp
mkdir command to create the subdirectory. The permissions on the new directory are set to 755. If
the anonymous user created the directory, it is owned by ftp:ftp. If a non-anonymous user
created the directory, the directory is owned by user:group.
You can also use curl commands to access an FTP share. (The default SSL port is 990.)
You can access the shares as follows:
• As an anonymous share. See “FTP and FTPS commands for anonymous shares” (page 111).
• As a non-anonymous share. See “FTP and FTPS commands for non-anonymous shares”
(page 112).
• From any Fusion Manager that has FTP clients. See “FTP and FTPS commands for Fusion
Manager” (page 113).
FTP and FTPS commands for anonymous shares
This section provides the following FTP and FTPS commands for anonymous shares. All commands
should be entered on one line.
Table 1 Upload a file by using the FTP protocol for anonymous shares
• Files can be uploaded by any user:
curl -T <filename> ftp://IP_address/pub/
• You must provide the default user name and password (“ftp” for the username and “ftp” for the password):
curl -T <filename> ftp://IP_address/pub/ -u ftp:ftp
Table 2 Upload a file by using the FTPS protocol for anonymous shares
• You do not need to specify the user name and password:
curl --ftp-ssl-reqd --cacert <certificate file> -T <filename> ftp://IP_address:990/pub/
• You must provide the default user name and password (“ftp” for the username and “ftp” for the password):
curl --ftp-ssl-reqd --cacert <certificate file> -T <filename> ftp://IP_address:990/pub/ -u ftp:ftp
Table 3 Download a file by using the FTP protocol
• You do not need to specify the user name and password:
curl ftp://IP_address/pub/server.pem -o <path to download>\<filename>
• You must provide the default user name and password (“ftp” for the username and “ftp” for the password):
curl ftp://IP_address/pub/server.pem -o <path to download>\<filename> -u ftp:ftp
Table 4 Download a file by using the FTPS protocol
• You do not need to specify the user name and password:
curl --ftp-ssl-reqd --cacert <certificate file> ftp://IP_address:990/pub/<filename> -o <filename>
• You must provide the default user name and password (“ftp” for the username and “ftp” for the password):
curl --ftp-ssl-reqd --cacert <certificate file> ftp://IP_address:990/pub/<filename> -o <filename> -u ftp:ftp
The following example shows a web browser accessing an anonymous share.
FTP and FTPS commands for non-anonymous shares
This section provides the following FTP and FTPS commands for non-anonymous shares. All
commands should be entered on one line.
Table 5 Upload a file by using the FTP protocol for domain user
• You do not need to specify the domain:
curl -T <filename> ftp://IP_address/ -u USER:PASSWORD
• You must specify the domain:
curl -T <filename> ftp://IP_address/ -u DOMAIN\\USER:PASSWORD
Table 6 Upload a file by using the FTPS protocol for local user
• You need to supply the user name and password but not the domain:
curl --ftp-ssl-reqd --cacert <certificate file> -T <filename> ftp://IP_address:990/pub/ -u USER:PASSWORD
• You must specify the domain, such as for an Active Directory user:
curl --ftp-ssl-reqd --cacert <certificate file> -T <filename> ftp://IP_address/ -u DOMAIN\\USER:PASSWORD
Table 7 Download a file by using the FTP protocol for domain user
• You do not need to specify the domain:
curl ftp://IP_address/<filename> -o <path to download>\<filename> -u USER:PASSWORD
• You must specify the domain, such as for an Active Directory user:
curl ftp://IP_address/<filename> -o <path to download>\<filename> -u DOMAIN\\USER:PASSWORD
Table 8 Download a file by using the FTPS protocol for local user
Use this command when you do not need to specify the domain:
curl --ftp-ssl-reqd --cacert <certificate file> ftp://IP_address:990/pub/<filename> -o <path to download>\<filename> -u USER:PASSWORD
Use this command when you must specify the domain:
curl --ftp-ssl-reqd --cacert <certificate file> ftp://IP_address:990/pub/<filename> -o <path to download>\<filename> -u DOMAIN\\USER:PASSWORD
FTP and FTPS commands for Fusion Manager
Shares can be accessed from any Fusion Manager host that has an FTP client:
ftp <Virtual_IP>
For FTPS, use the following command from the active Fusion Manager:
lftp -u <user_name> -p <ssl port> -e 'set ftp:ssl-force true' <share_IP>
9 Using HTTP
The HTTP feature allows you to create HTTP file shares for data stored on the cluster. Clients access
the HTTP shares using standard HTTP and HTTPS protocol services.
IMPORTANT: Before configuring HTTP, select an authentication method (either Local Users or
Active Directory). See “Configuring authentication for SMB, FTP, and HTTP” (page 61) for more
information.
The HTTP configuration consists of a configuration profile, a virtual host, and an HTTP share. A
profile defines global HTTP parameters that apply to all shares associated with the profile. The
virtual host identifies the IP addresses and ports that clients will use to access shares associated
with the profile. A share defines parameters such as access permissions and lists the file system to
be accessed through the share.
HTTP is administered from the GUI or CLI. On the GUI, select HTTP from the File Shares list in the
Navigator. The HTTP Config Profiles panel lists the existing configuration profiles and the virtual
hosts configured on the selected profile.
HTTP share types
StoreAll software provides three types of HTTP shares:
Table 9 Types of HTTP shares
Standard HTTP share: This type of share is used to access file system data.
HTTP-StoreAll Representational State Transfer (REST) API share in file-compatible mode: This type of share provides programmatic access to user-stored files and their metadata. The metadata is stored in the HP StoreAll Express Query database in the StoreAll cluster, which provides fast query access to metadata without scanning the file system.
HTTP-StoreAll REST API share in object mode: This type of share provides concepts similar to the OpenStack Object Storage API to support programmatic access to user-stored files. Users create containers within each account to hold objects (files), and the user's string identifier for the object maps to a hashed path name on the file system.
Uses for the StoreAll REST API
Although the StoreAll REST API is not generally intended for your end users, it lets you create
applications using the StoreAll file systems and Express Query.
You can develop applications that:
•
Gather user input and send requests programmatically to StoreAll
•
Digest responses from StoreAll and present results to the user in a readable format
•
Can be coded in any language (for example, Java or Python) on any client operating system, such as Windows or Linux
Certain tools let you send ad-hoc direct requests and show responses:
•
Web browsers with add-ons, curl, and others. You must enter request data in the StoreAll REST API syntax.
•
Sample Java client application provided by HP to guide customer developers. See “Obtaining the HP StoreAll REST API Sample Client Application” (page 116) for information on how to access the sample Java client.
Features for each file share mode
The following table lists the supported features for each type of file share mode:
Table 10 Supported features for each share mode
•
File upload/download: supported by standard HTTP shares (no REST API), file-compatible mode, and object mode.
•
File system hierarchy stored as-is on a StoreAll file system: supported by standard HTTP shares and file-compatible mode.
•
Access by other protocols, such as SMB and NFS: supported by standard HTTP shares and file-compatible mode. Object mode allows no access by other protocols.
•
Account and container partitioning of objects: supported by object mode only.
•
Objects (files) that do not fit into a file system hierarchy, stored opaquely on the StoreAll file system: supported by object mode only.
•
Express Query metadata DB for system/custom metadata queries, custom metadata assignment, and WORM/retention management: supported by file-compatible mode only. Object mode has no Express Query support.
Best practices for HTTP REST API shares
Keep in mind the following best practices for file-compatible and object mode shares:
•
Do not put file-compatible and object mode shares on the same file system.
•
Avoid putting object mode shares on retention-enabled file systems unless the auto-commit feature is needed. The object mode API does not include retention management features. Managing WORM or retention states must be performed outside the API, as described in “Managing data retention” (page 196).
•
To allow accounts to be created by their owners through the API, you must assign read, write, and execute permissions to the share’s directory path and all parent directories up to the file system mount point. For example, if your share’s directory path is /objFS1/objStore, and the file system objFS1 is mounted at /objFS1, both directories must be set to read, write, and execute permissions.
•
Do not set the directory path of a file-compatible mode share to a subdirectory of the mount point. Make the mount point directory the directory path for the share.
Obtaining the HP StoreAll REST API Sample Client Application
The HP StoreAll product includes a REST API to provide programmatic access to user-stored files
and their metadata. HP provides a sample Java client application to guide customer developers.
To obtain the sample Java client:
1. Go to the StoreAll download drivers and software page: http://www.hp.com/support/StoreAllDownloads.
2. Select your product and then select the operating system.
3. Locate and download the HP StoreAll REST API sample client application.
4. Extract the contents of the file into a specific folder. Before you unzip the download, see the HP StoreAll REST API Sample Client Application Design Reference that downloaded with the Java client for specific instructions.
At publication time of the HP StoreAll Storage File System User Guide, the Java sample client covers only file-compatible mode, and the HP StoreAll REST API Sample Client Application Design Reference uses terminology from StoreAll version 6.2, such as referring to file-compatible mode as REST Object API.
Checklist for creating HTTP shares
Use the following checklist for creating HTTP shares.
NOTE: Some of the steps listed in the following table apply only to HTTP-StoreAll REST API shares. If you are creating standard HTTP shares, you can skip steps 2 and 3.
Table 11 Checklist for creating HTTP shares
Step 1 (applies to all HTTP share types): Make sure the file system is mounted. See “Creating and mounting file systems” (page 14).
Step 2 (REST API shares only; required for file-compatible shares, optional for object mode shares): Enable data retention on the mounted file system if it was not enabled when the file system was created. See “Enabling file systems for data retention” (page 198).
Step 3 (REST API shares only; required for file-compatible shares): Enable Express Query on the mounted file system (file-compatible mode only) if it was not enabled when the file system was created. See “Enabling file systems for data retention” (page 198).
IMPORTANT: Do not enable Express Query on object mode shares or on any existing file system that has any object mode API shares defined.
Step 4 (applies to all HTTP share types): Create or select an existing HTTP config profile through the GUI or through the CLI (ibrix_httpconfig). See “Creating HTTP shares from the CLI” (page 131) or “Creating HTTP shares from the GUI” (page 118).
IMPORTANT: Each server can have only one HTTP profile.
Step 5 (applies to all HTTP share types): Create or select an existing HTTP virtual host through the GUI or through the CLI (ibrix_httpvhost). See “Creating HTTP shares from the CLI” (page 131) or “Creating HTTP shares from the GUI” (page 118).
Step 6 (applies to all HTTP share types): Create the HTTP share through the GUI or by using the CLI (ibrix_httpshare). See “Creating HTTP shares from the CLI” (page 131) or “Creating HTTP shares from the GUI” (page 118).
Best practices for configuring HTTP
When configuring HTTP, follow these best practices:
•
If an SSL certificate will be required for HTTPS access, add the SSL certificate to the cluster before creating the shares. See “Managing SSL certificates” (page 174) for information about creating certificates in the format required by StoreAll software and then adding them to the cluster.
•
When configuring a share on a file system, the file system must be mounted.
•
If the directory path to the share includes a subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute permissions to it. (StoreAll software does not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the share path.)
•
Ensure that all users who are given read or write access to HTTP shares have sufficient access permissions at the file system level for the directories exposed as shares.
•
For High Availability, when specifying IP addresses for accessing a share, use IP addresses for VIFs that have VIF backups. See the administrator guide for your system for information about creating VIFs.
Object mode shares and data retention
You can choose to create object mode API shares on regular file systems or on retention-enabled
file systems. Since the object mode API offers no functionality to manage WORM or retained files,
you would normally create them on regular file systems.
If you choose to put an object mode API share on a retention-enabled file system, and you enable auto-commit on the file system, objects become WORM automatically after the auto-commit period of no file system activity on the object. A WORM object cannot be modified or replaced via the API. If a default retention period is also defined, the object becomes retained at the same time and cannot be deleted via the API for the duration of the default retention period. Such a file can be removed administratively using the ibrix_reten_adm CLI command.
Creating HTTP shares from the GUI
Use the Add New File Share Wizard to create the HTTP share. You can then view or modify the
configuration as necessary.
To create HTTP shares, see the appropriate procedure:
Table 12 Creating HTTP shares from the GUI
To create a standard HTTP share, see “Creating standard HTTP shares” (page 118).
To create a StoreAll REST API share, see “Creating StoreAll REST API shares” (page 123).
Creating standard HTTP shares
To create a standard HTTP share:
1. On the GUI, select File Shares from the Navigator to open the File Shares panel, and then
click Add to start the Add New File Share Wizard.
2. On the File Share page, select HTTP from the File Sharing Protocol menu.
3.
Enter the share name and directory path. Then, click Next.
4.
Configure a new profile on the Config Profile dialog box, specifying a name and the appropriate parameters for the profile. Select host servers on the Host Servers page. Click Next.
5.
On the Virtual Host page, enter the vhost name. Select the false option from the Enable StoreAll REST API menu. Fill in the remaining details: SSL certificate, domain, and IP address. Click Next.
6.
On the Settings page, enter the URL path and set the appropriate parameters for the share. Click Next. Note the following:
•
When specifying the URL Path, do not include http://<IP address> or any variation of this in the URL path. For example, /reports/ is a valid URL path. The beginning and ending slashes of the path are optional. For example, /reports/, reports, and /reports are valid entries and will be stored as /reports/.
•
Default Permissions: New files uploaded via the HTTP share are given default permissions in standard UNIX octal notation. The owner user and group receive read-write permissions (the 77) and everyone else receives read-only permission (the 5). This default permission is ignored when creating directories, which are set to 0755.
•
When the WebDAV feature is enabled for a standard HTTP share, the share becomes a readable and writable medium with locking capability. The primary user can make edits, while other users can only view the resource in read-only mode. The primary user must unlock the resource before another user can make changes.
•
Set the Anonymous field to false only if you want to restrict access to specific users.
•
Browseable: Keep in mind the following for a standard HTTP share:
◦
If Browseable is set to true, the user can issue HTTP GET requests for any directory path within the share's directory tree, and that directory's listing of files and subdirectories will be returned in the HTTP response. An error will be returned if the user issuing the HTTP request does not have file system permission to navigate down the path to that directory and read its contents.
◦
If Browseable is set to false, a GET request for a directory path will always return an error, regardless of the user’s permissions.
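The URL Path rule described above can be sketched as a small shell function. This is only an illustration of the stated normalization behavior (slashes optional on input, stored form has both), not StoreAll code:

```shell
# Sketch of the URL Path normalization described above: leading and
# trailing slashes are optional on input, and the stored form has both.
canonical_url_path() {
  local p="$1"
  p="${p#/}"    # drop a leading slash if present
  p="${p%/}"    # drop a trailing slash if present
  printf '/%s/\n' "$p"
}

canonical_url_path reports      # prints /reports/
canonical_url_path /reports/    # prints /reports/
```

All three input forms from the text (/reports/, reports, and /reports) produce the same stored value, /reports/.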
7.
On the Summary page, ensure that the correct parameters are displayed. Ensure that:
•
In the Virtual Host summary section, the value of IBRIX REST API is displayed as Disabled.
•
In the File Share summary section, the value of IBRIX REST API Mode is displayed as disabled.
8.
Click Finish.
When the wizard is complete, users can access the share from a browser. For example, if you
configured the share with the anonymous user, specified 99.226.50.92 as the IP address on the
Create Vhost dialog box, and specified /reports/ as the URL path on the Add HTTP Share
dialog box, users can access the share using the following URL:
http://99.226.50.92/reports/
The users will see the directory listing of the base URL path directory of the share (if the browseable property of the share is set to true), and can open and save files. For more information about accessing shares and uploading files, see “Accessing standard and file-compatible mode HTTP shares” (page 133).
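Putting the wizard example values together, the share URL is simply the Vhost IP address followed by the stored URL path. A minimal sketch using the example values above:

```shell
# Compose the share URL from the wizard example values above.
VHOST_IP="99.226.50.92"   # IP address from the Create Vhost dialog box
URL_PATH="/reports/"      # URL path from the Add HTTP Share dialog box
SHARE_URL="http://${VHOST_IP}${URL_PATH}"
echo "$SHARE_URL"         # prints http://99.226.50.92/reports/

# A client could then list the share (if browseable) or fetch a file, e.g.:
# curl "$SHARE_URL"
# curl "${SHARE_URL}somefile.txt" -o somefile.txt
```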
Creating StoreAll REST API shares
Keep in mind the following when creating StoreAll REST API shares in object mode:
•
Do not create file-compatible mode and object mode REST API shares on the same file system. Use separate file systems for each type of REST API share.
•
Do not create an object mode REST API share on any file system where Express Query is enabled. Express Query does not support storing metadata from objects in object mode shares. If Express Query is enabled on a file system with an object mode API share, metadata from the object mode files is ingested incorrectly, causing unusable metadata to be added to the Express Query database. This situation negatively impacts the performance of Express Query for the files outside the object mode share on the same file system that it ingests correctly.
To create a StoreAll REST API share:
1.
On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click Add to start the Add New File Share Wizard.
2.
On the File Share page, select HTTP from the File Sharing Protocol menu. Select the file system, which must be mounted, and enter a share name and the default directory path for the share.
3.
Select an existing profile or configure a new profile on the Config Profile dialog box, specifying a name and the appropriate parameters for the profile.
4.
The Host Servers dialog box displays differently depending on whether you selected an existing profile or are creating a new one. If you selected the option Create a new HTTP Profile, you are prompted to select the file server nodes on which the HTTP service will be active. Only one configuration profile can be in effect on a particular server.
5.
If you selected an existing profile on the Config Profile dialog box, you are shown the hosts defined for that profile.
6.
The Virtual Host dialog box displays differently depending on whether you selected an existing profile or are creating a new one. If you are creating a new profile, the Virtual Host dialog box prompts you to enter additional information. Enter a name for the virtual host. If an HTTP-StoreAll REST API share is to be created, select true from the Enable StoreAll REST API menu. If a standard HTTP share is to be created, select false from the Enable StoreAll REST API menu. Specify an SSL certificate and domain name if used. Also add one or more IP address:port pairs for the virtual host. For High Availability, specify a VIF having a VIF backup.
7.
If you selected an existing profile, the Virtual Host dialog box prompts you to select a pre-existing Vhost or create an HTTP Vhost.
8.
If you already have Vhosts defined, you can select an existing Vhost.
9.
On the Settings page, set the appropriate parameters for the share. Note the following:
•
When specifying the URL Path, do not include http://<IP address> or any variation of this in the URL path. For example, /reports/ is a valid URL path. The beginning and ending slashes of the path are optional. For example, /reports/, reports, and /reports are valid entries and will be stored as /reports/. For REST API shares in file-compatible mode, do not define a URL path of more than one directory level, such as reports/sales; however, your single-directory URL path can correspond to any arbitrarily deep directory path on the StoreAll file system.
•
The StoreAll REST API Mode field on the Settings page is displayed only when Enable StoreAll REST API on the Virtual Host page is set to true (that is, when an HTTP-StoreAll REST API share is to be created). The StoreAll REST API Mode can be selected as File Compatible or Object from the drop-down list, and it defines which mode's syntax will be accepted by this API share. For example, if object mode is selected, HTTP requests using the file-compatible mode syntax will not be understood and will most likely return an error.
•
Default Permissions: For file-compatible shares, new files uploaded via the HTTP share will be given these permissions on the file system. The value is in standard UNIX octal notation, the default giving read-write permission to the owning user and group (the 77) and read-only permission to everyone else (the 5). This default permission is ignored when creating directories, which will always be set to 0755.
For object mode shares, this setting is ignored. Containers (directories, on the file system) are always created with permissions 0700, and access to a container’s objects by other users is controlled at the container level instead (see “Set Container Permission” (page 150)). Permissions cannot be assigned to objects individually.
•
The Enable WebDAV option is greyed out for HTTP-StoreAll REST API shares and is shown as false, because WebDAV is always disabled on StoreAll REST API shares.
•
Set the Anonymous field to false only if you want to restrict access to specific users. The Anonymous field must be set to false when an HTTP-StoreAll REST API share in object mode is to be created.
•
Browseable: Keep in mind the following:
◦
Standard and file-compatible mode API shares:
–
If Browseable is set to true, the user can issue HTTP GET requests for any directory path within the share's directory tree, and that directory's listing of files and subdirectories will be returned in the HTTP response. An error will be returned if the user issuing the HTTP request does not have file system permission to navigate down the path to that directory and read its contents.
–
If Browseable is set to false, a GET request for a directory path will always return an error, regardless of the user’s permissions.
◦
Object mode API shares: This setting has no effect.
10. On the Users page, specify the users to be given access to the share. If no users are specified
on this page, then any user who can be authenticated according to your StoreAll authentication
settings for the cluster can access the share as read-write. Users must also have access
permissions at the file system level to read or write. If any users are specified on this page,
only those users may access the share and all other users are denied regardless of their file
system permissions.
IMPORTANT: Ensure that all users who are given read or write access to shares have sufficient
access permissions at the file system level for the directories exposed as shares.
11. To allow specific users read access, write access, or both, click Add. On the Add Users to
Share dialog box, assign the appropriate permissions to the user. When you complete the
dialog, the user is added to the list on the Users page.
The Summary panel presents an overview of the HTTP configuration. You can go back and modify
any part of the configuration if necessary.
When the wizard is complete, users can access the API HTTP share from a client. See “HTTP-REST
API object mode shares” (page 138) and “HTTP-REST API file-compatible mode shares” (page 152)
for details.
Managing the HTTP configuration
Select File Shares > HTTP from the Navigator to display the current HTTP configuration. The HTTP
Config Profiles panel lists the profiles that have been created. The Vhosts panel shows the virtual
hosts associated with the selected profile.
Use the buttons on the panels to modify or delete the selected configuration profile or virtual host.
To view HTTP shares on the GUI, select the appropriate profile on the HTTP Config Profiles top
panel, and then select the appropriate virtual host from the lower navigator tree. The Shares bottom
panel shows the shares configured on that virtual host. Click Add Share to add another share to
the virtual host. For example, you could create multiple shares having the same physical path, but
with different sets of properties, and then assign users to the appropriate share. You can also have
any number of REST API and regular HTTP shares attached to the same Vhost.
Tuning the socket read block size and file write block size
By default, the socket read block size and file write block size used by Apache are set to 8192
bytes. If necessary, you can adjust the values with the ibrix_httpconfig command. The values
must be between 8 KB and 2 GB.
ibrix_httpconfig -a profile1 -h node1,node2 -S
"wblocksize=<value>,rblocksize=<value>"
You can also set the values on the Modify HTTP Profile dialog box.
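The documented limits (at least 8 KB, at most 2 GB, with an 8192-byte default) can be expressed as a quick shell check. This is only an illustration of the stated bounds, not part of the ibrix_httpconfig command:

```shell
# Validate a proposed wblocksize/rblocksize value against the documented
# limits: at least 8 KB and at most 2 GB.
in_range() {
  [ "$1" -ge $((8 * 1024)) ] && [ "$1" -le $((2 * 1024 * 1024 * 1024)) ]
}

in_range 8192 && echo "default 8192 is valid"
in_range 4096 || echo "4096 is below the 8 KB minimum"
```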
Creating HTTP shares from the CLI
On the command line, HTTP shares are managed by the ibrix_httpconfig,
ibrix_httpvhost, and ibrix_httpshare commands. The ibrix_httpshare command
is also used for creating a StoreAll REST API-enabled HTTP share. For detailed information, see
the HP StoreAll Storage CLI Reference Guide.
Table 13 Creating HTTP shares from the CLI
Step 1: Add a configuration profile.
ibrix_httpconfig -a PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as "keepalive=true,maxclients=200,...". To see a list of available settings for the share, use ibrix_httpconfig -L.
Step 2: Add a virtual host.
ibrix_httpvhost -a VHOSTNAME -c PROFILENAME -I IP-Address:port [-S SETTINGLIST]
To create a virtual host with the REST API enabled, use the ibrixrestapi setting, for example:
ibrix_httpvhost -a VHOSTNAME -c PROFILENAME -I IP-Address:port -S ibrixrestapi=true
For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as "sslcert=name,...". To see a list of the allowable settings for the vhost, use ibrix_httpvhost -L.
For the -I option, use a semicolon to separate the IP-address:port settings and enclose the settings in quotation marks, such as "ip1:port1;ip2:port2;...". For example:
ibrix_httpvhost -a vhost1 -c myprofile -I "99.1.26.1:80;99.1.26.1:81"
Use the IP address of the User Virtual Interface (the User VIF) of a file serving node, as that interface will be migrated to the node’s HA partner if the first node fails. Use the ibrix_nic -l command to find the VIF. The VIF is the IP address of type cluster in an up, Link Up state, and it is not an Active FM.
Step 3: Add an HTTP share.
See “Creating HTTP shares” (page 132) for detailed steps.
Creating HTTP shares
See “Checklist for creating HTTP shares” (page 116) for a list of prerequisites that must be completed
before creating an HTTP share.
IMPORTANT: Keep in mind the following when creating StoreAll REST API shares in file-compatible mode:
•
Do not create file-compatible mode and object mode REST API shares on the same file system. Use separate file systems for each type of REST API share.
•
Do not create an object mode REST API share on any file system where Express Query is enabled. Express Query does not support storing metadata from objects in object mode shares. If Express Query is enabled on a file system with an object mode API share, metadata from the object mode files is ingested incorrectly, causing unusable metadata to be added to the Express Query database. This situation negatively impacts the performance of Express Query for the files outside the object mode share on the same file system that it ingests correctly.
Table 14 Adding an HTTP share
Enter each of the following commands on one line.
Standard HTTP share:
ibrix_httpshare -a SHARENAME -c PROFILENAME -t VHOSTNAME -f FSNAME -p dirpath -P urlpath [-u USERLIST] [-S SETTINGLIST]
HTTP StoreAll REST API share in file-compatible mode (see note 1):
ibrix_httpshare -a SHARENAME -c PROFILENAME -t VHOSTNAME -f FSNAME -p dirpath -P urlpath [-u USERLIST] -S "ibrixRestApiMode=filecompatible"
HTTP StoreAll REST API share in object mode (see notes 1 and 2):
ibrix_httpshare -a SHARENAME -c PROFILENAME -t VHOSTNAME -f FSNAME -p dirpath -P urlpath [-u USERLIST] -S "ibrixRestApiMode=object,anonymous=false"
The anonymous setting must be set to false. If you do not provide the anonymous setting ("ibrixRestApiMode=object"), the anonymous value is false by default.
1. The parameter userlist is optional, and it is not necessarily needed for the StoreAll REST API. All the other listed arguments are required for the StoreAll REST API.
2. Additional steps are required to take full advantage of the object mode and its use of containers. See “Tutorial for using the HTTP StoreAll REST API object mode” (page 139).
For the -S option, use a comma to separate the settings, and enclose the settings in quotation
marks, such as "davmethods=true,browseable=true,readonly=true".
For example, to create a new HTTP share and enable the WebDAV property on that share:
# ibrix_httpshare -a share3 -c cprofile1 -t dav1vhost1 -f ifs1 -p
/ifs1/dir1 -P url3 -S "davmethods=true"
To see all of the valid settings for an HTTP share, use the following command:
ibrix_httpshare -L
Managing the HTTP configuration
View a configuration profile:
ibrix_httpconfig -i PROFILENAME [-v level]
Modify a configuration profile:
ibrix_httpconfig -m PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
Delete a configuration profile:
ibrix_httpconfig -d PROFILENAME
View a virtual host:
ibrix_httpvhost -i VHOSTNAME -c PROFILENAME [-v level]
Modify a virtual host:
ibrix_httpvhost -m VHOSTNAME -c PROFILENAME -I IP-Address:port [-S
SETTINGLIST]
IMPORTANT: Once an HTTP vhost is created, you cannot change the value of the StoreAll REST
API mode (ibrixRestApiMode).
Delete a virtual host:
ibrix_httpvhost -d VHOSTNAME -c PROFILENAME
View an HTTP share:
ibrix_httpshare -i SHARENAME -c PROFILENAME -t VHOSTNAME [-v level]
Modify an HTTP share:
ibrix_httpshare -m SHARENAME -c PROFILENAME -t VHOSTNAME [-f FSNAME -p
dirpath] [-P urlpath] [-u USERLIST] [-S SETTINGLIST]
The following example modifies an HTTP share, enabling WebDAV:
# ibrix_httpshare -m share1 -c cprofile1 -t dav1vhost1 -S "davmethods=true"
IMPORTANT: Once an HTTP share is created, you cannot change the value of the StoreAll REST API mode (ibrixRestApiMode).
Delete an HTTP share:
ibrix_httpshare -d SHARENAME -c PROFILENAME -t VHOSTNAME
Starting or stopping the HTTP service manually
Start the HTTP service:
/usr/local/ibrix/httpd/bin/apachectl -k start
Stop the HTTP service:
/usr/local/ibrix/httpd/bin/apachectl -k stop
Restart the HTTP service:
/usr/local/ibrix/httpd/bin/apachectl -k restart
NOTE: When the HTTP configuration is changed with the GUI or CLI, the HTTP daemon is restarted
automatically.
Accessing standard and file-compatible mode HTTP shares
Clients access an HTTP share by specifying a URL in their browser (Internet Explorer or Mozilla
Firefox). In the following URLs, IP_address:port is the IP (or virtual IP) and port configured for
the share:
http://IP_address:port/urlpath/pathname
If the anonymous parameter is set to false, you are required to supply a user name and password when prompted.
If the pathname ends with a directory and the browseable property of the share is set to true, an
HTML directory listing of the base URL path directory of the share is returned, showing all files and
subdirectories. The list elements are hyperlinks that can be clicked to open the files and
subdirectories. If the browseable property is set to false, an error is returned instead of the HTML
list.
If the pathname ends with a filename, the browser either opens the file or prompts the user to open
or save the file, depending on the browser settings.
You can also use curl commands to access an HTTP share.
NOTE: When a file is uploaded into an HTTP share, the file is owned by the user who uploaded
the file to the share.
If a user uploads a file to an HTTP share and specifies a subdirectory that does not already exist,
the subdirectory will be created. For example, you could have a share mapped to the directory
/ifs/http/ and using the URL path named http_url, a user could upload a file into the share:
curl -T file http://<ip>:<port>/http_url/new_dir/file
If the directory new_dir does not exist under /ifs/http, the http service automatically creates
the directory /ifs/http/new_dir/ and sets the permissions to 755. If the anonymous user
performed the upload, the new_dir directory is owned by daemon:daemon. If a non-anonymous
user performed the upload, the new_dir directory is owned by user:group.
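The auto-created directory behavior can be modeled locally with mkdir. This is only a local illustration of the 755 mode the http service assigns to a missing upload subdirectory; it does not touch a StoreAll share:

```shell
# The http service creates a missing upload subdirectory with mode 755.
# Model that locally in a temporary directory.
TMP_SHARE=$(mktemp -d)              # stand-in for the share directory /ifs/http
mkdir -m 755 "$TMP_SHARE/new_dir"   # what the service does for new_dir
stat -c '%a' "$TMP_SHARE/new_dir"   # prints 755
```

Ownership differs from this local model: as noted above, the service assigns daemon:daemon for anonymous uploads and user:group otherwise.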
For anonymous users:
•
Upload a file using the HTTP protocol:
curl -T <filename> http://IP_address:port/urlpath/pathname
•
Upload a file using the HTTPS protocol:
curl --cacert <cacert_file> -T <filename> https://IP_address:port/urlpath/pathname
The <filename> parameter does not have to have the same file name as specified in the pathname.
•
Download a file using the HTTP protocol:
curl http://IP_address:port/urlpath/pathname -o <local destination path for download>/<filename>
•
Download a file using the HTTPS protocol:
curl --cacert <cacert_file> https://IP_address:port/urlpath/pathname -o <local destination path for download>/<filename>
For Active Directory users (specify the user as in this example: mycompany.com\\User1):
•
Upload a file using the HTTP protocol:
curl -T <filename> -u <ADuser> http://IP_address:port/urlpath/pathname
•
Upload a file using the HTTPS protocol:
curl --cacert <cacert_file> -T <filename> -u <ADuser> https://IP_address:port/urlpath/pathname
•
Download a file using the HTTP protocol:
curl -u <ADuser> http://IP_address:port/urlpath/pathname -o <local destination path for download>/<filename>
•
Download a file using the HTTPS protocol:
curl --cacert <cacert_file> -u <ADuser> https://IP_address:port/urlpath/pathname -o <local destination path for download>/<filename>
For more information about operations that can be performed on an HTTP StoreAll REST API share
in file-compatible mode, see “HTTP-REST API file-compatible mode shares” (page 152).
Configuring Windows clients to access HTTP WebDAV shares
Complete the following steps to set up and access WebDAV enabled shares:
•   Verify the entry in the Windows hosts file.
    Before mapping a network drive in Windows, verify that an entry exists in the
    c:\Windows\System32\drivers\etc\hosts file. For example, if IP address 10.2.4.200 is
    assigned to a Vhost named vhost1 and the Vhost name is not being used to map the network
    drive, the client must be able to resolve a domain name such as www.storage.hp.com (in the
    case of domain name-based virtual hosts).
•   Verify the characters in the Windows hosts file.
    The Windows c:\Windows\System32\drivers\etc\hosts file maps IP addresses to
    hostnames. Verify that the hostname in the file includes alphanumeric characters only.
•   Verify that the WebClient Service is started.
    The WebClient Service must be started on Windows-based clients attempting to access the
    WebDAV share. The WebClient service is missing by default on Windows 2008. To install
    the WebClient service, the Desktop Experience package must be installed. See
    http://technet.microsoft.com/en-us/library/cc754314.aspx for more information.
•   Update the Windows registry.
    When using WebDAV shares in Windows Explorer, you must edit the Windows registry if
    there are many files in the WebDAV shares or the files are large. Launch the Windows
    Registry Editor using the regedit command and go to:
    Computer\HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\WebClient\Parameters
    Change the value of FileSizeLimitInBytes from the default value of 50000000 to
    2147483648 (the value of 2 GB in bytes). Change the value of
    FileAttributesLimitInBytes from the default value of 1000000 to 10000000.
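The same registry change can be captured in a .reg file that administrators can import instead of
editing values by hand. The following fragment is a sketch, not HP-supplied content: the hexadecimal
DWORD values correspond to decimal 2147483648 and 10000000, and the key path should be
verified on your system (some installations use CurrentControlSet rather than ControlSet001):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\WebClient\Parameters]
"FileSizeLimitInBytes"=dword:80000000
"FileAttributesLimitInBytes"=dword:00989680
```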
•   Enable debug logging on the server.
    Edit the /usr/local/ibrix/httpd/conf/httpd.conf file and change the line
    LogLevel warn to LogLevel debug. Next, restart Apache on the file serving nodes.
    Use the following command to stop Apache:
    /usr/local/ibrix/httpd/bin/apachectl stop
    Use the following command to start Apache:
    /usr/local/ibrix/httpd/bin/apachectl start
•   Save documents during node failovers.
    During a failover, MS Office 2010 restores the connection when the connection to the server
    is lost, but you must wait until you are asked to refresh the document being edited. In MS
    Office 2003 and 2007, you must save the document locally. After the failover is successful,
    you must re-map the drive and save the document on the WebDAV share.
•   Create an SSL certificate.
    When using basic authentication to access WebDAV-enabled HTTP shares, SSL-based access
    is mandatory.
•   Verify that the hostname in the certificate matches the Vhost name.
    When creating a certificate, the hostname should match the Vhost name or the domain name
    used when mapping a network drive or opening the file directly through a URL such as
    https://storage.hp.com/share/foo.docx.
•   Ensure that the WebDAV URL includes the port number associated with the Vhost.
•   Consider the assigned IP address when mapping a network drive on Windows.
    When mapping a network drive in Windows, if the IP address assigned to the Vhost is similar
    to the format 10.2.4.200, there should be a corresponding entry in the Windows hosts file.
    Instead of using the IP address in the mapping, use the name specified in the hosts file. For
    example, 10.2.4.200 can be mapped as srv1vhost1, and you can issue the URL
    https://srv1vhost1/share when mapping the network drive.
•   Unlock locked files.
    Use a tool such as BitKinex to unlock locked files if the files do not unlock before closing
    the application.
•   Remove zero byte files created by Microsoft Excel.
    Microsoft Excel creates 0-byte files on the WebDAV shares. For example, after editing the
    file foo.xlsx and saving it more than once, a 0-byte file such as ~$foo.xlsx is created.
    Delete this file using a tool such as BitKinex, or remove the file on the file system. For
    example, if the file system is mounted at /ifs1 and the share directory is /ifs1/dir1,
    remove the file /ifs1/dir1/~$foo.xlsx.
•   Use the correct URL path when mapping WebDAV shares on Windows 2003.
    When mapping WebDAV shares on Windows 2003, the URL should not end with a trailing
    slash (/). For example, http://storage.hp.com/share can be mapped, but
    http://storage.hp.com/ cannot be mapped. Also, you cannot map https:// URLs
    because of limitations with Windows 2003.
•   Delete read-only files through Windows Explorer.
    If you map a network drive for a share that includes files designated as read-only on the server,
    and you then attempt to delete one of those files, the file appears to be deleted. However,
    when you refresh the folder (using the REFRESH command) in Windows Explorer, the deleted
    file reappears. This behavior is expected in Windows Explorer.
NOTE: Symbolic links are not supported in the current WebDAV implementation (Apache’s
mod_dav module).
NOTE: After mapping a network drive of a WebDAV share on Windows, Windows Explorer
reports an incorrect folder size or available free space on the WebDAV share.
Troubleshooting HTTP
After upgrading the StoreAll software, the HTTP WebDAV share might be
inaccessible or display a permission error when trying to write to a share
During the StoreAll software upgrade, the active connection to the WebDAV share might be lost
and cause share access issues. The share will be inaccessible while node failover is occurring. If
you still experience share access issues after the upgrade, remount the WebDAV share on the
Windows client machine:
net use * http://192.168.1.1/smita/
In this instance, the HTTP WebDAV share is 192.168.1.1/smita.
HTTP WebDAV share is inaccessible through Windows Explorer when files
greater than 10 KB are created
When files greater than 10 KB are created, the HTTP WebDAV share is inaccessible through
Windows Explorer and the following error appears: Windows cannot access this disc:
This disc might be corrupt. This condition is seen in various Windows clients such as
Windows 2008, Windows 7, and Windows Vista. The condition persists even if the share is
disconnected and re-mapped through Windows Explorer. The files are accessible on the file serving
node and through BitKinex.
Use the following workaround to resolve this condition:
1.  Disconnect the network drive.
2.  In Windows, select Start > Run and enter regedit.
3.  Increase FileAttributesLimitInBytes from the default value of 1000000 to
    10000000 (by a factor of 10).
4.  Increase FileSizeLimitInBytes 10 times by adding one extra zero.
5.  Save the registry and quit.
6.  Reboot the Windows system.
7.  Map the network drive to allow you to access the WebDAV share containing large files.
HTTP WebDAV share fails when downloading a large file from a mapped
network drive
When downloading or copying a file greater than 800 MB in Windows Explorer, the HTTP
WebDAV share fails. Use the following workaround to resolve this condition:
1. In Windows, select Start > Run and type regedit to open the Windows registry editor.
2. Navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
NOTE: This hierarchy exists only if WebClient is installed on Windows Vista or Windows 7.
3.  Change the registry parameter values to allow for the increased file size.
    a.  Set the value of FileAttributesLimitInBytes to 1000000 in decimal.
    b.  Set the value of FileSizeLimitInBytes to 2147483648 in decimal, which equals
        2 GB.
Mapping HTTP WebDAV share as AD or local user through Windows Explorer
fails if the HTTP Vhost IP address is used
Mapping the HTTP WebDAV share to a network drive as Active Directory or local user through
Windows Explorer fails on Windows 2008 if the HTTP Vhost IP address is used. To resolve this
condition, add the Vhost names and IP addresses in the hosts file on the Windows clients.
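A hosts file entry simply pairs an IP address with a name. Using the example Vhost from earlier in
this chapter (IP address 10.2.4.200 assigned to vhost1), the entry would look like the following
sketch:

```text
# c:\Windows\System32\drivers\etc\hosts
10.2.4.200    vhost1
```

After adding the entry, map the network drive using the name (for example, \\vhost1 or
https://vhost1/share) rather than the raw IP address.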
10 HTTP-REST API object mode shares
The StoreAll REST API share in object mode provides concepts similar to the OpenStack Object
Storage API to support programmatic access to user-stored files. Users create containers within
each account to hold objects (files), and the user's string identifier for the object maps to a hashed
path name on the file system.
Terminology for StoreAll REST API object mode
Table 15 Terminology for StoreAll REST API object mode
Account
    The user is represented by an account. Accounts are implemented as existing LDAP, AD, or
    local user accounts. Accounts can contain up to 10,000 containers.
Container
    Objects are stored in containers. Containers cannot be nested. A container maps to a
    subdirectory of the account directory on the file system.
    Container names must be at least 3 and no more than 63 characters long. Container names
    must also be a series of one or more labels separated by a period (.), where each label:
    • Must start with a lowercase letter or a number
    • Must end with a lowercase letter or a number
    • Can contain lowercase letters, numbers, and dashes
    Container names support the full UTF-8 character set.
    Valid container names:
    • 15.226.48.204
    • 10.10.10.10.hello
    • 10.10-10.10xyz123.----10.20
    • 10-10-10-10
    • 10---------10
    • 1.2.3.4.newcontainer.1.2.3.4
    Consecutive periods (..) are illegal in a container name, as shown in the following example:
    new..container
Label
    A label is a string of one or more:
    • Lowercase characters
    • Digits
    • ‘-‘ characters
Object
    A file uploaded by a user into a container within a user's account. Users can store an unlimited
    number of objects in a container.
Object ID
    A string, unique within a container, that identifies the object. It does not have to relate to the
    original file being uploaded. It can be a path and file name, such as /dir1/file1.txt, but
    it does not have to be a file system construct. It can be any string, such as
    Monthly_report-Jan2013.
    The file is stored in a directory with a filename defined by StoreAll on the file system, in the
    given container's directory, based on a hash code of the object ID string. The file location in
    the file system is not based on any paths in the object ID string.
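The container naming rules above can be checked programmatically before issuing a request. The
following Python sketch (a hypothetical client-side helper, not a StoreAll tool) encodes a strict
reading of the label rules; the server may be more lenient, for example with the dash-heavy
examples in the table:

```python
import re

# One label: starts and ends with a lowercase letter or digit,
# with lowercase letters, digits, and dashes allowed in between.
LABEL = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"
NAME_RE = re.compile(rf"^{LABEL}(?:\.{LABEL})*$")

def is_valid_container_name(name: str) -> bool:
    """Check a container name against the documented rules."""
    if not 3 <= len(name) <= 63:
        return False          # must be 3 to 63 characters long
    if ".." in name:
        return False          # consecutive periods are illegal
    return NAME_RE.fullmatch(name) is not None

print(is_valid_container_name("15.226.48.204"))   # True
print(is_valid_container_name("new..container"))  # False
```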
Tutorial for using the HTTP StoreAll REST API object mode
This section walks you through using the major components of object mode. You will be shown
how to:
•   Create a container.
•   Set permissions for the container.
•   Upload and create objects for the container.
•   View the contents of the container.
•   Download contents from the container.
It is assumed you have already created an HTTP StoreAll REST API share in object mode. See
“Checklist for creating HTTP shares” (page 116) for information on how to create an HTTP share.
IMPORTANT: You cannot use the local root account on the StoreAll servers. If you have not
already, identify a local user account or a user account on Active Directory. See “Configuring
authentication for SMB, FTP, and HTTP” (page 61) for additional information.
Gather the information listed in the following table before you begin the steps in this section:
Table 16 Required information

IP address and port of the virtual host for the HTTP share
    (If the port is 80, you do not need to include it in the command.)
    To obtain the name of the virtual host for the HTTP share, enter the following command:
    ibrix_httpshare -l
    To obtain the IP address of the virtual host, enter the following command:
    ibrix_httpvhost -l -v 1
Directory path and URL path of the HTTP share
    To find the directory path and URL path of the HTTP share, enter the following command:
    ibrix_httpshare -l -f <file_server_name>
    The directory path of the HTTP share is under the Path column, and the URL Path is under
    the URL Path column.
To begin the tutorial:
1.  Create a container.
    When you first create a container, the account directory, named as the numeric user ID of the
    user creating the container, is automatically created as a subdirectory of the root of the HTTP
    share.
    See “Terminology for StoreAll REST API object mode” (page 138) for a list of requirements for
    creating the container name.
    The curl format for this command is the following:
    NOTE:
    •   If secure HTTP is configured, replace http with https.
    •   Enter the following command on one line.
    curl -X PUT http://<IP_address:port>/<urlpath>/<account_name>/<container_name> -u <username>:<password>
    In this command:
    •   <IP_address:port> is the IP address and port of the virtual host for the HTTP share.
    •   <account_name> is the name of the account under which you want to create the
        container, for example, jsmith.
    •   <container_name> is the name of the container you want to create.
    •   <username> is the user name of the account creating the container. Only the account
        owner can create a container, so <account_name> and <username> must be the same.
    •   <password> is the password of your account.
    The account and user name is either a StoreAll local user, an Active Directory user, or an
    LDAP user. The user must be able to authenticate to use the HTTP share. Use the
    ibrix_localusers command to create a local user. See the HP StoreAll Storage CLI
    Reference Guide for more information.
    For example, for a local user:
    curl -X PUT http://192.168.2.2/obj/localuser1/container-a -u localuser1:mypassword
HTTP version of the command
PUT /<urlpath>/<account_name>/<container_name> HTTP/1.1
CURL version of the command for Active Directory users
You can use any of the following formats for Active Directory users:
NOTE: Enter the following commands on one line.
•   curl -X PUT http://<IP_address:port>/<urlpath>/<account_name>/<container_name> -u <username>@<domain_name>:<password> --header "x-ibrix-addomain:<domain_name>"
•   You can provide the <domain_name> and <account_name> three different ways in the
    curl command:
    ◦   In the first format, double backslashes are used to preserve (escape) the backslash
        separator between the domain name and user name:
        curl -X PUT http://<IP_address:port>/<urlpath>/<domain_name>\\<account_name>/<container_name> -u <domain_name>\\<username>:<password>
        As shown in the following example:
        curl -X PUT http://192.168.2.2/obj/qa1\\administrator/activedomaincontainer -u qa1\\administrator
    ◦   In the second format, double quotes are used to preserve the backslash:
        curl -X PUT "http://<IP_address:port>/<urlpath>/<domain_name>\<account_name>/<container_name>" -u "<domain_name>\<username>:<password>"
    ◦   In the third format, the %5C URL encoding of the backslash is used in the URL, but it
        cannot be used in the -u user parameter:
        curl -X PUT http://<IP_address:port>/<urlpath>/<domain_name>%5C<account_name>/<container_name> -u <domain_name>\\<username>:<password>
        As shown in the following example:
        curl -X PUT http://192.168.48.204/obj/administrator/activedomaincontainer -u administrator@qa1.hp.com:mypassword
HTTP version of the command
NOTE: Enter the following command on one line.
PUT /<urlpath>/<domain_name>%5C<account_name>/<container_name> HTTP/1.1
The %5C is the URL encoding for a backslash.
2.  Set the permissions of the container.
    A container is always created with read-write permission for the account user and no
    permission for any other user. This is represented in UNIX octal permissions as 700 (a digit
    each for user, group, and other permissions). These permissions can be changed by the
    account user to allow other users to read and write objects in the container. Change the
    "group" permission of the container to allow other users in the same default group as the
    account user to read and/or write objects. Change the "other" permission to allow any user
    to read and/or write objects. The execute bit must be set to allow read or write access. The
    read and write bits can be set as needed.
    The permissions can be entered as an integer in standard UNIX octal permissions format, in
    the range 0 to 777. A two-digit permission, such as 77, is interpreted as 077, and a
    single-digit permission, such as 7, is interpreted as 007.
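The padding rule and the execute-bit requirement described above can be expressed concisely.
The following Python sketch (hypothetical helper names, not a StoreAll tool) shows how a permission
value is normalized to three digits and how read access is decided for the user, group, and other
slots:

```python
def normalize_permission(perm) -> str:
    """Zero-pad an octal permission string to three digits:
    77 becomes 077, and 7 becomes 007."""
    s = str(perm)
    if not s.isdigit() or len(s) > 3 or any(c in "89" for c in s):
        raise ValueError(f"not a valid octal permission: {perm!r}")
    return s.zfill(3)

def can_read(perm, who: int) -> bool:
    """who: 0 = user, 1 = group, 2 = other. Reading requires both the
    read bit (4) and the execute bit (1) on the container."""
    digit = int(normalize_permission(perm)[who])
    return bool(digit & 4) and bool(digit & 1)

print(normalize_permission(77))  # 077
print(can_read(770, 1))          # True: group has rwx
print(can_read(740, 1))          # False: read bit set, but no execute bit
```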
    NOTE: Enter the following command on one line.
    curl -X PUT http://<host_IP_address:port>/<urlpath>/<user_id>/<container_name>?permission -u <username>:<password> --header "x-ibrix-permission:<permission>"
    In this instance, <permission> is the integer permission value in octal, for example 770.
HTTP version of the command
PUT /<urlpath>/<account_name>/<container_name>?permission HTTP/1.1
x-ibrix-permission:<permission>
3.  Add items to your newly created container.
    NOTE: Enter the following commands on one line.
    •   To create an empty object in your new container, enter the following command:
        NOTE: Object names support UTF-8.
        curl -X PUT http://<host_IP_address:port>/<urlpath>/<account_name>/<container_name>/<object_id> --user <username>:<password>
        The object ID can be any string uniquely identifying the object in this container. See
        Object ID in Table 15 (page 138) for details.
        An example of the command:
        curl -X PUT http://192.168.2.2/obj/localuser1/container-a/mydir1/mysubdir2/myobj.xyz -u localuser1:mypassword
    •   To upload an object to your new container, enter the following command:
        curl -T <local_pathname> http://<IP_address:port>/<urlpath>/<account_name>/<container_name>/<object_id> -u <username>:<password>
        For example:
        curl -T C:\temp\myLocalFile.txt http://192.168.2.2/obj/localuser1/container-a/mydir1/mysubdir2/myobj.xyz -u localuser1:mypassword
        In this example, the user installed the open source cURL program on their Windows
        computer.
        The subdirectories /mydir1/mysubdir2 provided in the command are the virtual location
        of the myobj.xyz file. The file path for the uploaded file is converted into the hash name
        for the file object. If you used the CLI to traverse the directory structure of the container,
        you would find myobj.xyz stored under the first- and second-level hash directories under
        the container instead of under the subdirectories specified during the upload.
HTTP version of the command
PUT /<urlpath>/<account_name>/<container_name>/<object_id> HTTP/1.1
4.  View the list of objects in the container. See “Viewing the contents of a container” (page 144).
5.  Download files from your share:
    NOTE: Enter the following command on one line.
    curl -o <local_pathname> http://<IP_address:port>/<urlpath>/<account_name>/<container_name>/<object_id> -u <username>:<password>
    For example:
    curl -o C:\temp\myLocalFile.txt http://192.168.2.2/obj/qa1\\administrator/container-a/mydir1/mysubdir2/myobj.xyz -u qa1\\jsmith:mypassword
    In this instance:
    •   qa1\administrator is the administrator’s account.
    •   qa1\jsmith is the user making the HTTP request, who has been granted read access
        to objects in the administrator's "container-a" container.
    Double backslashes were used in this example instead of quotes to preserve the backslash.
HTTP version of the command
GET /<urlpath>/<account_name>/<container_name>/<object_id> HTTP/1.1
Viewing the list of containers for an account
You can request a list of all of the containers created for an account and certain metadata of those
containers.
To view the containers for a given account:
NOTE: Enter the following command on one line.
curl http://<IP_address>:<port>/<urlpath>/<account_name> -u <username>:<password>
For example:
curl http://192.168.2.2/obj/qa1\\administrator -u qa1\\administrator:mypassword
The list of all containers created by the user associated with the given account is returned in
JSON format. For example:
[
    {
        "name" : "container-a",
        "attributes" : {
            "system::size" : 4096,
            "system::ownerUserId" : "administrator",
            "system::permissions" : 700
        }
    },
    {
        "name" : "container-b",
        "attributes" : {
            "system::size" : 4096,
            "system::ownerUserId" : "administrator",
            "system::permissions" : 775
        }
    }
]
The system::size refers to the number of bytes used by the directory inode representing the
container on the StoreAll server (initially 4096 for any new directory), not the number of objects
in the container. In this example, the permissions for container-a are the default 700, but the
permissions for container-b have been changed by qa1\administrator to 775.
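A client can consume this listing with any JSON library. For example, in Python (a sketch that
parses the response body shown above, embedded here as a string for illustration):

```python
import json

# The container listing returned for the example account above.
body = """
[
  {"name": "container-a",
   "attributes": {"system::size": 4096,
                  "system::ownerUserId": "administrator",
                  "system::permissions": 700}},
  {"name": "container-b",
   "attributes": {"system::size": 4096,
                  "system::ownerUserId": "administrator",
                  "system::permissions": 775}}
]
"""

containers = json.loads(body)
# Map each container name to its octal permission value.
perms = {c["name"]: c["attributes"]["system::permissions"] for c in containers}
print(perms)  # {'container-a': 700, 'container-b': 775}
```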
Viewing the contents of a container
You can request a list of all of the objects in a container, and certain metadata of those objects.
To view the contents of a container:
NOTE: Enter the following command on one line.
curl http://<IP_address:port>/<urlpath>/<account_name>/<container_name> -u <user_name>:<password>
For example:
curl http://192.168.2.2/obj/qa1\\administrator/container-a -u qa1\\administrator:mypassword
In this example:
•   qa1\\administrator is the account name
•   container-a is the name of the container
The list of all objects in the container is returned in JSON format.
NOTE: Although the system::permissions for an object may be shown as 666, access to all
objects is subject only to the container permissions assigned by the account owning the container.
The default permission for a container is 700, allowing object access only to the account owner.
Permissions cannot be assigned to individual objects via the REST API.
Finding the corresponding object ID from a hash name
NOTE: These steps are for someone with administrator privileges.
The HTTP StoreAll REST API object mode saves files on the file system under hashed names, which
are generated when objects are uploaded or created, rather than under the object ID specified
by the user during the upload.
In the steps below, assume your user name is jsmith, and that you know the location of the hash
reference for which you want to find the corresponding object ID.
To obtain the corresponding object ID string name from a hash name:
1. To find the directory path of the HTTP share, enter the following command:
ibrix_httpshare -l -f ibrixFS
The directory path of the HTTP share is under the Path column, and the URL Path is under the
URL Path column.
2.  Go to the directory path of the HTTP share. In the following example,
    /ibrixFS/objectStore/ is the directory path of the HTTP object mode API share defined
    for the ibrixFS file system:
    [root@bv07-07 ~]# cd /ibrixFS/objectStore/
3.  View the contents of the /ibrixFS/objectStore/ directory by entering the following
    command:
    [root@bv07-07 objectStore]# ls -l
    The following is displayed:
    total 12
    drwxrwxrwx 3 jsmith     objectapi_group   4096 Dec 7 15:43 2003
    drwxrwxrwx 3 ENAS\user1 ENAS\domain^users 4096 Dec 7 14:54 367002807
In this example, 2003 is the owner user ID of the user jsmith and 367002807 is the Active
Directory ID of another user (user1) in the ENAS domain. To find the user ID of a local user,
enter the command:
ibrix_localusers -l -u <username>
To find the user ID of an AD or LDAP user, contact the AD or LDAP servers.
4.  Go to the 2003 directory because it references the user jsmith:
    [root@bv07-07 objectStore]# cd 2003
5.  List the contents of the 2003 directory:
    [root@bv07-07 2003]# ls -l
    total 4
    drwx------ 3 jsmith objectapi_group 4096 Dec 7 15:49 newcontainer
    In this instance, newcontainer is the name of the container containing the hash reference.
    All items from this directory level down are hash references.
6.  Go to the container containing the hash name for the object whose corresponding object ID
    you want to find:
    [root@bv07-07 2003]# cd newcontainer
    In this instance, newcontainer is the container containing the hash reference.
7.  Enter the following command to list the contents of newcontainer:
    [root@bv07-07 newcontainer]# ls -l
    total 4
    drwxrwxrwx 3 jsmith objectapi_group 4096 Dec 7 15:49 45
    In this instance, 45 is the first-level directory created from the 11th to 20th least significant
    bits of the 40-character hexadecimal value that was created when the file was uploaded or
    created on the share.
8.  Go to the first-level directory, named 45:
    [root@bv07-07 newcontainer]# cd 45
9.  List the contents of the 45 directory by entering the following command:
    [root@bv07-07 45]# ls -l
    total 4
    drwxrwxrwx 2 jsmith objectapi_group 4096 Dec 7 15:49 3ca
    In this instance, the 3ca hash reference is the second-level directory created from the 10 least
    significant bits of the 40-character hexadecimal value that was created when the file was
    uploaded or created on the share.
10. Go to the second-level directory, named 3ca:
    [root@bv07-07 45]# cd 3ca
11. List the contents of the 3ca directory by entering the following command:
[root@bv07-07 3ca]# ls -l
The following is displayed.
    -rwx------ 1 jsmith objectapi_group 0 Dec 7 15:49 c9abd2747714446e9190da1389b1f8bc901117ca
    In this instance, c9abd2747714446e9190da1389b1f8bc901117ca is the corresponding
    40-character hash reference of the object ID string.
12. To obtain the corresponding file for the hash name, enter the following command:
    NOTE: Enter the following command on one line.
    getfattr -n "user.bucket_mode_key" c9abd2747714446e9190da1389b1f8bc901117ca
    In this instance, c9abd2747714446e9190da1389b1f8bc901117ca is the hash name
    for the corresponding file.
    The command displays the following:
    # file: c9abd2747714446e9190da1389b1f8bc901117ca
    user.bucket_mode_key="file1.txt"
    In this instance, file1.txt is the object ID string assigned by the user when the object was
    created or uploaded.
Finding the corresponding hash name from an object ID
When you upload a file or create a file in a container, a 40-character SHA-1 hash code value is
calculated for the object ID. The hash code determines where the object is stored. The system stores
all objects in a two-level directory structure within each container directory: it uses the 10 least
significant bits of the hash code as the second-level directory name, and the next 10 least significant
bits as the first-level directory name. The object's filename in this two-level directory is the
hexadecimal value of the entire hash code.
The first time a user creates a container, a directory with the numeric user ID of the user representing
that account, is created to hold the container. The container directory within this account directory
is the container name provided by the user in the container creation request. Subsequent containers
created by that user are also stored under the same account directory. Each container contains
the first level directory and then the second level directory containing the SHA-1 hash code for the
file object. The following diagram shows the directory layout for objects created in object mode
API shares.
All file objects have first and second level directories, regardless of any directory paths that might
be present in the user's object ID string. For example, assume you upload a file to
<container_name>/mydirectory/subdirectory1/subdirectory2/subdirectory3.
If you traverse the directory structure, the hash file would appear in its second-level directory.
All HTTP commands always contain the following path:
<URL of the file server>/<URL Path of the HTTP
share>/<account_name>/<container_name>
NOTE: The hash name is based on the object ID string, not on the content. If you have two
different objects with the same ID string, they will be in the same hash directory within the container.
If you upload a file to a container with the same object ID string as an existing object, the new
file replaces the existing object.
To find the hash name corresponding to the object ID string of an object stored via the object mode
API:
1.  Enter the following command on the StoreAll server:
    echo -n '<object_id>' | openssl dgst -sha1
    For example, if your object identifier string is mydir1/mysubdir2/myobj.xyz, the
    command would be the following:
    echo -n 'mydir1/mysubdir2/myobj.xyz' | openssl dgst -sha1
    The SHA-1 hash code for the string is returned, for example:
    c610260e3075673aadec3afc4983101449db2f05. This hash name is the name of the
    file on the StoreAll file system that contains the object contents.
    NOTE: If you are not sure of the object ID string of the stored object you wish to find, view
    the contents of the container, as described in “Viewing the contents of a container” (page
    144). The object ID string is the entire string between the double quotes for each object listed.
2.  To determine the names of the two directory levels within the container where this object is
    stored, calculate the hex value of the 10 least significant bits of the returned hash code, and
    then of the next 10 least significant bits. In the above example, these values are 33a and
    2ca, in lowercase letters. The first hexadecimal value is the second-level directory name. The
    second hexadecimal value is the first-level directory name. So, the above file will be located
    at the path:
    <Path of object mode REST API share>/<user ID>/<container name>/2ca/33a
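The derivation above can be scripted. The following Python sketch computes the digest and the
two directory names; the exact bit positions are an assumption from the text ("10 least significant
bits", then the next 10), so treat the directory values it prints as illustrative rather than authoritative
for any particular StoreAll release:

```python
import hashlib

def object_location(object_id: str):
    """Return (first_level, second_level, digest) for an object ID,
    following the documented two-level layout."""
    digest = hashlib.sha1(object_id.encode("utf-8")).hexdigest()  # 40 hex chars
    value = int(digest, 16)
    second_level = format(value & 0x3FF, "x")         # 10 least significant bits
    first_level = format((value >> 10) & 0x3FF, "x")  # next 10 bits
    return first_level, second_level, digest

first, second, digest = object_location("mydir1/mysubdir2/myobj.xyz")
print(f".../{first}/{second}/{digest}")
```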
Commands for object mode
The following sections provide the commands for object mode HTTP REST API shares. Enter the
commands on one line, except when otherwise specified.
List Containers
Type of Request: Account Services
Description: Returns the list of containers for a user account.
HTTP command:
GET /<urlpath>/<account_name> HTTP/1.1
CURL command (Enter on one line):
curl http://<IP_address:port>/<urlpath>/<user_id> -u <user_name>:<password>
Create Container
Type of Request: Container services
Description: Creates a container. See “Terminology for StoreAll REST API object mode” (page 138)
for information about the naming requirements.
HTTP command:
PUT /<urlpath>/<account_name>/<container_name> HTTP/1.1
CURL command (Enter on one line):
curl -X PUT http://<IP_address:port>/<urlpath>/<user_id>/<container_name> -u <user_name>:<password>
You can use a number of different formats for Active Directory users:
NOTE: Enter commands on one line.
•   curl -X PUT http://<IP_address:port>/<urlpath>/<account_name>/<container_name> -u <username>@<domain_name>:<password> --header "x-ibrix-addomain:<domain_name>"
•   You can provide the <domain_name> and <account_name> three different ways in the
    curl command:
    ◦   In the first format, double backslashes are used to preserve (escape) the backslash
        separator between the domain name and user name:
        curl -X PUT http://<IP_address:port>/<urlpath>/<domain_name>\\<account_name>/<container_name> -u <domain_name>\\<username>:<password>
        As shown in the following example:
        curl -X PUT http://192.168.2.2/obj/qa1\\administrator/activedomaincontainer -u qa1\\administrator
    ◦   In the second format, double quotes are used to preserve the backslash:
        curl -X PUT "http://<IP_address:port>/<urlpath>/<domain_name>\<account_name>/<container_name>" -u "<domain_name>\<username>:<password>"
    ◦   In the third format, the %5C URL encoding of the backslash is used in the URL, but it
        cannot be used in the -u user parameter:
        curl -X PUT http://<IP_address:port>/<urlpath>/<domain_name>%5C<account_name>/<container_name> -u <domain_name>\\<username>:<password>
        As shown in the following example:
        curl -X PUT http://192.168.48.204/obj/administrator/activedomaincontainer -u administrator@qa1.hp.com:mypassword
List Objects
Type of Request: Container services
Description: Lists the objects in a container.
HTTP command:
GET /<urlpath>/<account_name>/<container_name> HTTP/1.1
CURL command (Enter on one line):
curl http://<IP_address:port>/<urlpath>/<account_name>/
<container_name> -u <user_name>:<password>
The list of all objects in the container will be returned in JSON format.
Delete Container
Type of Request: Container services
Description: Deletes the container.
IMPORTANT:
The container must be empty before it can be deleted.
HTTP command:
DELETE /<urlpath>/<account_name>/<container_name> HTTP/1.1
CURL command (Enter on one line):
curl -X DELETE http://<IP_address:port>/<urlpath>/<user_id>/
<container_name> -u <username>:<password>
Set Container Permission
Type of Request: Container services
Description: Sets the permission of a container. The permissions are based on standard UNIX
permissions and are entered as an integer in standard UNIX octal permissions format, in the range
0 to 777. A two-digit permission, such as 77, is interpreted as 077, and a single-digit permission,
such as 7, is interpreted as 007.
HTTP command:
PUT /<urlpath>/<account_name>/<container_name>?permission
x-ibrix-permission:<permission> HTTP/1.1
CURL command (Enter on one line):
curl -X PUT http://<IP_address:port>/<urlpath>/<user_id>/
<container_name>?permission -u <username>:
<password> --header "x-ibrix-permission:<permission>"
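The zero-padding rule above (77 becomes 077, 7 becomes 007) can be sketched as a small client-side helper. normalize_permission is a hypothetical function for illustration, not part of the API.

```python
def normalize_permission(perm):
    """Zero-pad a container permission to the three-digit octal form
    described above: 77 -> '077', 7 -> '007'. Accepts int or string."""
    s = str(perm)
    # Each digit must be a valid octal digit (0-7), at most three digits.
    if not s.isdigit() or len(s) > 3 or any(c > "7" for c in s):
        raise ValueError("permission must be an octal integer from 0 to 777")
    return s.zfill(3)
```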
Get Container Permission
Type of Request: Container services
Description: Gets the permission of a container. The permission is returned in the x-ibrix-permission
HTTP header of the response.
HTTP command:
GET /<urlpath>/<account_name>/<container_name>?permission HTTP/1.1
CURL command (Enter on one line):
curl http://<IP_address:port>/<urlpath>/<user_id>/
<container_name>?permission -u <username>:<password>
Create/Update Object
Type of Request: Object Requests
Description: Uploads an object into a container.
HTTP command:
PUT /<urlpath>/<account_name>/<container_name>/<object_id> HTTP/1.1
CURL command (Enter on one line):
curl -T <local_pathname> http://<IP_address:port>/
<urlpath>/<account_name>/<container_name>/
<object_id> -u <username>:<password>
When users upload an object into their own account, the <account_name> is the ID of the
authenticated user.
Retrieve Object
Type of Request: Object Requests
Description: Downloads an object from a container.
HTTP command:
GET /<urlpath>/<account_name>/<container_name>/<object_id> HTTP/1.1
CURL command (Enter on one line):
curl -o <local_pathname> http://<IP_address:port>/<urlpath>/
<account_name>/<container_name>/<object_id> -u <username>:<password>
Delete Object
Type of Request: Object Requests
Description: Deletes an object.
HTTP command:
DELETE /<urlpath>/<account_name>/<container_name>/<object_name> HTTP/1.1
CURL command (Enter on one line):
curl -X DELETE http://<IP_address:port>/<urlpath>/<user_id>/
<container_name>/<object_name> -u <username>:<password>
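The container and object operations described above form a simple lifecycle. The following Python sketch builds, but does not send, one request per operation; the endpoint, account name, and credentials are placeholders, not defaults of the product.

```python
import base64
import urllib.request

BASE = "http://192.168.2.2:80/obj"                  # hypothetical share endpoint
AUTH = base64.b64encode(b"user1:secret").decode()   # HTTP Basic credentials

def request(method, path, data=None):
    """Build (without sending) one object-mode API request."""
    req = urllib.request.Request(BASE + path, data=data, method=method)
    req.add_header("Authorization", "Basic " + AUTH)
    return req

# The container/object lifecycle described above:
create   = request("PUT", "/user1/mycontainer")                   # create container
upload   = request("PUT", "/user1/mycontainer/photo.jpg", b"..")  # create/update object
listing  = request("GET", "/user1/mycontainer")                   # list objects (JSON)
download = request("GET", "/user1/mycontainer/photo.jpg")         # retrieve object
drop_obj = request("DELETE", "/user1/mycontainer/photo.jpg")      # delete object
drop_ctr = request("DELETE", "/user1/mycontainer")                # delete empty container
```

Sending each request with urllib.request.urlopen() would perform the corresponding curl call shown above.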
11 HTTP-REST API file-compatible mode shares
The StoreAll REST API share in file-compatible mode provides programmatic access to user-stored
files and their metadata. The metadata is stored on the HP StoreAll Express Query database in the
StoreAll cluster and provides fast query access to metadata without scanning the file system. For
more information on managing Express Query, see “Express Query” (page 217).
The StoreAll REST API file-compatible mode provides the ability to upload and download files,
assign custom (user-defined) metadata to files and directories, manage file retention settings, and
query the system and custom metadata of files and directories. You can associate any number of
custom (user-defined) metadata attributes to any file or directory stored on a StoreAll
retention-enabled file system where the Express Query metadata store is enabled. Each custom
attribute consists of an attribute name and assigned value. The API provides commands to create
custom metadata entries for a file, replace values of existing entries, and delete entries. You can
create HTTP-StoreAll REST API shares to access Express Query on the file system.
When the StoreAll server receives an HTTP request, it parses the URL path (the portion immediately
following the hostname and port in the URL) and directs the request to the appropriate handler for
that HTTP share, which can be a standard HTTP share, a file-compatible mode API share, or an
object mode API share.
When listing StoreAll REST API-enabled shares through the Management Console, the RestApiMode
field indicates the share type and the URL Path field indicates the handler for a given HTTP request.
You can also obtain information about the RestAPIMode and URL Path, by entering the following
CLI command:
ibrix_httpshare -l
Component overview
The StoreAll REST API for the file-compatible mode has a number of components, such as custom
metadata assignments, metadata queries, and retention properties assignments.
File content transfer
You can upload and download files. You can also view the list of files and subdirectories of any
directory in the HTTP share, provided you have permission to navigate to that directory.
Custom metadata assignment
You can associate any number of custom (user-defined) metadata attributes to any file or directory
stored on a StoreAll retention-enabled file system where the Express Query metadata store is
enabled. Each custom attribute consists of an attribute name and assigned value. The API provides
commands to create custom metadata entries for a file, replace values of existing entries, and
delete entries.
Metadata queries
You can issue StoreAll REST API commands that query the pathname and custom and system
metadata attributes for a set of files and directories. Queries can be augmented with a search
criterion for a certain system or custom attribute; only files and directories that match the criterion
are included in the results. The query can specify a single file or a directory. If identifying a directory,
the user can query all files in that directory only, or all files in all subdirectories of that directory
recursively.
Retention properties assignment
You can issue StoreAll REST API commands to change a file to the WORM (and optionally retained)
state and set its retention expiration time, subject to the file system’s retention policy settings.
General topics regarding HTTP syntax
Each feature is described in HTTP (with URL-encoded characters where required) and equivalent
curl formats, such as:
PUT command
Enter the following command on one line:
PUT /<urlpath>[/<pathname>]?[version=1]assign=<attribute1>='<value1>'
[,<attribute2>='<value2>'…] HTTP/1.1
curl command
Enter the following command on one line:
curl -g -X PUT
"http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1]
assign=<attribute1>='<value1>'[,<attribute2>='<value2>'…]"
When using this syntax, note the following:
•
Optional parameters are shown in square brackets [ and ]. Everything enclosed in the brackets
can be omitted from the request. Do not include the square brackets in the request. For example,
the API supports either http or https for all requests, hence the http[s] nomenclature.
•
Parameters are shown in angle brackets < and >. Replace the parameter with the actual value,
without the angle brackets.
•
Other characters shown in the syntax (such as =, ?, &, and /) must also be entered as-is in
the request and sometimes must be URL-encoded.
•
All parameters before the ? (such as pathname) should be entered as strings without any
surrounding quotes in standard URL format.
•
All parameters after the ? (the query string in HTTP parlance) are either commands, attribute
names, or literals:
◦
Attribute names must be 80 characters or less. The first character must be alphabetic (a-z
or A-Z), followed by a sequence of alphanumeric characters or underscores. No other
characters are allowed. Colon characters (:) are allowed in system attribute names. All
attribute names are case-sensitive.
◦
Literals are either strings or numeric values.
◦
Literal strings must be enclosed in single quotes. Non-escaped UTF-8 characters are
allowed. Literals are case-sensitive. Any single quotes that are part of the string must be
escaped with a second single quote (no double quotes). For example:
'Dave''s book'
◦
Literal numeric values must not be enclosed by quotes, and are always in decimal (0-9).
•
All HTTP query responses generated by the API code follow the JSON standard. No XML
response format is provided at this time.
•
HTTP request messages have a practical limit of about 2000 bytes, which can be less if certain
proxy servers are traversed in the network path.
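The attribute-name and literal-string rules above can be expressed as a small validator and escaper. Both helpers are illustrative sketches; note that the colon allowed in system attribute names is intentionally excluded here, since users cannot create those names.

```python
import re

# Custom attribute names: first character alphabetic, then alphanumerics
# or underscores, 80 characters maximum.
ATTR_RE = re.compile(r"[A-Za-z][A-Za-z0-9_]{0,79}")

def valid_attribute_name(name):
    """Check a custom attribute name against the rules above."""
    return ATTR_RE.fullmatch(name) is not None

def quote_literal(value):
    """Enclose a literal string in single quotes, doubling any embedded
    single quotes, e.g. "Dave's book" -> "'Dave''s book'"."""
    return "'" + value.replace("'", "''") + "'"
```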
URL encoding
HTTP query strings are URL-decoded by the API code. API clients must encode special characters,
such as greater-than character (>), by replacing them with their hexadecimal equivalent values as
shown by the examples in this section.
The API’s URL decoder interprets certain special characters properly without being URL encoded.
Before the question mark character (?) in any HTTP request URL, the following characters are safe
and do not need to be URL encoded:
/ : - _ . ~ @ #
After the question mark character (?), the following characters are safe and do not need to be URL
encoded:
= & #
All other characters must be URL encoded as their hexadecimal value as described in the ISO-8859-1
(ISO-Latin) standard. For example, the plus character (+) must be encoded as %2B, and the greater
than character (>) must be encoded as %3E.
Spaces can be encoded as either %20 or as the plus character (+), such as "my%20file.txt"
or "my+file.txt" for the file "my file.txt". The plus character (+) is converted to a space
when the URL is decoded by the API code. To include a plus character (+) in the URL, encode it
as %2B, such as "A%2B" instead of "A+".
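These encodings can be reproduced with Python's standard urllib.parse functions, which follow the same conventions:

```python
from urllib.parse import quote, quote_plus

# A space can be encoded either as %20 or as the plus character:
assert quote("my file.txt") == "my%20file.txt"
assert quote_plus("my file.txt") == "my+file.txt"

# A literal '+' must be encoded as %2B so it is not decoded as a space:
assert quote_plus("A+") == "A%2B"

# Other special characters become their hexadecimal values:
assert quote(">", safe="") == "%3E"
```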
If you are using a tool such as curl to send the HTTP request, the tool might URL-encode certain
characters automatically, although you might have to enclose at least part of the URL in quotes for
it to do so. The exact behavior depends on the tool.
In the curl examples shown in this section, the entire URL is enclosed in double quotes so that the
non-encoded characters can be shown for readability. The curl tool URL-encodes all required
characters within the double quotes correctly. If you are using a different tool or constructing the
URL programmatically or manually, ensure that the right characters are URL-encoded before sending
it over HTTP to the API.
Pathname parameters
The pathname parameter provided in HTTP requests throughout the syntax must be specified as a
relative path from the <urlpath>, including the file name.
However, the system metadata attribute system::path available for metadata queries must be
specified as a path relative to the mount point of the StoreAll file system. Paths are stored in the
metadata database in this form.
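Deriving the system::path value from an absolute path can be sketched as follows; the mount point /mnt/ibrix_fs1 is a hypothetical example, not a product default.

```python
import posixpath

MOUNT_POINT = "/mnt/ibrix_fs1"   # hypothetical StoreAll mount point

def system_path(absolute_path):
    """Derive the system::path value stored in the metadata database:
    the path relative to the file system mount point."""
    return posixpath.relpath(absolute_path, MOUNT_POINT)

p = system_path("/mnt/ibrix_fs1/lab/images/xyz.jpg")
```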
Optional version parameter
Every StoreAll REST API request can optionally include the following literal string immediately after
the question mark character (?) in the request:
version=1
The version field is recommended, but not required. In the syntax descriptions, it is surrounded by
square brackets to indicate that it is optional.
Changes to this API will normally be backward-compatible, and not require any client-side syntax
changes to perform the same operations. However, certain changes might be required in future
API versions that break backward compatibility. In that case, the version is increased to the next
value. Any request without the version field might no longer work as desired or it might return an
error. Any request with the version field and a value less than or equal to the current version is
handled correctly by the new API version unless the capability has been removed or is beyond the
support lifetime of the product.
API date formats
All date/time values accepted by the API in HTTP requests must be in seconds only (no nanoseconds)
since the UNIX epoch start point, which is:
1 Jan 1970 00:00:00 UTC
In the following example, the user provided the number of seconds elapsed between the UNIX
epoch and April 17, 2012, 06:09:22 UTC/GMT as the date and time value in an HTTP request:
1334642962
All dates and times provided in API HTTP responses are in seconds and nanoseconds since the
UNIX epoch start point. For example, the following date/time value is returned by the StoreAll
REST API in a JSON HTTP response:
1334642962.678934883
This signifies the number of seconds since UNIX epoch on April 17, 2012, 06:09:22.678934883
UTC/GMT.
Some of the time fields stored in the inode of the file in the StoreAll file system have a granularity
of seconds only, with no nanoseconds. In these cases, the nanoseconds portion is returned as zeros
(for example, 1334642962.000000000).
Some of the time fields stored in the metadata database have a granularity of microseconds instead
of nanoseconds. In these cases, the last 3 digits of the nanoseconds portion are returned as zeros
(for example, 1334642962.865449000).
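Both directions of the conversion can be sketched with Python's datetime module, using the example timestamp above:

```python
from datetime import datetime, timezone

# Requests take whole seconds since the UNIX epoch; the example date
# April 17, 2012, 06:09:22 UTC converts as follows:
ts = int(datetime(2012, 4, 17, 6, 9, 22, tzinfo=timezone.utc).timestamp())

# Responses return seconds.nanoseconds; split such a value to interpret it:
raw = "1334642962.678934883"
seconds, nanos = raw.split(".")
when = datetime.fromtimestamp(int(seconds), tz=timezone.utc)
```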
Authentication and permissions
Any user accessing the API must authenticate as one of the valid users or groups configured for
this HTTP share, except for anonymous shares.
For file content transfer operations (see “File content transfer” (page 156)), including file upload,
download, or deletion, the user must also have permission to perform the file system operation, as
defined by the permissions and ownership of the file and its containing directories. Anonymous
users on anonymous shares can only operate on files that an anonymous user has uploaded using
the HTTP share.
For custom metadata assignment and metadata queries, the user must also have file system
permission to navigate to the directory containing the file or directory defined in the URI. If the user
has that permission, custom metadata assignment and queries will be allowed regardless of the
ownership or permissions of the file or directory. If a directory is specified, the operations will be
allowed on all files and subdirectories regardless of their ownership and permissions. Custom
metadata is stored only in the Express Query database and is not stored with the file or its inode
on the file system. An anonymous user on an anonymous share must have this navigate permission
as the Linux user “daemon” and group “daemon” (daemon:daemon), because that is the user the
HTTP server acts as for anonymous operations.
For retention properties assignment, the user must also have file system permission to navigate to
the directory containing the file defined in the URI. Additionally, the user must be the owner of the
file according to the file system’s properties for the file’s owning user. If these permissions are not
satisfied, the operation will not be allowed. Retention properties can never be assigned by
anonymous users on anonymous shares, but they can be assigned by authenticated users with
sufficient permissions, on anonymous shares.
File content transfer
Files can be uploaded and downloaded with the normal StoreAll HTTP shares feature with WebDAV
enabled, as described in earlier sections. In addition, the API defines an HTTP DELETE command
to delete a file. The delete command is only for WebDAV-enabled shares.
Upload a file (create or replace)
This command transfers the contents of a file from the client to the HTTP share. If the file identified
does not already exist on the share, it will be created. If it already exists, the contents will be
replaced with the client’s file, if allowed by the file’s StoreAll permissions and retention properties.
Upload capability exists already in the StoreAll HTTP shares feature, and it is documented here
for completeness.
File creation and replacement is subject to StoreAll permissions on the file and directory, and it is
subject to retention settings on that file system. If it is denied, an HTTP error is returned.
The HTTP command is sent in the form of an HTTP PUT request.
HTTP syntax
The HTTP request line format is:
PUT /<urlpath>/<pathname> HTTP/1.1
The file’s contents are supplied as the HTTP message body.
The equivalent curl command format is:
NOTE:
The following command should be entered on one line.
curl -T <local_pathname>
http[s]://<IP_address>:<port>/<urlpath>/<pathname>
If the urlpath does not exist, a 405 (Method Not Allowed) HTTP error is returned.
See “Using HTTP” (page 114) for information about the IP address, port, and URL path.
Parameter
Description
local_pathname
The pathname of the file, stored on the client’s system, to be uploaded to the HTTP share.
pathname
The pathname to be assigned to the new file being created on the HTTP share, if the file
does not yet exist. If the file does exist, the file is overwritten. The pathname should
be specified as a relative path from the <urlpath>, including the file’s name.
Example
curl -T temp/a1.jpg https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg
This example uploads the file a1.jpg, stored on the client’s machine in the temp subdirectory of
the user’s current directory, to the HTTP share named ibrix_share1.
The share is accessed by the IP address 99.226.50.92. Because it is accessed using the standard
HTTPS port (443), the port number is not needed in the URL.
The file is created as filename xyz.jpg in the subdirectory lab/images on the share. If the file
already exists at that path in the share, its contents are overwritten by the contents of a1.jpg,
provided that StoreAll permissions and retention settings on that file and directory allow it. If the
overwriting is denied, an HTTP error is returned.
If the local file does not exist, the response behavior depends on the client tool. In the case of
curl, it returns an error message, such as the following:
curl: can't open '/temp/a1.jpg'
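The curl -T upload above corresponds to an HTTP PUT whose message body is the file's contents. The following Python sketch builds (but does not send) the equivalent request; the build_upload helper is illustrative, and the share address is taken from the example above.

```python
import urllib.request

def build_upload(local_bytes, share_url, remote_path):
    """Build (without sending) the PUT request that uploads a file to an
    HTTP share, mirroring: curl -T <local> <share_url>/<remote_path>."""
    return urllib.request.Request(
        f"{share_url}/{remote_path}",
        data=local_bytes,            # file contents become the message body
        method="PUT",
    )

req = build_upload(b"<jpeg bytes>",
                   "https://99.226.50.92/ibrix_share1",
                   "lab/images/xyz.jpg")
```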
Download a file
This command transfers the contents of a file to the client from the HTTP share. Download capability
already exists in the StoreAll HTTP shares feature, and it is documented here for completeness. If
the file does not exist, a 404 Not Found HTTP error is returned, in addition to HTML output such
as the following:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /api/myfile.txt was not found on this server.</p>
</body></html>
If using curl, the HTML output is saved to the specified local file as if it were the contents of the
file.
The HTTP command is sent in the form of an HTTP GET request.
HTTP syntax
The HTTP request line format is:
GET /<urlpath>/<pathname> HTTP/1.1
The equivalent curl command format is:
curl -o <local_pathname>
http[s]://<IP_address>:<port>/<urlpath>/<pathname>
See “Using HTTP” (page 114) for information about the IP address, port, and URL path.
Parameter
Description
local_pathname
The pathname of the file to be downloaded from the HTTP share and stored on the client’s
system.
pathname
The pathname of the existing file on the HTTP share to download to the client.
Example
curl -o temp/a1.jpg http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg
This example downloads an existing file called xyz.jpg in the lab/images subdirectory of the
ibrix_share1 HTTP share. The file is created with the filename a1.jpg on the client system, in
the subdirectory temp of the user’s current directory.
If the file already exists at that path on the client, its contents are overwritten by the contents of
xyz.jpg, provided that the local client’s permissions and retention settings on that file and directory
allow it. If the overwriting is denied, a local client system-specific error message is returned.
Delete a file
This command removes a file from the StoreAll file system by using the HTTP share interface. File
deletion is subject to StoreAll permissions on the file and directory, and it is subject to retention
settings on that file system. If file deletion is denied, an HTTP error is returned. If the file does not
exist, a 404 Not Found HTTP error is returned.
NOTE:
The delete command is only for WebDAV-enabled shares.
The HTTP command is sent in the form of an HTTP DELETE request.
HTTP syntax
The HTTP request line format is:
DELETE /<urlpath>/<pathname> HTTP/1.1
The equivalent curl command format is:
curl -X DELETE http[s]://<IP_address>:<port>/<urlpath>/<pathname>
See “Using HTTP” (page 114) for information about the IP address, port, and URL path.
Parameter
Description
pathname
The pathname of the existing file on the HTTP share to be deleted.
Example
curl -X DELETE http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg
This example deletes the existing file called xyz.jpg in the lab/images subdirectory on the
ibrix_share1 HTTP share.
Custom metadata assignment
The API provides commands to define a custom metadata attribute for an existing file or directory
in an HTTP share and to delete an existing custom metadata attribute.
For information about retrieving or querying existing custom metadata already applied to files,
see “Metadata queries” (page 152).
Upload custom metadata (add or replace)
This command adds or modifies one or more custom metadata attributes for an existing file or
directory on the HTTP share. If the specified attribute does not exist for the file, it is added to the
custom metadata list for that file or directory. If an attribute already exists, the current value is
replaced with the client’s value.
Custom metadata applied to a directory does not also apply to files or directories in that directory,
only to the directory itself.
Up to 15 metadata attributes can be assigned in one command. However, there is no defined limit
on the number of metadata attributes that can be assigned to a file. To assign more than 15
attributes, send multiple PUT requests for the same file.
The ability to add or replace custom metadata depends on the permissions of the directory and
file being updated. The custom metadata can be changed if the user has permission to navigate
to the directory containing the file (depending on the “x” permission bits of the directory and all
of its parents) and the user has read permission on the file.
NOTE: The file's write permission is ignored. This behavior will be corrected in a future release
so that the file's write permission is checked instead of its read permission.
Although attributes and values may be up to 80 characters each, HTTP request messages have a
practical limit of about 2000 bytes, which supersedes these maximums.
When custom metadata is added, the file or directory becomes protected from renames or moves
and such actions will be denied, even if all custom metadata is later removed. This protection is
independent of file WORM and retention states.
If the file does not exist, a "500 (Internal Server Error)" HTTP error is returned (not a "404 Not
Found").
The HTTP command is sent in the form of an HTTP PUT request.
HTTP syntax
The HTTP request line format is:
NOTE:
Enter the following commands on a single line.
PUT command
PUT /<urlpath>[/<pathname>]?[version=1]assign=<attribute1>='<value1>'
[,<attribute2>='<value2>'…] HTTP/1.1
curl command
The equivalent curl command format is:
curl -g -X PUT "http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1]
assign=<attribute1>='<value1>'[,<attribute2>='<value2>'…]"
See “Using HTTP” (page 114) for information about the IP address, port, and URL path.
If the urlpath does not exist, a 405 (Method Not Allowed) HTTP error is returned.
Parameter
Description
pathname
The name of the existing file/directory on the HTTP share for which custom metadata is
being added or replaced.
Directory pathnames must end in a trailing slash /.
If the pathname parameter is not present, custom metadata is applied to the directory
identified by <urlpath>.
attribute[n]
The attribute name. Up to 15 attributes can be assigned in a single command. The first
character must be alphabetic (a-z or A-Z), followed by a sequence of alphanumeric
characters or underscores. No other characters are allowed. Attribute names must be
80 characters in length or less.
value[n]
The value to associate with this attribute. Currently, only a string value can be assigned
and the value must be enclosed in single quotes. Future versions of the API may support
numeric or other value types. If the attribute already exists for this file or directory, its
value will be replaced with this supplied value. Values must be 80 characters
in length or less.
Example
curl -g -X PUT
"https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=physician
='Smith,+John;+8136',scan_pass='17'"
This example assigns two custom metadata attributes to the existing file called xyz.jpg in the
lab/images subdirectory on the ibrix_share1 HTTP share.
•
The first attribute is physician. Its value contains the last name, first name, and
physician’s ID number in the medical center. The metadata value for this key is
Smith, John; 8136, with spaces encoded as the plus character (+).
•
The second attribute is scan_pass. Its value identifies this image as the 17th pass of a
multi-image scan. The metadata value for this key is 17. All custom metadata values, even
if they are numeric, must be quoted, since all custom metadata values are stored as strings.
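The query string in the example above can be assembled programmatically. This sketch (assign_query is a hypothetical client-side helper) single-quotes each value and encodes its spaces as the plus character, keeping commas and semicolons readable as in the example.

```python
from urllib.parse import quote_plus

def assign_query(**attrs):
    """Build the assign= query string for a custom-metadata PUT, with each
    value single-quoted and its spaces encoded as '+'."""
    pairs = []
    for name, value in attrs.items():
        encoded = quote_plus(value, safe=",;")   # keep ',' and ';' literal
        pairs.append(f"{name}='{encoded}'")
    return "assign=" + ",".join(pairs)

q = assign_query(physician="Smith, John; 8136", scan_pass="17")
```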
Delete custom metadata
This command removes one or more metadata attributes from an existing file or directory in the
HTTP share. Up to 15 metadata attributes may be removed in one command.
The ability to delete custom metadata is not currently constrained by file permissions or the file's
WORM/retention settings. If the file's directory is accessible to the API user, custom metadata
operations are allowed, regardless of file permissions or the WORM state of the file.
Deleting all custom metadata does not remove the protection from renames or moves. This protection
is independent of file WORM and retention states. If the file does not exist, no HTTP error status is
returned. A JSON error message is returned instead, as shown in the following example:
[
{ "physician" : "error in deleting" }
]
If the file exists but any attributes being deleted do not exist, no HTTP error status is returned, and
the non-existent attributes are silently ignored.
The HTTP command is sent in the form of an HTTP DELETE request.
HTTP syntax
The HTTP request line format is the following on one line:
DELETE /<urlpath>[/<pathname>]?[version=1]attributes=<attribute1>
[,<attribute2>…] HTTP/1.1
The equivalent curl command format is the following on one line:
curl -g -X DELETE
"http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1]
attributes=<attribute1>[,<attribute2>…]"
See “Using HTTP” (page 114) for information about the IP address, port, and URL path.
Parameter
Description
pathname
The name of the existing file/directory on the HTTP share for which custom metadata is
to be deleted.
Directory pathnames must end in a trailing slash /.
attribute[n]
The existing name(s) for the custom metadata attribute(s) to be deleted from the file or
directory custom metadata list.
Example
curl -g -X DELETE "http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?
attributes=physician,scan_pass"
This example deletes two custom metadata attributes from an existing file called xyz.jpg in the
lab/images subdirectory on the ibrix_share1 HTTP share. The first attribute to delete is
physician and the second is scan_pass.
Metadata queries
The API provides a command to query the metadata about a file or directory on a StoreAll HTTP
share. The command defines the file or directory to query, the metadata fields to return, and how
to filter the list of files and directories returned based on metadata criteria. All queries are performed
on the Express Query database, requiring no other file system access or scans.
The HTTP command is sent in the form of an HTTP GET request.
System and custom metadata
Two types of metadata are supported for queries, and both can be referenced in the same query:
•
System metadata applies to all files and directories. Each file and directory stored in StoreAll
includes a fixed set of attributes comprising its system metadata. System metadata attributes
are distinguished from custom metadata attributes by the system:: prefix. System metadata
attributes cannot be deleted by the user through the API.
•
Custom metadata applies only to files and directories where the user assigns them. Custom
metadata names are user-defined, with value strings also defined by the user. Custom metadata
is meaningful to the user, but it is not used by StoreAll. Custom metadata can be added,
replaced, or deleted by the user (see “Custom metadata assignment” (page 152)).
System metadata available
The following table describes the system metadata attributes available for query and updates using
the API.
For "date" types, see “API date formats” (page 155).
With the exception of system::deleteTime, all of the system metadata attributes listed in this
table are valid for live (that is, not-yet-deleted) files and directories. For deleted files, only
the following attributes are valid: system::path, system::deleteTime,
system::lastActivityTime, and system::lastActivityReason.
system::path
Type: string
Writable: no
Description: The pathname of the file or directory, expressed as a path relative to the mount
point of the StoreAll file system. This attribute is always returned above the JSON stanza of
requested attributes within curly braces { }, not inside the stanza.
Example: images/xray.jpg

system::ownerUserId
Type: numeric
Writable: no
Description: The StoreAll user ID (UID) number of the owner of the file or directory.
Example: 433

system::size
Type: numeric
Writable: no
Description: The file size. The number of bytes stored by StoreAll to hold the file’s contents.
Example: 1025489

system::ownerGroupId
Type: numeric
Writable: no
Description: The StoreAll group ID (GID) number of the primary group to which the owner of the
file or directory belongs.
Example: 700
system::onDiskAtime
Type: numeric
Writable: no
Description: The date/time recorded in the atime field of the file inode in the file system. See
“system::onDiskAtime” (page 165).
Example: Query criteria (seconds): 1334642962. JSON response (including nanoseconds):
1334642962.556708192. See “API date formats” (page 155).

system::lastChangedTime
Type: numeric
Writable: no
Description: The date/time of the last status change (ctime).
Example: See “API date formats” (page 155).

system::lastModifiedTime
Type: numeric
Writable: no
Description: The date/time of the last content modification (mtime).
Example: See “API date formats” (page 155).

system::retentionExpirationTime
Type: numeric
Writable: yes (see “Retention properties assignment” (page 153))
Description: The date/time when a retained file will expire (or has expired) from retention. After
expiration, the file reverts to WORM but not retained status. This attribute applies only to files,
returning 0 for directories. If a file has never been retained, this value is 0.
Example: See “API date formats” (page 155).

system::mode
Type: numeric
Writable: no
Description: The Linux mode/permission bits, a combination of the values shown by the Linux
man 2 stat command. See “system::mode” (page 165) for more information.
Example: A decimal number, such as 33060 for the octal value 0100444 (regular file, read-only
for owner / group / other).

system::tier
Type: string
Writable: no
Description: The user-defined name of the StoreAll tier of storage hosting this file or directory.
If the file is stored in a segment that is not assigned to any tier, the string literal no tier
is returned.
Example: tier1_fast

system::createTime
Type: numeric
Writable: no
Description: The date/time when the file or directory was created (added or uploaded) to the
StoreAll file system.
Example: See “API date formats” (page 155).
System attribute (key)
Type
Description
Example
system::retentionState
  Type: numeric
  Description: The current WORM/retention state of the file, which is a combination of these bit
  values:
    0x01: WORM
    0x02: Retained
    0x04: (not used)
    0x08: Under legal hold
  This attribute applies only to files, returning 0 for directories. A value of zero for a file indicates
  a normal file, not WORM or retained.
  Example: A decimal number, such as 11 for the bit value 0x0B (under legal hold, and retained,
  and WORM)
  Writable: partial (see system::worm)
system::worm
  Type: boolean
  Description: WORM status of the file.
  Example: true
  Writable: yes, to true only, at most one time (see "Retention properties assignment" (page 153))
system::deleteTime
  Type: numeric
  Description: The date/time when the file or directory was deleted from the StoreAll file system.
  The system::deleteTime attribute is only valid for deleted files. Deleted files are returned
  in query results only if the query explicitly includes system::deleteTime as an attribute to
  be returned or as a query criterion.
  Example: See "API date formats" (page 155).
  Writable: no
system::lastActivityTime
  Type: numeric
  Description: The latest date/time of the following five values for the file or directory:
    system::createTime
    system::lastModifiedTime
    system::lastChangedTime
    system::deleteTime
    custom metadata assignment time
  The system::lastActivityTime attribute is useful for determining the last date/time at
  which a file had any modification activity. This attribute is returned in query results only if the
  request explicitly includes system::lastActivityTime as an attribute to be returned.
  Example: See "API date formats" (page 155).
  Writable: no
system::lastActivityReason
  Type: numeric
  Description: The attribute that is represented by system::lastActivityTime, which is a
  combination of the following values:
    0x01: system::createTime
    0x02: system::lastModifiedTime
    0x04: system::lastChangedTime
    0x08: system::deleteTime
    0x10: custom metadata assignment time (not queryable as a system:: attribute)
  This attribute is returned in query results only if the request explicitly includes
  system::lastActivityReason as an attribute to be returned.
  Example: A decimal number, such as 6, signifying that the last activity on this file was a content
  modification, which changes both system::lastModifiedTime (0x2) and
  system::lastChangedTime (0x4).
  Writable: no
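The bit-mask values of system::retentionState and system::lastActivityReason can be unpacked
client-side. The following is a minimal shell sketch; the helper function names are our own, not
part of the StoreAll API:

```shell
# Decode the bit-mask attribute values described above into flag names.
decode_retention_state() {
    state=$1; flags=""
    [ $(( state & 0x01 )) -ne 0 ] && flags="$flags WORM"
    [ $(( state & 0x02 )) -ne 0 ] && flags="$flags retained"
    [ $(( state & 0x08 )) -ne 0 ] && flags="$flags legal-hold"
    echo "${flags# }"
}

decode_activity_reason() {
    r=$1; out=""
    [ $(( r & 0x01 )) -ne 0 ] && out="$out createTime"
    [ $(( r & 0x02 )) -ne 0 ] && out="$out lastModifiedTime"
    [ $(( r & 0x04 )) -ne 0 ] && out="$out lastChangedTime"
    [ $(( r & 0x08 )) -ne 0 ] && out="$out deleteTime"
    [ $(( r & 0x10 )) -ne 0 ] && out="$out customMetadataAssignment"
    echo "${out# }"
}

decode_retention_state 11    # 0x0B: WORM, retained, under legal hold
decode_activity_reason 6     # 0x06: content modification
```

An empty result from decode_retention_state corresponds to a normal (non-WORM) file.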
system::onDiskAtime
The atime inode field in StoreAll can be accessed as the system::onDiskAtime attribute from
the API. This field represents different concepts in the lifetime of a WORM/retained file, and it
often represents a concept other than the time of the file’s last access, which is why the field was
named onDiskAtime rather than (for example) lastAccessedTime. (See “Retention properties
assignment” (page 153) for a description of this life cycle).
•  Before a file is retained, whether in the WORM state or not, atime represents the last accessed
   time, as long as the file system is mounted with the non-default atime option. If the file system
   is mounted with the default noatime option, atime is the file's creation time and never changes
   unless the file is retained (see the second bullet). See "Creating and mounting file systems"
   (page 14) for more information about mount options.
•  While a file is in the retained state, atime represents the retention expiration time.
•  After retention expires, atime represents the time at which the file was first retained (even if
   the file has been retained and expired more than once), and it never changes again unless
   the file is re-retained (see the second bullet).
If you have enabled the auditing of file read events, reads are logged in the audit logs. However,
file reads do not update system::onDiskAtime even if they are being audited. All other file
accesses modify system::onDiskAtime with the current value of atime. Therefore, before the
file is retained (first bullet), if the file system is mounted with the atime option,
system::onDiskAtime represents the last accessed time before the last file modification, not
necessarily the current atime or the last accessed time. To list all read accesses to a file, use the
ibrix_audit_reports command as described in the CLI Reference Guide.
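The epoch-seconds values used for all of these time attributes (see "API date formats" (page 155))
can be converted for display with GNU date; a sketch (BSD date would use -r instead of -d):

```shell
# Convert an API time value (seconds since the Unix epoch) to readable form.
epoch=1334642962
date -u -d "@$epoch" '+%Y-%m-%d %H:%M:%S UTC'
```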
system::mode
The following system::mode bits are defined (in octal):
0140000   socket
0120000   symbolic link
0100000   regular file
0060000   block device
0040000   directory
0020000   character device
0010000   FIFO
0004000   set-user-ID bit
0002000   set-group-ID bit
0001000   sticky bit
0000400   owner has read permission
0000200   owner has write permission
0000100   owner has execute permission
0000040   group has read permission
0000020   group has write permission
0000010   group has execute permission
0000004   others have read permission
0000002   others have write permission
0000001   others have execute permission
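Because the JSON response reports system::mode in decimal, a client can convert it back to octal
and mask out the file-type and permission bits in the shell; for example, for the value 33060 used
earlier:

```shell
# 33060 is the decimal form of octal 0100444 (regular file, mode 444).
mode=33060
printf 'octal=%o\n' "$mode"
printf 'type=%o perms=%o\n' $(( mode & 0170000 )) $(( mode & 07777 ))
```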
Wildcards
The StoreAll REST API provides three wildcards:
*
  A single attribute name of * returns all system and custom metadata attributes for the files and
  directories matching the query.
system::*
  A single attribute name of system::* returns all system metadata attributes for the files and
  directories matching the query. It does not include any custom metadata entries.
custom::*
  A single attribute name of custom::* returns all custom (user-defined) metadata attributes for
  the files and directories matching the query. It does not include any system metadata entries.
For wildcards that return system metadata attributes, the results will not include attributes that
describe deleted files (system::deleteTime, system::lastActivityTime, and
system::lastActivityReason).
Pagination
The StoreAll REST API provides a way for users to specify a portion of the total list of records (files
and directories) to return in the JSON query results.
skip
  The skip parameter defines the number of records to skip before returning any results. The
  value is zero-based. For example, skip=100 skips the first 100 records of the results, and the
  101st and later records are returned. If the total result set contains fewer than 101 records, no
  results are returned.
top
  The top parameter defines the maximum number of total records to return. For example,
  top=2000 returns at most 2000 rows.
The skip and top parameters can be combined. For example, supplying both skip=100 and
top=2000 returns records 101 through 2100. By combining these two parameters, the user can
absorb a large result set in chunks, for example, records 1-2000, 2001-4000, and so on.
The following limitations apply:
•  Every query is executed in full, even if only a subset of results is returned. For some queries,
   this may place a substantial load on the system. Keeping top values as large as possible limits
   this load.
•  Because a query is executed for every request, there may be inconsistencies in query results
   if files are created or deleted between API requests.
By default, if the skip parameter is not supplied, the results do not skip any records. Similarly,
if the top parameter is not supplied, the results contain all records.
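The chunked-retrieval pattern described above can be sketched as a loop that advances skip by
top until an assumed total is covered (a real client would instead keep requesting pages until the
empty JSON result is returned):

```shell
# Emit the skip/top parameter pairs needed to page through a result set.
# 'total' is assumed known here purely for illustration.
top=2000
total=5000
skip=0
while [ "$skip" -lt "$total" ]; do
    echo "skip=${skip}&top=${top}"
    skip=$(( skip + top ))
done
```

This prints skip=0&top=2000, skip=2000&top=2000, and skip=4000&top=2000, covering records
1 through 5000.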
HTTP syntax
The HTTP request line format is the following on one line:
GET /<urlpath>[/[<pathname>]]?[version=1][attributes=<attr1>[,<attr2>,…]]
[&query=<query_attr><operator><query_value>][&recurse][&skip=<skip_records>]
[&top=<max_records>][&ordered] HTTP/1.1
The equivalent curl command format is the following on one line:
curl -g "http[s]://<IP_address>:<port>/<urlpath>[/[<pathname>]]?
[version=1][attributes=<attr1>[,<attr2>,…]]&query=<query_attr><operator><query_value>
[&recurse][&skip=<skip_records>][&top=<max_records>][&ordered]"
See “Using HTTP” (page 114) for information about the IP address, port, and URL path. If the
urlpath or pathname does not exist, a JSON output of no results is returned (see the “JSON
response format” (page 168)), and the HTTP status code 200 (OK) is returned rather than an HTTP
error such as 404 (Not Found).
pathname
  The name of the existing file or directory on the HTTP share, if querying metadata of a single
  file/directory. If not present, the query applies to the <urlpath>. Furthermore:
  • Directory pathnames must end in a trailing slash (/).
  • If the &recurse identifier is supplied for a directory, the query applies to the entire directory
    tree: the directory itself, all files in that directory, and all subdirectories recursively.
  • If the &recurse identifier is not supplied and the pathname is for a directory, the query
    operates only on the given directory and the files in that directory, but not on subdirectories.
  • If the pathname is for a file, the query applies only to that file.
attr[n]
  A comma-separated list of system and/or custom metadata attribute names to be returned in
  the JSON response for each file or directory matching the query criterion. The special attribute
  names *, system::*, and custom::* are described under "Wildcards" (page 166).
query_attr
  A system or custom metadata attribute to be compared against the value as the query criterion.
  Only one attribute can be listed per command.
operator
  The query operation to perform against the query_attr and value, one of:
  = (equals exactly)
  != (does not equal)
  < (less than)
  <= (less than or equal to)
  > (greater than)
  >= (greater than or equal to)
  Only for custom attributes and string-valued system attributes (for example, system::path,
  system::tier):
  ~ (matches regular expression)
  !~ (does not match regular expression)
query_value
  The value to compare against the query_attr using the operator. The value is either a
  numeric or string literal. See "General topics regarding HTTP syntax" (page 153) for details
  about literals.
recurse
  If the recurse attribute is present, the query searches the given directory and all of its
  subdirectories. If it is not present, the query operates only on the given file or directory and the
  files in that directory (but not subdirectories). See pathname earlier in this table for details.
skip_records
  If this attribute is present, it defines the number of records to skip before returning any results.
  The value is zero-based. See "HTTP syntax" (page 166).
max_records
  If this attribute is present, it defines the maximum number of total records to return from the
  result set. See "HTTP syntax" (page 166).
ordered
  If this attribute is present, the list of files and attributes returned is sorted lexicographically by
  file name. Using ordered on large result sets might affect the performance of the query. Without
  ordered, files might occur in any order in the result set.
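As an illustration of the syntax above, the following sketch assembles a complete query URL from
its parts, using the example address and share names that appear elsewhere in this chapter (they
are not real endpoints), and shows how it would then be passed to curl:

```shell
# Build a metadata query URL piece by piece.
host="99.226.50.92"
share="ibrix_share1"
path="lab/images/"
attrs="system::size,system::tier"
criterion="system::size>2048"
url="http://${host}/${share}/${path}?attributes=${attrs}&query=${criterion}&recurse"
echo "$url"
# curl -g "$url"    # -g disables curl globbing so special characters pass through
```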
Regular expressions
The arguments to the regular expression operators (~ and !~) are POSIX regular expressions, as
described in POSIX 1003.1-2008 at http://pubs.opengroup.org/onlinepubs/9699919799/,
section 9, Regular Expressions.
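A POSIX extended regular expression of the kind accepted by ~ and !~ can be tried out locally
with grep -E before embedding it in a query; for instance, the pattern used later in this chapter to
match .gif and .jpg paths:

```shell
# Filter sample paths with the same POSIX ERE used in the name-pattern example.
printf '%s\n' lab/images/a.gif lab/images/b.jpg lab/notes.txt \
    | grep -E '\.(gif|jpg)$'
```

Only the .gif and .jpg paths survive the filter.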
JSON response format
The result of the query is an HTTP response in JSON format, as in the following example:
[
{
"mydir" :
{
"system::ownerUserId" : 1701,
"system::size" : 0,
"system::ownerGroupId" : 650,
"system::onDiskAtime" : 1346895723.552374000,
"system::lastAccessedTime" : 1346895723.552810000,
"system::lastChangedTime" : 1346895723.552374000,
"system::lastModifiedTime" : 1346895723.552374000,
"system::retentionExpirationTime" : 0.000000000,
"system::mode" : 16877,
"system::tier" : "no tier",
"system::createTime" : 1346895723.552374000,
"system::retentionState" : 0,
"system::worm" : false
}
},
{
"mydir/myfile.txt" :
{
"system::ownerUserId" : 1701,
"system::size" : 3,
"system::ownerGroupId" : 650,
"system::onDiskAtime" : 1378432229.000000000,
"system::lastAccessedTime" : 1346896240.316746000,
"system::lastChangedTime" : 1346896235.000000000,
"system::lastModifiedTime" : 1346895753.000000000,
"system::retentionExpirationTime" : 1378432229.000000000,
"system::mode" : 33060,
"system::tier" : "no tier",
"system::createTime" : 1346895753.815070000,
"system::retentionState" : 3,
"system::worm" : true,
"scan_pass" : "17",
"physician" : "Smith, John; 8136"
}
}
]
If no files or directories meet the criteria of the query (an empty result set), or if the urlpath or pathname
does not exist, then a JSON output of no results is returned, consisting of just an open and close
bracket on two separate lines:
[
]
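A client can recognize this no-results response by compacting whitespace and comparing against
an empty JSON array; a minimal shell sketch (the helper name is ours, not part of the API):

```shell
# Return success (exit 0) if a response body is the empty-result form shown above.
is_empty_result() {
    [ "$(printf '%s' "$1" | tr -d ' \t\n\r')" = "[]" ]
}

response='[
]'
if is_empty_result "$response"; then echo "no results"; fi
```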
Example queries
Get selected metadata for a given file
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?
attributes=system::size,physician"
This example queries only the file called xyz.jpg in the lab/images subdirectory on the
ibrix_share1 HTTP share. A JSON document is returned containing the system size value and
the custom metadata value for the physician attribute, for this file only.
Get selected system metadata for all files in a given directory
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
system::mode,system::tier"
This example queries only the directory lab/images on the ibrix_share1 HTTP share. A JSON
document is returned containing the POSIX mode/permission bits and storage tier name for the
lab/images directory itself in addition to the files and directories in lab/images (but not in
any recursive subdirectories, because there is no &recurse option).
Get selected metadata for all files in a given directory
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
system::size,physician"
This example queries all files in the ibrix_share1 HTTP share in the subdirectory lab/images
of the share, but not files or directories in any subdirectories. A JSON document is returned
containing the system size value and the custom metadata value for the physician attribute, for
all files in lab/images. For files that don’t have a physician attribute, only the system::size
is returned.
Get selected metadata for all files in a given directory tree
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
system::size,physician&recurse&ordered"
This example queries all files in the lab/images subdirectory of the ibrix_share1 HTTP share,
in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document
is returned containing the system size value and the custom metadata value for the physician
key, for all files and subdirectories in the lab/images directory tree, as well as for the lab/images
directory itself. The list of files is ordered alphabetically by file name.
Get selected metadata for a page of files in a given directory tree
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
system::size,physician&recurse&skip=2000&top=100"
This example queries all files in the lab/images subdirectory of the ibrix_share1 HTTP share,
in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document
is returned containing the system size value and the custom metadata value for the physician
key, for result set entries 2001 through 2100. The results are not ordered, which speeds up the
query. In a typical scenario, such as in the example mentioned in this section, the client has already
issued queries to receive the first 2000 results. The client usually issues further queries until no more
results are returned.
Get selected metadata for all files in a given directory tree that match a system metadata query
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
system::size,physician&query=system::size>2048&recurse"
This example queries all files larger than 2 KB in the lab/images subdirectory of the
ibrix_share1 HTTP share, as well as all files in all subdirectories, recursively walking the
directory tree. A JSON document is returned containing the system size value and the custom
metadata value for the physician attribute, for all > 2 KB files and subdirectories in the
lab/images directory tree, as well as for the lab/images directory itself (if > 2 KB).
Get selected metadata for all files in a given directory tree that match a custom metadata query
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
system::size,physician&query=department!='billing'&recurse"
This example queries all files that have a custom metadata attribute of department with a value
other than billing, in the lab/images subdirectory of the ibrix_share1 HTTP share,
in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document
is returned containing the system size value and the custom metadata value for the physician
attribute, for all files and subdirectories not in the billing department in the lab/images
directory tree. Files without a department attribute are not included in the results.
Get all metadata for all files in a given directory tree that match a custom metadata query
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
*&query=physician~'^S.*'&recurse"
This example queries all files that have a custom metadata attribute of physician with a value
that starts with S in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition
to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned
containing all attribute values, for all files and subdirectories in the lab/images directory tree
that match the custom metadata criterion.
Get all custom metadata for all files in a given directory tree that match a custom metadata query
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
custom::*&query=physician~'^S.*'&recurse"
This example queries all files that have a custom metadata attribute of physician with a value
that starts with S in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition
to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned
containing all custom metadata attribute values, for all files and subdirectories in the lab/images
directory tree that match the custom metadata criterion.
Get all files that match a name pattern
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images?query=
system::path~'.*\.(gif|jpg)$'"
This example returns a JSON document that contains all files in the lab/images directory that
end in .gif or .jpg.
Get all activity-related times for files with recent activity
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/?attributes=system::
createTime,system::lastChangedTime,system::lastModifiedTime,system::deleteTime&query=system::
lastActivityTime>1334642962"
This example returns a JSON document that contains all files in the lab directory that have
experienced activity since April 17, 2012, 06:09:22 UTC/GMT. For live files, the following
attributes are returned: system::createTime, system::lastChangedTime, and
system::lastModifiedTime. For deleted files, system::deleteTime is returned.
Retention properties assignment
Retention and WORM support was initially implemented in StoreAll v6.0 following an atime-based
retention date interface, to be compatible with existing products that implement retention this way.
This feature is independent of the StoreAll REST API.
Briefly, without the API, the sequence of events is:
1. A user creates or uploads a file to a StoreAll file system.
2. If the autocommit feature is enabled, the file becomes WORM after a certain period of inactivity.
If a non-zero default retention period is defined for the file system, then the file is also set to
the retained state for that period of time, at the same time it becomes WORM.
3. If autocommit is not enabled, the user sets the last access time (atime) to the desired retention
   expiration date/time (or skips this step if a non-zero default retention period is defined for the
   file system and the user does not want to override that period).
4. If autocommit is not enabled, after setting the atime, the user turns off all write permission bits
   on the file. This triggers the file's state transition to WORM, and also to retained if the atime
   is in the future or if there is a non-zero default retention period.
5. Later, the user can change the atime to change the expiration time, subject to the file system’s
retention policy settings.
The StoreAll REST API commands provide the same ability to perform these actions, but in fewer
steps.
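Steps 3 and 4 of the manual sequence can be sketched on an ordinary Linux file; on a StoreAll
file system with retention enabled, the chmod is what triggers the WORM/retained transition.
GNU touch and stat syntax is assumed, and the temporary file here is purely illustrative:

```shell
# Step 3: set atime to the desired retention expiration date/time.
# Step 4: clear all write permission bits.
f=$(mktemp)
touch -a -d '2030-01-01 00:00:00 UTC' "$f"
chmod a-w "$f"
atime=$(stat -c '%X' "$f")     # 1893456000 = 2030-01-01 00:00:00 UTC
perms=$(stat -c '%a' "$f")     # 400: read-only for the owner
echo "atime=$atime perms=$perms"
rm -f "$f"
```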
HTTP syntax
The commands provided in this section should be entered on one line.
The HTTP request line format is the following on one line:
PUT
/<urlpath>/<pathname>?assign=[system::retentionExpirationTime=<retentionExpirationTime>]
[,system::worm='true'] HTTP/1.1
The equivalent curl command format is the following on one line:
curl -g -X PUT "http[s]://<IP_address>:<port>/<urlpath>/<pathname>?assign=
[system::retentionExpirationTime=<retentionExpirationTime>][,system::worm='true']"
Either or both of system::retentionExpirationTime and system::worm can be specified.
pathname
  The name of an existing file on the HTTP share. The retention properties of this file will be
  changed.
system::retentionExpirationTime
  If present, defines the date/time at which the file should expire from the retained state. After
  that time, the file will still be WORM (immutable) forever, but the file can be deleted. The
  date/time must be formatted according to "API date formats" (page 155).
  If the file is not currently in the retained state, the date/time is stored as the file's atime, but
  retention rules are only applied to the file if system::worm=true in this command or a later
  command.
  If the file is already retained, the date/time is changed to
  system::retentionExpirationTime, unless system::retentionExpirationTime
  is earlier than the file's existing retention expiration date/time and the file system's retention
  mode is set to enterprise. In this case, an error is returned and the date/time is not changed.
  The retention period can be shortened only in relaxed mode, not in enterprise mode.
  If not present, and system::worm is present, the default retention period is applied to the
  file, if a default is defined for this file system. If no default is defined, then the file becomes
  WORM (immutable) but not retained (so it can still be deleted).
system::worm
  This attribute sets the state of the file to WORM. If present, the value must be the literal string
  true; no other value is accepted. At the same time, if the atime (retention expiration date/time)
  is in the future, or if the file system's default retention period is nonzero, it sets the retention
  expiration date/time either to the atime (if it is in the future) or according to the default retention
  period.
  A file's state can be changed to WORM only once. A file in the WORM or retained state cannot
  be reverted to non-WORM, and cannot be un-retained through the StoreAll REST API. See the
  ibrix_reten_adm command or the equivalent Management Console actions for administrative
  override methods to un-retain a file.
Example: Set a file to WORM without specifying retention expiration
curl -g -X PUT
"https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=system::worm='true'"
In this example, no retention expiration date/time is provided, but the file state is changed to
WORM.
As part of processing this command, the file may also be set to the retained state. This will occur
if the atime has already been set into the future, or if the file system’s default retention period is
non-zero. The retention expiration time will be set to the atime (if in the future) or the default.
Example: Set a file to WORM and retained with a retention expiration date/time
curl -g -X PUT "https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=
system::retentionExpirationTime=1376356584,system::worm='true'"
In this example, the file state is changed to WORM and retained. The retention expiration date/time
is set to 13 Aug 2013 01:16:24. The file system default retention period is ignored.
Example: Set/change the retention expiration date/time without a WORM state transition
curl -g -X PUT "https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=
system::retentionExpirationTime=1376356584"
In this example, a file’s retention expiration date/time is assigned, but no state transition to WORM
is performed.
If the file is not already retained, the atime is assigned this value, and it remains un-retained. But
the value will take effect if the file is ever transitioned to WORM in the future, either manually or
by autocommit. If the file is already retained, the retention expiration date/time will be changed
to this new value. If retention settings prohibit this, an error is returned.
HTTP Status Codes
The following HTTP status codes can be returned by the StoreAll REST API. For error status codes,
check the following files for further information about the error:
•
access_log
•
error_log
The logs are in the following directory, on the Active FM server node of the cluster:
/usr/local/ibrix/httpd/debug/logs
By default, there is no activity written to the access_log file. To enable the HTTP Server to write
entries to the file for every HTTP access from a client, uncomment this line in the file /usr/local/
ibrix/httpd/conf/httpd.conf:
# CustomLog "debug/logs/access_log" common
Be aware that this log file can grow quickly from client HTTP accesses. Manage the size of this file
so that it does not fill up the local root file system. Enable it only when needed to diagnose HTTP
traffic.
200 (OK)
  If no errors are encountered, the status code 200 is returned.
204 (No Content)
  If no errors are encountered and there is no content to be returned that meets the StoreAll REST
  API query conditions/restrictions, the status code 204 is returned in the message header.
400 (Bad Request)
  If the URL parser in the StoreAll REST API detects an error in the URL it receives, it returns a
  400 error. See the access and error logs for details.
404 (Not Found)
  If the path and filename in the URL do not exist and the request is not a PUT (upload) of a new
  file, the StoreAll REST API returns a 404 error.
500 (Internal Server Error)
  If the StoreAll REST API encounters an error other than those described previously, it returns a
  500 error. See the access and error logs for details.
12 Managing SSL certificates
Servers accepting FTPS and HTTPS connections typically provide an SSL certificate that verifies the
identity and owner of the web site being accessed. You can add your existing certificates to the
cluster, enabling file serving nodes to present the appropriate certificate to FTPS and HTTPS clients.
StoreAll software supports PEM certificates.
When you configure the FTP share or the HTTP vhost, select the appropriate certificate.
You can manage certificates from the GUI or the CLI. On the GUI, select Certificates from the
Navigator to open the Certificates panel. The Certificate Summary shows the parameters for the
selected certificate.
Creating an SSL certificate
Before creating a certificate, OpenSSL must be installed and must be included in your PATH variable
(in RHEL5, the path is /usr/bin/openssl).
There are two parts to a certificate: the certificate contents (specified in a .crt file) and a private
key (specified in a .key file). Certificates added to the cluster must meet these requirements:
•
The certificate contents (the .crt file) and the private key (the .key file) must be concatenated
into a single file.
•
The concatenated certificate file must include the headers and footers from the .crt and
.key files.
•
The concatenated certificate file cannot contain any extra spaces.
Before creating a real certificate, you can create a self-signed SSL certificate and test access with
it. Complete the following steps to create a test certificate that meets the requirements for use in a
StoreAll cluster:
1. Generate a private key:
   openssl genrsa -des3 -out server.key 1024
   You will be prompted to enter a passphrase. Be sure to remember the passphrase.
2. Remove the passphrase from the private key file (server.key). When you are prompted for
   a passphrase, enter the passphrase you specified in step 1.
   cp server.key server.key.org
   openssl rsa -in server.key.org -out server.key
   rm -f server.key.org
3. Generate a Certificate Signing Request (CSR):
   openssl req -new -key server.key -out server.csr
4. Self-sign the CSR:
   openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
5. Concatenate the signed certificate and the private key:
   cat server.crt server.key > server.pem
When adding a certificate to the cluster, use the concatenated file (server.pem in our example)
as the input for the GUI or CLI.
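For testing, the five steps can also be collapsed into a single non-interactive run that skips the
passphrase prompts by generating the key with -nodes; this is a sketch using standard openssl req
options, with an arbitrary example subject:

```shell
# One-shot self-signed test certificate plus the concatenated PEM file
# the cluster expects (certificate contents followed by private key).
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/C=US/ST=Berkshire/L=Newbury/O=abc/CN=abc' \
    -keyout "$workdir/server.key" -out "$workdir/server.crt" 2>/dev/null
cat "$workdir/server.crt" "$workdir/server.key" > "$workdir/server.pem"
grep -c -- '-----BEGIN' "$workdir/server.pem"
```

The grep count of 2 confirms that both the certificate and the private key blocks, with their headers,
made it into the concatenated file.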
The following example shows a valid PEM encoded certificate that includes the certificate contents,
the private key, and the headers and footers:
-----BEGIN CERTIFICATE-----
MIICUTCCAboCCQCIHW1FwFn2ADANBgkqhkiG9w0BAQUFADBtMQswCQYDVQQGEwJV
UzESMBAGA1UECBMJQmVya3NoaXJlMRAwDgYDVQQHEwdOZXdidXJ5MQwwCgYDVQQK
EwNhYmMxDDAKBgNVBAMTA2FiYzEcMBoGCSqGSIb3DQEJARYNYWRtaW5AYWJjLmNv
bTAeFw0xMDEyMTEwNDQ0MDdaFw0xMTEyMTEwNDQ0MDdaMG0xCzAJBgNVBAYTAlVT
MRIwEAYDVQQIEwlCZXJrc2hpcmUxEDAOBgNVBAcTB05ld2J1cnkxDDAKBgNVBAoT
A2FiYzEMMAoGA1UEAxMDYWJjMRwwGgYJKoZIhvcNAQkBFg1hZG1pbkBhYmMuY29t
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDdrjHH/W93X7afTIUOrllCHw21
u31tinMDBZzi+R18r9SZ/muuyvG4kJCbOoQnohuir/s4aAEULAOnf4mvqLfZlkBe
25HgT+ImshLzyHqPImuxTEXvjG5H1sEDLNuQkHvl8hF9Wxao1tv4eL8TL5KqK1W6
8juMVAw2cFDHxji2GQIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAKvYJK8RXKMObCKk
ae6oJ36FEkdl/ACHCw0Nxk/VMR4dv9lIk8Dv8sdYUUqHkNAME2yOaRI190c5bWSa
MjhSjOOqUmmgmeDYlAu+ps3/1Fte5yl4ZV8VCu7bHCWx2OSy46Po03MMOu99JXrB
/GCKE8fO8Fhyq/7LjFDR5GeghmSw
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXgIBAAKBgQDdrjHH/W93X7afTIUOrllCHw21u31tinMDBZzi+R18r9SZ/muu
yvG4kJCbOoQnohuir/s4aAEULAOnf4mvqLfZlkBe25HgT+ImshLzyHqPImuxTEXv
jG5H1sEDLNuQkHvl8hF9Wxao1tv4eL8TL5KqK1W68juMVAw2cFDHxji2GQIDAQAB
AoGBAMXPWryKeZyb2+np7hFbompOK32vAA1vLZHUwFoI0Tch7yQ60vv2PBvlZCQf
4y06ik5xmkqLA+tsGxarx8DnXKUy0PHJ3hu6mTocIJdqqN0n+KO4tG2dvDPdSE7l
phX2sY9MVt4X/QN3eNb/F3cHjnM9BYEr0BY3mTkKXz61jzABAkEA+M3PProYwvS6
P8m4DenZh6ehsu4u/ycjmW/ujdp/PcRd5HBAWJasTXTezF5msugHnnNBe8F1i1q4
9PfL0C+kuQJBAOQXjrmPZxDc8YA/V45MUKv4eHHN0E03p84budtblHQ70BCLaO41
n267t3DrZfW+VtsVDVBMja4UhoBasgv3rGECQQCILDR6k2YMBd+OG/xleRD6ww+o
G96S/bvpNa7t6qFrj/cHmTxOgCDLv+RVHHG/B2lsGo7Dig2oeL30LU9aoUjZAkBV
KSqDw7PyitusS3oQShQQsTufGf385pvDi3yQFxhNcYuUschisCivumyaP3mZEBDz
yV9oLLz1UvqI79PsPfPhAkEAxSqebd1Ymqr2wi0RnKTmHfDCb3yWLPi57kc+lgrK
LUlxawhTzDwzTWJ9m4gQqRlAaXoIElfk6ITwW0g9Th5Ouw==
-----END RSA PRIVATE KEY-----
NOTE: When you are ready to create a real SSL certificate, consult the following site for a
description of the procedure:
http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcer
Adding a certificate to the cluster
To add an existing certificate to the cluster, click Add on the Certificates panel. On the Add
Certificate dialog box, enter a name for the certificate. Use a Linux command such as cat to
display your concatenated certificate file. For example:
cat server.pem
Copy the contents of the file to the Certificate Content section of the dialog box. The copied text
must include the certificate contents and the private key in PEM encoding. It must also include the
proper headers and footers, and cannot contain any extra spaces.
NOTE:
You can add only one certificate at a time.
The certificate is saved on all file serving nodes in the directory /usr/local/ibrix/pki.
To add a certificate from the CLI, use the following command.
ibrix_certificate -a -c CERTNAME -p CERTPATH
For example:
# ibrix_certificate -a -c mycert -p server.pem
Run the command from the active Fusion Manager. To add a certificate for a different node, copy
that certificate to the active Fusion Manager and then add it to the cluster. For example, if node
ib87 is hosting the active Fusion Manager and you have generated a certificate for node ib86,
copy the certificate to ib87:
scp server.pem ib87:/tmp
Then, on node ib87, add the certificate to the cluster:
ibrix_certificate -a -c cert86 -p /tmp/server.pem
176
Managing SSL certificates
Exporting a certificate
If necessary, you can display a certificate and then copy and save the contents for future use. This
step is called exporting. Select the certificate on the Certificates panel and click Export.
To export a certificate from the CLI, use this command:
ibrix_certificate -e -c CERTNAME
Deleting a certificate
To delete a certificate from the GUI, select the certificate on the Certificates panel, click Delete,
and confirm the operation.
To delete a certificate from the CLI, use this command:
ibrix_certificate -d -c CERTNAME
13 Using remote replication
This chapter describes how to configure and manage the Continuous Remote Replication (CRR)
service.
NOTE: Keep in mind that when you set up CRR, the Express Query database is not replicated.
You must set up periodic exports as described in “Metadata and continuous remote replication”
(page 221).
Overview
The CRR service provides a method to replicate changes in a source file system on one cluster to
a target file system on either the same cluster (intra-cluster replication) or a second cluster (inter-cluster
replication). Both files and directories are replicated with remote replication, and no special
configuration of segments is needed. A remote replication task includes the initial synchronization
of the source and target file systems.
When selecting file systems for remote replication, you should be aware of the following:
• One, multiple, or all file systems in a single cluster can be replicated.
• Remote replication is a one-way process. Bidirectional replication of a single file system is not supported.
• The mountpoint of the source file system can be different from the mountpoint on the target file system.
• The directory path /mnt/ibrix is reserved for use by CRR for internal operations. Do not use the /mnt/ibrix path for mounting any file systems, including ibrix. The CRR feature does not work properly if /mnt/ibrix is occupied by another file system mount.
Remote replication has minimal impact on these cluster operations:
• Cluster expansion (adding a new server) is allowed as usual on both the source and target.
• File systems can be exported over NFS, SMB, FTP, or HTTP.
• Source or target file systems can be rebalanced while a remote replication job is in progress.
• File system policies (ibrix_fs_tune) can be set on both the source and target without any restrictions.
The Fusion Manager initializes remote replication. However, each file serving node runs its own
replication and synchronization processes, independent of and in parallel with other file serving
nodes. The individual daemons running on the file serving nodes perform the actual file system
replication.
The source-side Fusion Manager monitors the replication and reports errors, failures, and so on.
Continuous or run-once replication modes
CRR can be used in two modes: continuous or run-once.
Continuous replication. This method tracks changes on the source file system and continuously
replicates these changes to the target file system. The changes are tracked for the entire file system
and are replicated in parallel by each file serving node. There is no strict order to replication at
either the file system or segment level. The continuous remote replication program tries to replicate
on a first-in, first-out basis.
When you configure continuous remote replication, you must specify a file system as the source.
(A source directory cannot be specified.) File systems specified as the replication source or target
must already exist. The replication starts at the root of the source file system (the mount point).
Run-once replication. This method replicates a single directory sub-tree or an entire file system from
the source file system to the target file system. Run-once is a single-pass replication of all files and
subdirectories within the specified directory or file system. All changes that have occurred since
the last replication task are replicated from the source file system to the target file system. File
systems specified as the replication source or target must exist. If a directory is specified as the
replication source, the directory must exist on the source cluster under the specified source file
system.
NOTE: Run-once can also be used to replicate a single software snapshot. This must be done on
the GUI.
You can replicate to a remote cluster (an intercluster replication) or the same cluster (an intracluster
replication).
Using intercluster replications
Intercluster configurations can be continuous or run-once:
• Continuous: asynchronously replicates the initial state of a file system and any changes to it. Snapshots cannot be replicated.
• Run-once: replicates the current state of a file system, folder, or file system snapshot.
The examples in the configuration rules use three StoreAll clusters: C1, C2, and C3:
• C1 has two file systems, c1ifs1 and c1ifs2, mounted as /c1ifs1 and /c1ifs2.
• C2 has two file systems, c2ifs1 and c2ifs2, mounted as /c2ifs1 and /c2ifs2.
• C3 has two file systems, c3ifs1 and c3ifs2, mounted as /c3ifs1 and /c3ifs2.
In the examples, <cluster name>:<target path> designates a replication target such as C1:/c1ifs1/target1.
The following rules apply to intercluster replications:
• Remote replication is not supported between 6.1.x and 6.2 clusters in either direction if Express Query is enabled on the 6.2 cluster.
• Only one continuous Remote Replication task can run per file system. It must replicate from the root of the file system; you cannot continuously replicate a subdirectory of a file system.
• A continuous Remote Replication task can replicate to only one target cluster.
• Replication targets are directories in a StoreAll file system and can be:
◦ The root of a file system such as /c3ifs1.
◦ A subdirectory such as /c3ifs1/target1.
Targets must be explicitly exported using CRR commands to make them available to CRR replication tasks.
• A subdirectory created beneath a CRR export can be used as a target by a replication task without being explicitly exported in a separate operation. For example, if the exported target is /c3ifs1/target1, you can replicate to folder /c3ifs1/target1/subtarget1 if the folder already exists.
• Directories exported as targets cannot overlap. For example, if C1 is replicating /c1ifs1 to C2:/c2ifs1/target1, C3 cannot replicate /c3ifs1 to C2:/c2ifs1/target1/target2.
• A cluster can be a target for one replication task at the same time that it is replicating data to another cluster. For example, C1 can replicate /c1ifs1 to C2:/c2ifs1/target1 and C2 can replicate /c2ifs2 to C1:/c1ifs2/target2, with both replications occurring at the same time.
• A cluster can be a target for multiple replication tasks. For example, C1 can replicate /c1ifs1 to C3:/c3ifs1/target1 and C2 can replicate /c2ifs1 to C3:/c3ifs1/target2, with both replications occurring at the same time.
• Continuous Remote Replication tasks can be linked. For example:
◦ C1 replicates /c1ifs1 to C2:/c2ifs1/target1.
◦ C2 replicates /c2ifs1/target1 to C3:/c3ifs2/target2.
NOTE: If a different file system is used for the target, the linkage can go back to the original cluster.
• To replicate a directory or snapshot on a file system covered by continuous replication, first pause the continuous task and then initiate a run-once replication task.
For information about configuring intercluster replications, see “Configuring the target export for
replication to a remote cluster” (page 180).
Using intracluster replications
There are two forms of intracluster replication:
• The same cluster and a different file system. Configure either continuous or run-once replication. You will need to specify a target file system and optionally a target directory (the default is the root of the file system, or the mount point).
• The same cluster and the same file system. Configure run-once replication. You will need to specify a file system, a source directory, and a target directory. Be sure to specify two different, non-overlapping subdirectories as the source and target. For example, the following replication is not allowed:
From <fs_root>dir1 to <fs_root>dir1/dir2
However, the following replication is allowed:
From <fs_root>dir1 to <fs_root>dir3/dir4
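The non-overlap rule amounts to checking that neither directory is an ancestor of the other. The following Python sketch illustrates this; the helper is ours (not a StoreAll command) and assumes paths relative to the file system root:

```python
# Sketch of the same-cluster/same-file-system rule: the source and target
# directories must not overlap, i.e. neither may be a prefix of the other.
import posixpath

def overlaps(src, tgt):
    """True if one directory is the same as, or an ancestor of, the other."""
    a = posixpath.normpath(src).split("/")
    b = posixpath.normpath(tgt).split("/")
    shorter = min(len(a), len(b))
    return a[:shorter] == b[:shorter]

print(overlaps("dir1", "dir1/dir2"))  # True  -> replication not allowed
print(overlaps("dir1", "dir3/dir4"))  # False -> replication allowed
```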
File system snapshot replication
You can use the run-once replication mode to replicate a single file system snapshot. If a snapshot
replication is not explicitly configured, snapshots and all related metadata are ignored/filtered
out during remote replications. Replication is not supported for block snapshots.
Configuring the target export for replication to a remote cluster
Use the following procedure to configure a target export for remote replication. In this procedure,
target export refers to the target file system and directory (the default is the root of the file system)
exported for remote replication.
Planning considerations
When planning for StoreAll Continuous Replication, consider the following:
• All changes on the source are replicated to the target, including creation or deletion of files and directories, whether planned or accidental. File system snapshots can be used on the source and target clusters to protect against accidental file deletion.
• If you only change the attributes of a file or directory, only the attribute changes are replicated. This includes changes to extended attributes.
• If you make any updates to the data blocks of a previously replicated file, the entire file is replicated again, not just the changed blocks in the file.
Because of the way StoreAll replication works, it is important to understand how applications using the system will modify files, to avoid unexpectedly large amounts of data being replicated. Applications typically behave in one of the following ways:
• The application rarely changes files, so most files are replicated only once.
• The application completely replaces the old file when saving changes. Some applications create a local temporary copy of a file in memory or on disk while you are working on it. The application then overwrites the old version with the new version when saving changes. Because the whole file is new, it is a candidate for replication after updates, regardless of the replication technology used.
• The application updates ranges of blocks in the file or appends data to the file. This will cause the file to be replicated regardless of how much or how little data was changed.
NOTE: These steps are not required when configuring intracluster replication.
• Register source and destination clusters. The source and target clusters of a remote replication configuration must be registered with each other before remote replication tasks can be created.
• Create a target export. This step identifies the target file system and directory for replication and associates it with the source cluster. Before replication can take place, you must create a mapping between the source cluster and the target export that receives the replicated data. This mapping ensures that only the specified source cluster can write to the target export.
• Identify server assignments to use for remote replication. Select the servers and corresponding NICs to handle replication requests, or use the default assignments. The default server assignment is to use all servers that have the file system mounted.
NOTE: Do not add or change files on the target system outside of a replication operation. Doing
this can prevent replication from working properly.
Table 17 Configuring the target export for replication to a remote cluster
To run the steps...    See...
In the GUI             “GUI procedure” (page 181)
Using the CLI          “CLI procedure” (page 183)
GUI procedure
This procedure must be run from the target cluster, and is not required or applicable for intracluster
replication.
Select the file system on the GUI, and then select Remote Replication Exports from the lower
Navigator. On the Remote Replication Exports bottom panel, select Add. The Create Remote
Replication Export dialog box allows you to specify the target export for the replication. The mount
point of the file system is displayed as the default export path. You can add a directory to the
target export.
The Server Assignments section allows you to specify server assignments for the export. Check the
box adjacent to Server to use the default assignments. If you choose to assign particular servers
to handle replication requests, select those servers and then select the appropriate NICs.
If the remote cluster does not appear in the selection list for Export To (Cluster), you will need to
register the cluster. Select New to open the Add Remote Cluster dialog box and then enter the
requested information.
If the remote cluster is running an earlier version of StoreAll software, you will be asked to enter
the clustername for the remote cluster. This name appears on the Cluster Configuration page on
the GUI for the remote cluster.
The Remote Replication Exports panel lists the replication exports you created for the file system.
Expand Remote Replication Exports in the lower Navigator and select the export to see the
configured server assignments for the export. You can modify or remove the server assignments
and the export itself.
CLI procedure
NOTE: This procedure does not apply to intracluster replication.
Use the following commands to configure the target file system for remote replication:
1. Register the source and target clusters with each other using the ibrix_cluster -r
command if needed. To list the known remote clusters, run ibrix_cluster -l on the source
cluster.
2. Create the export on the target cluster. Identify the target export and associate it with the
source cluster using the ibrix_crr_export command.
3. Identify server assignments for the replication export using the ibrix_crr_nic command.
The default assignments are:
• Use all servers that have the file system mounted.
• Use the cluster NIC on each server.
Registering source and target clusters
Run the following command on both the target cluster and the source cluster to register the clusters
with each other. It is necessary to run the command only once per source or target.
ibrix_cluster -r -C CLUSTERNAME -H REMOTE_FM_HOST
CLUSTERNAME is the name of the Fusion Manager for a cluster.
For the -H option, enter the name or IP address of the host where the remote cluster's Fusion
Manager is running. For high availability, use the virtual IP address of the Fusion Manager.
To list clusters registered with the local cluster, use the following command:
ibrix_cluster -l
To unregister a remote replication cluster, use the following command:
ibrix_cluster -d -C CLUSTERNAME
Creating the target export
To create a mapping between the source cluster and the target export that receives the replicated
data, execute the following command on the target cluster:
ibrix_crr_export -f FSNAME [-p DIRECTORY] -C SOURCE_CLUSTER [-P]
FSNAME is the target file system to be exported. The -p option exports a directory located under
the root of the specified file system (the default is the root of the file system). The -C option specifies
the source cluster containing the file system to be replicated.
Include the -P option if you do not want this command to set the server assignments. You will then
need to identify the server assignments manually with ibrix_crr_nic, as described in the next
section.
To list the current remote replication exports, use the following command on the target cluster:
ibrix_crr_export -l
To unexport a file system for remote replication, use the following command:
ibrix_crr_export -U -f TARGET_FSNAME [-p DIRECTORY]
Identifying server assignments for remote replication
To identify the servers that will handle replication requests and, optionally, a NIC for replication
traffic, use the following command:
ibrix_crr_nic -a -f FSNAME [-p directory] -h HOSTLIST [-n IBRIX_NIC]
When specifying resources, note the following:
• Specify servers by their host name or IP address (use commas to separate the names or IP addresses). A host is any server on the target cluster that has the target file system mounted.
• Specify the network using the StoreAll software network name (NIC). Enter a valid user NIC or the cluster NIC. The NIC assignment is optional. If it is not specified, the host name (or IP) is used to determine the network.
• A previous server assignment for the same export must not exist, or must be removed before a new assignment is created.
The listed servers receive remote replication data over the specified NIC. To increase capacity,
you can expand the number of preferred servers by executing this command again with another
list of servers.
You can also use the ibrix_crr_nic command for the following tasks:
• Restore the default server assignments for remote replication:
ibrix_crr_nic -D -f FSNAME [-p directory]
• View server assignments for remote replication. The output lists the target exports and associated server assignments on this cluster. The assigned servers and NIC are listed with a corresponding ID number that can be used in commands to remove assignments.
ibrix_crr_nic -l
• Remove a server assignment:
ibrix_crr_nic -r -P ASSIGNMENT_ID1[,...,ASSIGNMENT_IDn]
To obtain the ID for a particular server, use ibrix_crr_nic -l.
Configuring and managing replication tasks on the GUI
NOTE: When configuring replication tasks, be sure to follow the guidelines described in
“Overview” (page 178).
Viewing replication tasks
To view replication tasks for a particular file system, select that file system on the GUI and then
select Active Tasks > Remote Replication in the lower Navigator. The Remote Replication Tasks
bottom panel lists any replication tasks currently running or paused on the file system.
You can use CRR health reports to check the status of CRR activities on the source and target cluster.
To see a list of health reports for active replication tasks, click List Report on the Remote Replication
Tasks panel.
Select a report from the CRR Health Reports dialog box and click OK to see details about that
replication task.
If the health check finds an issue in the CRR operation, it generates a critical event.
Reports are generated on the source cluster. If the target cluster is running a version of StoreAll
software earlier than 6.2, only the network connectivity check is performed.
It takes approximately two minutes to generate a CRR health report. Reports are updated every
10 minutes. Only the last five CRR health reports are preserved.
On the CLI, use the following commands to view reports:
List reports:
ibrix_crrhealth -l
Show details for a report:
ibrix_crrhealth -i -n REPORTNAME
To see other reports for a specific task, expand Active Tasks > Remote Replication and then select
the task (crr-25 in the following example). Select Overall Status to see a status summary.
Select Server Tasks to display the state of the task and other information for the servers where the
task is running.
Starting a replication task
To start a replication task, click New on the Remote Replication Tasks panel and then use the New
Replication Task Wizard to configure the replication.
Replication Settings dialog box
Define the replication method on the Replication Settings dialog box.
Source Settings dialog box for continuous replications
For continuous replications, the Source Settings dialog box lists the file system selected on the
Filesystems panel. Specify a comma-separated list of file and directory exclude patterns in the
Exclude patterns text box. You can specify at most 16 patterns.
Source Settings dialog box for run-once replications
For a run-once replication of data other than a snapshot, specify the source directory on the Source
Settings dialog box. Specify a comma-separated list of file and directory exclude patterns in the
Exclude patterns text box. You can specify at most 16 patterns.
If you are replicating a snapshot, click Use a snapshot and then select the appropriate Snap Tree
and snapshot.
Target Settings dialog box
For replications to a remote cluster, select the target cluster on the Target Settings dialog box. This
cluster must already be registered as a target export. If the remote cluster is not in the Target Cluster
selection list, select New to open the Add Remote Cluster dialog box and register the cluster as a
target export. (See “Configuring the target export for replication to a remote cluster” (page 180)
for more information.) Then enter the target file system. Optionally, you can also specify a target
directory in the file system.
For replications to the same cluster and different file system, the Target Settings dialog box asks
for the target file system. Optionally, you can also specify a target directory in the file system.
For replications to the same cluster and file system, the Target Settings dialog box asks only for
the target directory. This field is required.
Specifying a target directory
Specifying a target directory is optional for remote cluster and same cluster/different file system
replications. It is required for same cluster/same file system replications. For example, you could
configure the following replication, which does not include a target directory:
• Source directory: /srcFS/a/b/c
• Exported file system and directory on target: /destFS/1/2/3
The contents of /srcFS/a/b/c are replicated to /destFS/1/2/3/{contents_under_c}.
If you also specify the target directory a/b/c, the replication goes to /destFS/1/2/3/a/b/c/{contents_under_c}.
IMPORTANT: If you specify a target directory, be sure that it does not overlap with a previous
replication using the same target export.
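The way the destination path is composed from the exported target and the optional target directory can be sketched as follows. The helper name is illustrative only, and POSIX-style paths are assumed:

```python
# Sketch: the final destination is the exported target path, optionally
# extended by the target directory; the contents of the source directory
# then land under that destination.
import posixpath

def destination(export_path, target_dir=None):
    """Compose the destination path for a replication, per the example above."""
    return posixpath.join(export_path, target_dir) if target_dir else export_path

print(destination("/destFS/1/2/3"))            # /destFS/1/2/3
print(destination("/destFS/1/2/3", "a/b/c"))   # /destFS/1/2/3/a/b/c
```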
Pausing or resuming a replication task
To pause a task, select it on the Remote Replication Tasks panel and click Pause. When you pause
a task, the status changes to PAUSED. Pausing a task that involves continuous data capture does
not stop the data capture. You must allocate space on the disk to avoid running out of space
because the data is captured but not moved. To resume a paused replication task, select the task
and click Resume. The status of the task then changes to RUNNING and the task continues from the
point where it was paused.
Stopping a replication task
To stop a task, select that task on the Remote Replication Tasks panel and click Stop. To view
stopped tasks, select Inactive Tasks from the lower Navigator. You can delete one or more tasks,
or see detailed information about the selected task.
Configuring and managing replication tasks from the CLI
NOTE: When configuring replication tasks, be sure to follow the guidelines described in
“Overview” (page 178).
Starting a remote replication task to a remote cluster
Use the following command to start a continuous or run-once replication task to a remote cluster.
The command is executed from the source cluster.
ibrix_crr -s -f SRC_FSNAME [-o] [-S SRCDIR] -C TGT_CLUSTERNAME -F TGT_FSNAME [-X TGTEXPORT] [-P TGTDIR] [-R]
[-e EXCLUDE_PATTERNS]
Use the -s option to start a replication task. The applicable options are:
-f SRC_FSNAME        The source file system to be replicated.
-C TGT_CLUSTERNAME   The remote target cluster.
-F TGT_FSNAME        The remote target file system.
-X TGTEXPORT         The remote replication target (exported directory). The default is the root of the file system. NOTE: This option is used only for replication to a remote cluster. The file system specified with -F and the directory specified with -X must both be exported from the target cluster (target export).
-P TGTDIR            A directory under the remote replication target export (optional). This directory must exist on the target, but does not need to be exported.
-R                   Bypass retention compatibility checking.
-e EXCLUDE_PATTERNS  A comma-separated list of file and directory patterns to exclude during replication. Enter up to 16 patterns per task and enclose the list in double quotes. To exclude a directory and all of its contents, use the pattern dir_name/***. To exclude files that start with a pattern, use pattern*; to exclude files that end with a pattern, use *pattern. For example, to exclude text files, use *.txt.
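The exclude-pattern forms behave roughly as shell-style wildcards. The following Python sketch approximates the matching for illustration only (the actual CRR matcher may differ in corner cases):

```python
# Illustrative sketch of the exclude-pattern forms described above:
#   dir_name/***  excludes a directory and everything under it
#   pattern*      excludes files whose names start with "pattern"
#   *pattern      excludes files whose names end with "pattern"
from fnmatch import fnmatch

def excluded(path, patterns):
    """True if the (file-system-relative) path matches any exclude pattern."""
    for pat in patterns:
        if pat.endswith("/***"):
            base = pat[:-4]
            # the directory itself and everything beneath it
            if path == base or path.startswith(base + "/"):
                return True
        elif fnmatch(path.rsplit("/", 1)[-1], pat):
            return True  # match against the final path component
    return False

print(excluded("logs/app.txt", ["*.txt"]))     # True
print(excluded("tmp/scratch/f", ["tmp/***"]))  # True
print(excluded("data/file.bin", ["*.txt"]))    # False
```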
Omit the -o option to start a continuous replication task. A continuous replication task does an
initial full synchronization and then continues to replicate any new changes made on the source.
Continuous replication tasks continue to run until you stop them manually. Use the -o option for
run-once tasks. This option synchronizes single directories or entire file systems on the source and
target in a single pass. If you do not specify a source directory with the -S option, the replication
starts at the root of the file system. The run-once job terminates after the replication is complete;
however, the job can be stopped manually, if necessary.
Use -P to specify an optional target directory under the target export. For example, you could configure the following replication, which does not include the optional target directory:
• Source directory: /srcFS/a/b/c
• Exported file system and directory on target: /destFS/1/2/3
The replication command is:
ibrix_crr -s -o -f srcFs -S a/b/c -C tcluster -F destFs -X 1/2/3
The contents of /srcFS/a/b/c are replicated to /destFS/1/2/3/{contents_under_c}.
When the same command includes the -P option to specify the target directory a/b/c:
ibrix_crr -s -o -f srcFs -S a/b/c -C tcluster -F destFs -X 1/2/3 -P a/b/c
The replication now goes to /destFS/1/2/3/a/b/c/{contents_under_c}.
Starting an intracluster remote replication task
Use the following command to start a continuous or run-once intracluster replication task for the
specified file system:
ibrix_crr -s -f SRC_FSNAME [-o [-S SRCDIR]] -F TGT_FSNAME [-P TGTDIR] [-e EXCLUDE_PATTERNS]
The -F option specifies the name of the target file system (the default is the same as the source file
system). The -P option specifies the target directory under the target file system (the default is the
root of the file system).
Use the -o option to start a run-once task. The -S option specifies a directory under the source
file system to synchronize with the target directory.
Starting a run-once directory replication task
Use the following command to start a run-once directory replication for file system SRC_FSNAME.
The -S option specifies the directory under the source file system to synchronize with the target
directory. The -P option specifies the target directory.
ibrix_crr -s -f SRC_FSNAME -o -S SRCDIR -P TGTDIR [-e EXCLUDE_PATTERNS]
Stopping a remote replication task
Use the following command to stop a continuous or run-once replication task. Use the ibrix_task
-l command to obtain the appropriate ID.
ibrix_crr -k -n TASKID
The stopped replication task is moved to the inactive task list. Use ibrix_task -l -c to view
the inactive task list.
To forcefully stop a replication task, use the following command:
ibrix_crr -k -n TASKID
The stopped task is removed from the list of inactive tasks.
Pausing a remote replication task
Use the following command to pause a continuous replication or run-once replication task with the
specified task ID. Use the ibrix_task -l command to obtain the appropriate ID.
ibrix_crr -p -n TASKID
Resuming a remote replication task
Use the following command to resume a continuous or run-once replication task with the specified
task ID. Use the ibrix_task -l command to obtain the appropriate ID.
ibrix_crr -r -n TASKID
Querying remote replication tasks
Use the following command to list all active replication tasks in the cluster, optionally restricted by
the specified file system and servers.
ibrix_crr -l [-f SRC_FSNAME] [-h HOSTNAME] [-C SRC_CLUSTERNAME]
To see more detailed information, run ibrix_crr with the -i option. The display shows the status
of tasks on each node, as well as task summary statistics (number of files in the queue, number of
files processed). The query also indicates whether scanning is in progress on a given server and
lists any error conditions.
ibrix_crr -i [-f SRC_FSNAME] [-h HOSTNAME] [-C SRC_CLUSTERNAME]
The following command prints detailed information about replication tasks matching the specified
task IDs. Use the -h option to limit the output to the specified server.
ibrix_crr -i -n TASKIDS [-h HOSTNAME] [-C SRC_CLUSTERNAME]
Replicating WORM/retained files
When using remote replication for file systems enabled for data retention, the following requirements
must be met:
• The source and target file systems must use the same data retention mode (Enterprise or Relaxed).
• The default, maximum, and minimum retention periods must be the same on the source and target file systems.
• A clock synchronization tool such as ntpd must be used on the source and target clusters. If the clock times are not in sync, file retention periods might not be handled correctly.
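The compatibility rules above can be sketched as a simple settings comparison. The dictionary layout and function name below are our own illustration, not a StoreAll data structure or command:

```python
# Sketch: source and target must agree on retention mode and on the default,
# minimum, and maximum retention periods before replication is started.
def retention_compatible(src, tgt):
    """True if every retention setting matches between source and target."""
    keys = ("mode", "default_period", "min_period", "max_period")
    return all(src[k] == tgt[k] for k in keys)

src = {"mode": "Enterprise", "default_period": 365,
       "min_period": 30, "max_period": 3650}
tgt = dict(src)
print(retention_compatible(src, tgt))                         # True
print(retention_compatible(src, {**tgt, "mode": "Relaxed"}))  # False
```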
Also note the following:
• Multiple hard links on retained files on the replication source are not replicated. Only the first hard link encountered by remote replication is replicated, and any additional hard links are not replicated. (The retainability attributes on the file on the target prevent the creation of any additional hard links.) For this reason, HP strongly recommends that you do not create hard links on files that will be retained if you wish to replicate them.
• For continuous remote replication, if a file is replicated as retained, but later its retainability is removed on the source file system (using the ibrix_reten_adm -c command or the File Administration panel on the Management Console), the new file’s attributes and any additional changes to that file will fail to replicate. This is because of the retainability attributes that the file already has on the target, which cause the file system on the target to prevent remote replication from changing it. If necessary, use data retention management commands on the corresponding file on the target to make the same changes.
• When a legal hold is applied to a file (using the ibrix_reten_adm -h command or the File Administration panel on the Management Console), the legal hold is not replicated on the target. If the file on the target should have a legal hold, you will also need to set the legal hold on that file. Likewise, you will need to release the legal hold on the source and target files separately.
• If a file has been replicated to a target and you then change the file's retention expiration time with the ibrix_reten_adm -e command or the File Administration panel on the Management Console, the new expiration time is not replicated to the target. If necessary, also change the file's retention expiration time on the target.
Configuring remote failover/failback
When remote replication is configured from a local cluster to a remote cluster, you can fail over
the local cluster to the remote cluster:
1. Stop write traffic to the local site.
2. Wait for all remote replication queues to drain.
3. Stop remote replication on the local site.
4. Reconfigure shares as necessary on the remote site. The cluster name and IP addresses (or
VIFs) are different on the remote site, and changes are needed to allow clients to continue to
access shares.
5. Redirect write traffic to the remote site.
When the local cluster is healthy again, take the following steps to perform a failback from the
remote site:
1. Stop write traffic to the remote site.
2. Set up Run-Once remote replication, with the remote site acting as the source and the local
site acting as the destination.
3. When the Run-Once replication is complete, restore shares to their original configuration on the local site, and verify that clients can access the shares.
4. Redirect write traffic to the local site.
Understanding the ibrcfrworker log file (ibrcfrworker.log)
The format used by ibrcfrworker to log messages is "%t,<%p>,%i,%n%L".
In this instance:
•	%t is the date and time the file/directory was replicated.
•	%p represents the PID of the ibrcfrworker process.
•	%n is the name of the file/directory being replicated.
•	%L represents the string "-> SYMLINK", " => HARDLINK", or "" (where SYMLINK or HARDLINK is a filename).
•	%i is an 11-character string. The general format is YXcstpogaux.
In this instance:
◦	Y is replaced by the type of update being done. The update types that replace Y are as follows:
	–	< indicates a file is being transferred to the remote host (sent).
	–	> indicates a file is being transferred to the local host (received).
	–	c indicates a local change/creation is occurring for the item (such as the creation of a directory or the change of a symlink).
	–	h indicates the item is a hard link to another item (requires --hard-links).
	–	. indicates the item is not being updated (though it might have attributes that are being modified).
◦	X is replaced by the file type. The file types that replace X are as follows:
	–	f is for a regular file.
	–	d is for a directory.
	–	L is for a symbolic link.
	–	D is for a device.
	–	S is for a special file (for example, named sockets and FIFOs).
◦	The other letters in the %i string are the actual letters that are output if the associated attribute for the file is being updated, or a dot (.) for no change. Three exceptions to this are the following:
	–	A newly created item replaces each letter with a plus sign (+).
	–	An identical item replaces the dots with spaces.
	–	An unknown attribute replaces each letter with a question mark (?). This situation can happen when talking to an older version of ibrcfrworker.
The attribute that is associated with each letter is as follows:
	–	c means the checksum of the file is different and will be updated by the file transfer (requires --checksum and is not used in StoreAll version 6.0 or later).
	–	s means the size of the file is different and will be updated by the file transfer.
	–	t means the modification time is different and is being updated to the sender’s value (requires --times). An alternate value of T means that the time will be set to the transfer time (without --times).
	–	p means the permissions are different and are being updated to the sender’s value (requires --permissions).
	–	o means the owner is different and is being updated to the sender’s value (requires --owner and super-user privileges).
	–	g means the group is different and is being updated to the sender’s value (requires --group and the authority to set the group).
	–	u means the atime is different and is being updated to the sender’s value.
	–	a means the CIFS ACL is different and is being updated to the sender’s value.
	–	x means the POSIX extended attributes are different and are being updated to the sender’s values.
Another possible output for %i occurs when files are being deleted: %i is then the string "*deleting" for each item being removed.
Sample Output of the ibrcfrworker log
(2012/11/19 15:50:40),<jobid=5>,<4449>,.d..t......,./
(2012/11/19 15:50:40),<jobid=5>,<4449>,cd+++++++++,new_dir/
(2012/11/19 15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo1.txt
(2012/11/19 15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo10.txt
(2012/11/19 15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo100.txt
(2012/11/19 15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo11.txt
(2012/11/19 15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo12.txt
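As an illustration only, the first two positions (Y and X) of an %i string can be decoded with a small shell helper. The function name decode_i is hypothetical and is not part of StoreAll; it simply restates the mapping described above:

```shell
# Hypothetical helper (not a StoreAll tool): decode the Y (update type)
# and X (file type) positions of an %i string.
decode_i() {
  local s="$1" verb ftype
  case "${s:0:1}" in
    '<') verb="sent" ;;
    '>') verb="received" ;;
    c)   verb="local create/change" ;;
    h)   verb="hard link" ;;
    .)   verb="no update" ;;
    *)   verb="unknown" ;;
  esac
  case "${s:1:1}" in
    f) ftype="regular file" ;;
    d) ftype="directory" ;;
    L) ftype="symbolic link" ;;
    D) ftype="device" ;;
    S) ftype="special file" ;;
    *) ftype="unknown" ;;
  esac
  echo "$verb ($ftype)"
}

decode_i 'cd+++++++++'   # prints: local create/change (directory)
decode_i '<f+++++++++'   # prints: sent (regular file)
```

Applied to the sample rows above, cd+++++++++ is a newly created directory and <f+++++++++ is a newly created regular file sent to the remote host.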
Troubleshooting remote replication
Continuous remote replication fails when a private network is used
Continuous remote replication will fail if the configured cluster interface and the corresponding
cluster Virtual Interface (VIF) for the Fusion Manager are in a private network on either the source
or target cluster. By default, continuous remote replication uses the cluster interface and the Cluster
VIF (the ibrixinit -C and -v options, respectively) for communication between the source
cluster and the target cluster. To work around potential continuous remote replication communication
errors, it is important that the ibrixinit -C and -v arguments correspond to a public interface
and a public cluster VIF, respectively. If necessary, the ibrix_crr_nic command can be used
to change the server assignments (the server/NICs pairs that handle remote replication requests).
Truncating a file during replication
If a file is truncated while it is being replicated, the file is replicated successfully, but a Domain
scan failure message is displayed. Also, an error message is logged in the ibrcfrworker.log
file. The text of the message is similar to the following:
rsync: read errors mapping "/mnt/ibrix/filter/tasks/5/ifs1/test.dat": No data available (61)
rsync: read errors mapping "/ifs1/test.dat": No data available (61)
You can ignore these error messages. The file was replicated successfully.
14 Managing data retention
Data retention is intended for sites that need to archive read-only files for business purposes, and
ensures that files cannot be modified or deleted for a specific retention period. Data retention
includes the following optional features:
•	Data validation scans to ensure that files remain unchanged.
•	Data retention reports.
Overview
This section provides overview information for data retention and data validation scans.
Data retention
Data retention must be enabled on a file system. When you enable data retention, you can specify
a retention profile that includes minimum, maximum, and default retention periods that specify how
long a file must be retained.
WORM and WORM-retained files
The files in the file system can be in the following states:
•	Normal. The file is created read-only or read-write, and can be modified or deleted at any time. A checksum is not calculated for normal files and they are not managed by data retention.
•	Write-Once Read-Many (WORM). The file cannot be modified, but can be deleted at any time. A checksum is calculated for WORM files and they can be managed by data retention.
•	WORM-retained. A WORM file becomes WORM-retained when a retention period is applied to it. The file cannot be modified, and cannot be deleted until the retention period expires. A checksum is calculated for WORM-retained files and they can be managed by data retention.
NOTE: You can apply a legal hold to a WORM or WORM-retained file. The file then cannot be
deleted until the hold is released, even if the retention period has expired.
For WORM and WORM-retained files, the file's contents and the following file attributes cannot
be modified:
•	File name (the file cannot be renamed or moved)
•	User and group owners
•	File access permissions
•	File modification time
Also, no new hard links can be made to the file and the extended attributes cannot be added,
modified, or removed.
The following restrictions apply to directories in a file system enabled for data retention:
•	A directory cannot be moved or renamed unless it is empty (even if it contains only normal files).
•	You can delete directories containing only WORM and normal files, but you cannot delete directories containing retained files.
Data retention attributes for a file system
The data retention attributes configured on a file system are called a retention profile. The profile
includes the following:
Default retention period. If a specific retention period is not applied to a file, the file will be retained
for the default retention period. The setting for this period determines whether you can manage
WORM (non-retained) files as well as WORM-retained files:
•	To manage both WORM (non-retained) files and WORM-retained files, set the default retention period to zero. To make a file WORM-retained, you will need to set the atime to a date in the future.
•	To manage only WORM-retained files, set the default retention period to a non-zero value.
Minimum and maximum retention periods. Retained files cannot be deleted until their retention
period expires, regardless of the file system retention policy. You can set a specific retention period
for a file; however, it must be within the minimum and maximum retention periods associated with
the file system. If you set a time that is less than the minimum retention period, the expiration time
of the period will be adjusted to match the minimum retention period. Similarly, if the new retention
period exceeds the maximum retention period, the expiration time will be adjusted to match the
maximum retention period. If you do not set a retention period for a file, the default retention period
is used. If that default is zero, the file will not be retained.
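The adjustment described above can be sketched as a simple clamp. The function clamp_period is hypothetical and only illustrates the logic (StoreAll applies it internally); periods are given in seconds:

```shell
# Hypothetical sketch of the adjustment StoreAll applies: a requested
# retention period is clamped to the file system's minimum/maximum bounds.
clamp_period() {
  local requested=$1 min=$2 max=$3
  if [ "$requested" -lt "$min" ]; then
    echo "$min"          # adjusted up to the minimum retention period
  elif [ "$requested" -gt "$max" ]; then
    echo "$max"          # adjusted down to the maximum retention period
  else
    echo "$requested"
  fi
}

# Profile: minimum 1 day (86400 s), maximum 1 year (31536000 s)
clamp_period 3600 86400 31536000     # 1 hour requested -> prints 86400
clamp_period 604800 86400 31536000   # 1 week requested -> prints 604800
```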
Autocommit period. Files that are not changed during this period automatically become WORM
or WORM-retained when the period expires. (If the default retention period is set to zero, the files
become WORM. If the default retention period is set to a value greater than zero, the files become
WORM-retained.) The autocommit period is optional and should not be set if you want to keep
normal files in the file system.
IMPORTANT: For a file to become WORM, its ctime and mtime must be older than the
autocommit period for the file system. On Linux, ctime means any change to the file, either its
contents or any metadata such as owner, mode, times, and so on. The mtime is the last modified
time of the file's contents.
Retention mode. Controls how the expiration time for the retention period can be adjusted:
•	Enterprise mode. The expiration date of the retention period can only be extended to a later date.
•	Relaxed mode. The expiration date of the retention period can be moved to an earlier date or extended to a later date.
The autocommit and default retention periods determine the steps you will need to take to make a
file WORM or WORM-retained. See “Setting a normal file to WORM or WORM-retained” (page 202)
for more information.
Data validation scans
To ensure that WORM and retained files remain unchanged, it is important to run a data validation
scan periodically. Circumstances such as the following can cause a file to change unexpectedly:
•	System hardware errors, such as write errors
•	Degradation of on-disk data over time, which can change the stored bit values even if the data is never accessed
•	Malicious or accidental changes made by users
A data validation scan computes hash sum values for the WORM, WORM-retained, and
WORM-hold files in the scanned file system or subdirectory and compares them with the values
originally computed for the files. If the scan identifies changes in the values for a particular file,
an alert is generated on the Management Console. You can then replace the bad file with an
unchanged copy from an earlier backup or from a remote replication.
NOTE: Normal files are not validated.
The time required for a data scan depends on the number of files in the file system or subdirectory.
If there are a large number of files, the scan could take up to a few weeks to verify all content on
storage. A scheduled scan will quit immediately if it detects that a scan of the same file system is
already running.
You can schedule periodic data validation scans, and you can also run on-demand scans.
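The one-scan-per-file-system rule can be illustrated generically with flock(1). This is not how StoreAll implements the check; the lock file name here is arbitrary and the two subshells stand in for two scans of the same file system:

```shell
# Generic illustration of "quit immediately if a scan of the same file
# system is already running", using an advisory lock.
lock=$(mktemp)

# First "scan" takes the lock and holds it briefly in the background.
( flock -n 9 && sleep 1 ) 9>"$lock" &

sleep 0.2

# Second "scan" cannot take the lock and quits immediately.
( flock -n 9 && echo "scan started" || echo "scan already running" ) 9>"$lock"

wait
```

Run as shown, the second subshell prints "scan already running" because the first still holds the lock.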
Enabling file systems for data retention
You can enable a new or an existing file system for data retention and, optionally, other features
that require a retention-enabled file system, including validation, reporting, and Express Query.
When you enable a file system, you can define a retention profile that specifies the retention mode
and the default, minimum, and maximum retention periods.
New file systems
Enable data retention. The New Filesystem Wizard includes a WORM/Data Retention dialog box
that allows you to enable data retention and define a retention profile for the file system. You can
also enable and define schedules for data validation scans and data collection for reports.
The default retention period determines whether you can manage WORM (non-retained) files as
well as WORM-retained files. To manage only WORM-retained files, set the default retention
period. WORM-retained files then use this period by default; however, you can assign a different
retention period if desired.
To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention
Period, which sets the default retention period to 0 seconds. When you make a WORM file retained,
you will need to assign a retention period to the file.
The Set Auto-Commit Period option specifies that files will become WORM or WORM-retained if
they are not changed during the specified period. (If the default retention period is set to zero, the
files become WORM. If the default retention period is set to a value greater than zero, the files
become WORM-retained.) To use this feature, check Set Auto-Commit Period and specify the time
period. The minimum value for the autocommit period is five minutes, and the maximum value is
one year. If you plan to keep normal files on the file system, do not set the autocommit period.
Enable Data Validation. Check this option to schedule periodic scans on the file system. Use the
default schedule, or select Modify to open the Data Validation Scan Schedule dialog box and
configure your own schedule.
Enable Report Data Generation. Check this option to generate data retention reports. Use the
default schedule, or select Modify to open the Report Data Generation Schedule dialog box and
configure your own schedule.
Enable Express Query. Check this option to enable Express Query on the file system. See “Express
Query” (page 217), for details.
Enabling data retention from the CLI
You can also enable data retention when creating a new file system from the CLI. Use ibrix_fs
-c and include the following -o options:
-o "retenMode=<mode>,retenDefPeriod=<period>,retenMinPeriod=<period>,
retenMaxPeriod=<period>,retenAutoCommitPeriod=<period>"
The retenMode option is required and is either enterprise or relaxed. You can specify any,
all, or none of the period options. retenDefPeriod is the default retention period,
retenMinPeriod is the minimum retention period, and retenMaxPeriod is the maximum
retention period.
The retenAutoCommitPeriod option specifies that files will become WORM or WORM-retained
if they are not changed during the specified period. (If the default retention period is set to zero,
the files become WORM. If the default retention period is set to a value greater than zero, the files
become WORM-retained.) The minimum value for the autocommit period is five minutes, and the
maximum value is one year. If you plan to keep normal files on the file system, do not set the
autocommit period.
When using a period option, enter a decimal number, optionally followed by one of these characters:
•	s (seconds)
•	m (minutes)
•	h (hours)
•	d (days)
•	w (weeks)
•	M (months)
•	y (years)
If you do not include a character specifier, the decimal number is interpreted as seconds.
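For illustration only, the suffix convention can be expressed as a small converter. The function to_seconds is hypothetical, and the values assumed for M (30 days) and y (365 days) are assumptions for the sketch; the guide does not state how StoreAll normalizes months and years internally:

```shell
# Hypothetical converter for the period syntax above (integer values only).
# The M (30-day) and y (365-day) factors are assumptions, not documented
# StoreAll behavior.
to_seconds() {
  local v=$1 n unit
  n=${v%[smhdwMy]}       # numeric part
  unit=${v#"$n"}         # suffix character, empty if none
  case "$unit" in
    ''|s) echo "$n" ;;
    m) echo $((n * 60)) ;;
    h) echo $((n * 3600)) ;;
    d) echo $((n * 86400)) ;;
    w) echo $((n * 604800)) ;;
    M) echo $((n * 30 * 86400)) ;;
    y) echo $((n * 365 * 86400)) ;;
  esac
}

to_seconds 3d   # prints 259200
to_seconds 1h   # prints 3600
to_seconds 45   # prints 45 (no suffix: interpreted as seconds)
```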
The following example creates a file system with Enterprise mode retention, with a default retention
period of 1 month, a minimum retention period of 3 days, a maximum retention period of 5 years,
and an autocommit period of 1 hour:
ibrix_fs -o "retenMode=Enterprise,retenDefPeriod=1M,retenMinPeriod=3d,
retenMaxPeriod=5y,retenAutoCommitPeriod=1h" -c -f ifs1 -s ilv_[1-4] -a
Configuring data retention on existing file systems
NOTE: Data retention cannot be enabled on a file system created on StoreAll software 5.6 or
earlier versions. Instead, create a new file system on StoreAll software 6.0 or later, and then copy
or move files from the old file system to the new file system.
To enable or change the data retention or Express Query configuration on an existing file system, first
unmount the file system. Select Active Tasks > WORM/Data Retention from the lower Navigator,
and then click Modify on the WORM/Data Retention panel. You do not need to unmount the file
system to change the configuration for data validation or report data generation.
To enable data retention on an existing file system using the CLI, run this command:
ibrix_fs -W -f FSNAME -o "retenMode=<mode>,retenDefPeriod=<period>,retenMinPeriod=<period>,
retenMaxPeriod=<period>"
To enable data retention on an existing file system, created with StoreAll version 6.0 or earlier,
follow the steps to upgrade the file system as described in the HP StoreAll 9300/9320 Storage
Administrator Guide in the section, Upgrading the StoreAll software to the 6.2 release. Then,
configure the file system retention profile as described in the steps provided earlier in this section.
Viewing the retention profile for a file system
To view the retention profile for a file system, select the file system on the Management Console,
and then select WORM/Data Retention from the lower navigator. The WORM/Data retention
panel shows the retention profile.
To view the retention profile from the CLI, use the ibrix_fs -i command, as in the following
example:
ibrix_fs -i -f ifs1
FileSystem: ifs1
=========================
{ … }
RETENTION                : Enterprise [default=15d,mininum=1d,maximum=5y]
Changing the retention profile for a file system
The file system must be unmounted when you make changes to the retention profile. After unmounting
the file system, click Modify on the WORM/Data Retention panel to open the Modify WORM/Data
Retention dialog box and then make your changes.
To change the configuration from the CLI, use the following command:
ibrix_fs -W -f FSNAME -o "retenMode=<mode>,retenDefPeriod=<period>,retenMinPeriod=<period>,
retenMaxPeriod=<period>,retenAutoCommitPeriod=<period>"
Managing WORM and retained files
You can change a file to the WORM or WORM-retained state, view the retention information
associated with a file, and use administrative tools to manage individual files, including setting or
removing a legal hold, setting or removing a retention period, and administratively deleting a file.
Setting a normal file to WORM or WORM-retained
All files created or uploaded by the user to a file system are initially created as normal files. A
normal file can be set to WORM or WORM-retained after being created. The autocommit and
default retention periods determine the steps you will need to take.
Autocommit period is set and the default retention period is zero seconds:
•	Files remaining unchanged during the autocommit period automatically become WORM but are not retained and can be deleted. To make a WORM file retained, set the atime to a time in the future, either before or after the file becomes WORM.
Autocommit period is set and the default retention period is non-zero:
•	Files remaining unchanged during the autocommit period automatically become WORM-retained and use the default retention period. You can assign a different retention period to a file if necessary.
Autocommit period is not set and the default retention period is zero seconds:
•	To make a normal file WORM, run a command to set the file to read-only. If the file was created as read-only, the act of setting it to read-only will cause the file to become WORM, even though the permissions will not change.
•	To make a WORM file retained, set the atime to a time in the future, either before or after the file becomes WORM.
Autocommit period is not set and the default retention period is non-zero:
•	To make a normal file WORM-retained, run a command to set the file to read-only. If the file was created as read-only, the act of setting it to read-only will cause the file to become WORM, even though the permissions will not change. By default, the file uses the default retention period.
•	To assign a different retention period to the WORM-retained file, set the atime to a time in the future.
NOTE: If you are not using autocommit, files must explicitly be made read-only to make them
WORM or WORM-retained. Typically, you can configure your application to do this.
Making a file read-only
UNIX. For users logged on locally to StoreAll servers or for NFS users mounting a StoreAll file
system, use chmod to make the file read-only. For example:
chmod 444 myfile.txt
All of the write bits of the permissions must be cleared to set the file to WORM or WORM-retained,
but none of the other permissions bits, such as read and execute, need to be changed. To clear
all write bits without changing any other bits:
chmod a-w myfile.txt
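You can confirm on any Linux host that a-w clears only the write bits; the temporary file here is just for demonstration and does not need to be on a StoreAll file system:

```shell
# Demonstration: chmod a-w clears only the write bits, leaving the
# read and execute bits unchanged.
f=$(mktemp)
chmod 754 "$f"         # rwxr-xr--
chmod a-w "$f"         # write bits removed: r-xr-xr--
stat -c '%a' "$f"      # prints 554
rm -f "$f"
```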
Windows. For a Windows SMB client accessing a file on a StoreAll file system, for example via
a mapped share drive, use the attrib command to make the file read-only:
Z:\mydir> attrib +r myfile.txt
The Z drive specifies the current directory as a mapped share drive.
Setting the atime
UNIX. Use a command such as touch to set the access time to the future:
touch -a -d "30 minutes" myfile.txt
See the touch(1) man page documentation for the time/date formats allowed with the -d option.
You can also enter the following on a Linux command line to see the acceptable date/time strings
for the touch command:
info "Date input formats"
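On a Linux host you can verify the effect of the touch command above. Outside a retention-enabled StoreAll file system this simply moves the atime forward; it only affects retention on a WORM file:

```shell
# Demonstration: set the access time 30 minutes into the future and verify.
f=$(mktemp)
touch -a -d "30 minutes" "$f"
now=$(date +%s)
atime=$(stat -c '%X' "$f")       # atime as seconds since the epoch
echo $(( (atime - now) / 60 ))   # prints 29 or 30
rm -f "$f"
```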
Windows. Windows does not include a touch command. Instead, use a third-party tool such as
cygwin or FileTouch to set the access time to the future.
NOTE: For SMB users setting the access time manually for a file, the maximum retention period
is 100 years from the date the file was retained. For NFS users setting the access time manually
for a file, the retention expiration date must be before February 5, 2106.
The access time has the following effect on the retention period:
•	If the access time is set to a future date, the retention period of the file is set so that retention expires at that date.
•	If the access time is not set, the file inherits the default retention period for the file system. Retention expires at that period in the future, starting from the time the file is set read-only.
•	If the access time is not set and the default retention period is zero, the file will become WORM but not retained, and can be deleted.
You can change the retention period if necessary; see “Changing a retention period” (page 206).
Viewing the retention information for a file
To view the retention information for a file, run the following command:
ibrix_reten_adm -l -f FSNAME -P PATHLIST
For example:
# ibrix_reten_adm -l -f sales_fs -P /sales_fs/dir1/contacts.txt
/sales_fs/dir1/contacts.txt: state={retained} retain-to:{2011-Nov-10 15:55:06} [period: 182d15h (15778800s)]
In this example, contacts.txt is a retained file, its retention period expires on November 10,
2011, and the length of the retention period is 182 days, 15 hours. The period displayed by the
ibrix_reten_adm command shows the length of the retention period from when retention was
first applied to the file. If a file expires from retention, and you later re-retain a file to a new future
date, the period is the total length of time from the first retention.
File administration
To administer files from the Management Console, select File Administration on the WORM/Data
Retention panel. Select the action you want to perform on the WORM/Data Retention – File
Administration dialog box.
To administer files from the CLI, use the ibrix_reten_adm command.
IMPORTANT: Do not use the ibrix_reten_adm command on a file system that is not enabled
for data retention.
Specifying path lists
When using the Management Console or the ibrix_reten_adm command, you need to specify
paths for the files affected by the retention action. The following rules apply when specifying path
lists:
•	A path list can contain one or more entries, separated by commas.
•	Each entry can be a fully-qualified path, such as /myfs1/here/a.txt. An entry can also be relative to the file system mount point. For example, if myfs1 is mounted at /myfs1, the path here/a.txt is a valid entry.
•	A relative path cannot begin with a slash (/). Relative paths are always relative to the mount point; unlike other UNIX commands, they cannot be relative to the user’s current directory.
•	A directory cannot be specified in a path list. Directories themselves have no retention settings, and the command returns an error message if a directory is entered.
To apply an action to all files in a directory, you need to specify the paths to the files. You can use wildcards in the pathnames, such as /my/path/*,/my/path/.??*. The command does not apply the action recursively; you need to enter subdirectories explicitly.
To apply a command to all files in all subdirectories of the tree, you can wrap the
ibrix_reten_adm command in a find script (or other similar script) that calls the command
for every directory in the tree. For example, the following command sets a legal hold on all files
in the specified directory, except for dot-hidden files such as .bashrc:
find /ibrixFS/mydir -type d -exec ibrix_reten_adm -h -f ibrixFS -P {}/* \;
The following script includes files beginning with a dot, such as .a or .bashrc. (This includes
files uploaded to the file system, not file system files such as the .archiving tree.)
find /ibrixFS/mydir -type d -exec ibrix_reten_adm -h -f ibrixFS -P {}/*,{}/.??*,{}/.[!.]* \;
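The glob patterns used in these path lists behave the same in any Linux shell. The directory and file names below are made up purely for demonstration:

```shell
# Demonstration of the wildcard patterns used in the path lists above.
d=$(mktemp -d)
touch "$d/report.txt" "$d/.a" "$d/.bashrc"
cd "$d"

echo *        # prints: report.txt   (non-dot files only)
echo .??*     # prints: .bashrc      (dot files with two or more characters after the dot)
echo .[!.]*   # prints: .a .bashrc   (dot files, excluding . and ..)
```

Note that .??* alone misses single-character dot files such as .a, which is why the second find example adds the .[!.]* pattern.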
Setting or removing a legal hold
When a legal hold is set on a retained or WORM file, the file cannot be deleted until the hold is
released, even if the retention period has expired. On the WORM/Data Retention – File
Administration dialog box, select Set a Legal Hold and specify the appropriate file.
To remove a legal hold from a file, select Remove a Legal Hold and specify the appropriate file. When
the hold is removed, the file is again under the control of its original retention policy.
To set a legal hold from the CLI, use this command:
ibrix_reten_adm -h -f FSNAME -P PATHLIST
To remove a legal hold from the CLI, use this command:
ibrix_reten_adm -r -f FSNAME -P PATHLIST
Changing a retention period
If necessary, you can change the length of the current retention period. For example, you might
want to assign a different retention period to a retained file currently using the default retention
period. This is done by resetting the expiration time of the period. Set the atime to a new retention
expiration date/time in the future. See “Setting the atime” (page 203). If the retention mode is
Enterprise, the new expiration time must be later than the current expiration time. If the retention
mode is Relaxed, the new expiration time can be earlier or later than the current expiration time.
Use the File Administration panel on the Management Console or the ibrix_reten_adm
command. For either the Management Console or CLI, the retention mode (Enterprise or Relaxed)
is ignored, and the retention expiration time can be set earlier or later than the existing period.
On the WORM/Data Retention – File Administration dialog box, select Reset Expiration Time and
specify the appropriate file. When you set the new expiration time, the length of the retention
period is adjusted accordingly. The retention expiration time is set to that amount of time in the
future starting from now, not that amount of time from the original start of retention. If the resulting
total length of the retention period, from the time it was first retained, is less than the minimum
retention period for the file system, the expiration time will be adjusted to match the minimum
retention period. Similarly, if the new time will exceed the maximum retention period, the expiration
time will be adjusted to match the maximum retention period.
To reset the expiration time using the CLI:
ibrix_reten_adm -e expire_time -f FSNAME -P PATHLIST
If you specify an interval such as 20m (20 minutes) for the expire_time, the retention expiration
time is set to that amount of time in the future starting from now, not that amount of time from the
original start of retention. If you specify an exact date/time such as 19:20:02 or 2/16/2012 for
the expire_time, the command sets the retention expiration time to that exact time. If the new
exact date/time is in the past, the file immediately expires from retention and becomes WORM,
but it is no longer retained.
See the Linux date(1) man page for a description of the valid date/time formats for the
expire_time parameter.
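The two interpretations can be seen with GNU date itself, which accepts the same kinds of strings referenced above:

```shell
# GNU date accepts both the interval and the exact forms mentioned above.
now=$(date +%s)
rel=$(date -d "20 minutes" +%s)    # interval: resolves to 20 minutes from now
echo $(( (rel - now) / 60 ))       # prints 19 or 20

date -d "2/16/2012" +%Y-%m-%d      # exact date: prints 2012-02-16
```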
Removing the retention period
When you remove the retention period from a retained file, the file becomes a WORM file. On
the WORM/Data Retention – File Administration dialog box, select Remove Retention Period and
specify the appropriate file.
To remove the retention period using the CLI:
ibrix_reten_adm -c -f FSNAME -P PATHLIST
Removing the WORM attribute from a file
You cannot remove the WORM attribute from a file. If you need to modify a WORM file that has
been retained, copy the file elsewhere, delete the original file (administratively if necessary), then
copy the file back as a normal file.
Deleting a file administratively
This option allows you to delete a file that is under the control of a data retention policy. On the
WORM/Data Retention – File Administration dialog box, select Administrative Delete and specify
the appropriate file.
CAUTION: Deleting files administratively removes them from the file system, regardless of the
data retention policy.
To delete a file using the CLI:
ibrix_reten_adm -d -f FSNAME -P PATHLIST
Running data validation scans
Scheduling a validation scan
When you use the Management Console to enable a file system for data validation, you can set
up a schedule for validation scans. You might want to run additional scans of the file system at
other times, or you might want to scan particular directories in the file system.
NOTE: Although you can schedule multiple scans of a file system, only one scan can run at a
time for a given file system.
To schedule a validation scan, select the file system on the Management Console, and then select
Active Tasks from the lower navigator. Select New to open the Starting a New Task dialog box.
Select Data Validation as the Task Type.
When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be
scanned if necessary.
Go to the Schedule tab to specify when you want to run the scan.
Starting an on-demand validation scan
You can run a validation scan at any time. Select the file system on the Management Console,
and then select Active Tasks from the lower navigator. Click New to open the Starting a New Task
dialog box. Select Data Validation as the Task Type.
When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be
scanned if necessary and click OK.
To start an on-demand validation scan from the CLI, use the following command:
ibrix_datavalidation -s -f FSNAME [-d PATH]
Viewing, stopping, or pausing a scan
Scans in progress are listed on the Active Tasks panel on the Management Console. If you need
to halt the scan, click Stop or Pause on the Active Tasks panel. Click Resume to resume the scan.
To view the progress of a scan from the CLI, use the ibrix_task command. The -s option lists
scheduled tasks.
ibrix_task -i [-f FILESYSTEMS] [-h HOSTNAME]
To stop a scan, use this command:
ibrix_task -k -n TASKID [-F] [-s]
To pause a scan, use this command:
ibrix_task -p -n TASKID
To resume a scan, use this command:
ibrix_task -r -n TASKID
Viewing validation scan results
While a validation scan is running, it is listed on the Active Tasks panel on the Management
Console (select the file system, and from the lower Navigator select Active Tasks > WORM/Data
Retention > Data Validation Scans). Information about completed scans is listed on the Inactive
Tasks panel (select the file system, and then select Inactive Tasks from the lower Navigator). On
the Inactive Tasks panel, select a validation task and then click Details to see more information
about the scan.
A unique validation summary file is also generated at the end of each scan. The files are created
under the root directory of the file system at {filesystem root}/.archiving/validation/
history. The validation summary files are named <ID-n>.sum, such as 1-0.sum, 2-0.sum,
and so on. The ID is the task ID assigned by StoreAll software when the scan was started. The
second number is 0 unless there is an existing summary file with the same task ID, in which case
the second number increments to make the filename unique.
210
Managing data retention
Following is a sample validation summary file:
# cat /fsIbrix/.archiving/validation/history/4-0.sum
JOB_ID=4
FILESYSTEM_NAME=fsIbrix
FILESYSTEM_MOUNT_DIR=/fsIbrix
PATH=/fsIbrix/./directory90
SCANTYPE=hashsum
CREATE_TIME=Wed Jul 18 05:18:12 2012
START_TIME=Wed Jul 18 05:18:12 2012
KICKOFF_TIME=Wed Jul 18 05:18:12 2012
STOP_TIME=Mon Jul 23 14:36:59 2012
NUM_JOB_ERRORS=0
NUM_FILES_VALIDATED=1000000
NUM_FILES_SKIPPED=0
NUM_CONTENT_INCONSISTENCIES=0
NUM_METADATA_INCONSISTENCIES=0
#
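Because the summary file uses simple KEY=VALUE lines, it can be checked from a script. A minimal sketch; check_scan is a hypothetical helper, not a StoreAll command, and relies only on the format shown above:

```shell
# check_scan: report whether a validation summary recorded inconsistencies.
# Hypothetical helper; parses the KEY=VALUE lines of an <ID-n>.sum file.
check_scan() {
    sum_file=$1
    content=$(grep '^NUM_CONTENT_INCONSISTENCIES=' "$sum_file" | cut -d= -f2)
    meta=$(grep '^NUM_METADATA_INCONSISTENCIES=' "$sum_file" | cut -d= -f2)
    # Treat a missing counter as a failure rather than a clean result.
    [ "${content:-1}" -eq 0 ] && [ "${meta:-1}" -eq 0 ]
}
```

For example, check_scan /fsIbrix/.archiving/validation/history/4-0.sum succeeds only when both inconsistency counters are zero.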
Viewing and comparing hash sums for a file
If a validation scan summary file reports inconsistent hash sums for a file and you want to investigate
further, use the showsha (SHA test utility) and showvms (Validation Express Query lookup utility)
commands to compare the current hash sums with the hash sums that were originally calculated
for the file.
The showsha command calculates and displays the hash sums for a file. For example:
# /usr/local/ibrix/sbin/showsha rhnplugin.py
Path hash: f4b82f4da9026ba4aa030288185344db46ffda7b
Meta hash: 80f68a53bb4a49d0ca19af1dec18e2ff0cf965da
Data hash: d64492d19786dddf50b5a7c3bebd3fc8930fc493
The showvms command displays the hash sums stored for the file. For example:
# /usr/local/ibrix/sbin/showvms rhnplugin.py
VMSQuery returned 0
Path hash: f4b82f4da9026ba4aa030288185344db46ffda7b
Meta hash: 80f68a53bb4a49d0ca19af1dec18e2ff0cf965da
Data hash: d64492d19786dddf50b5a7c3bebd3fc8930fc493
last attempt: Wed Dec 31 17:00:00 1969
last success: Wed Dec 31 17:00:00 1969
changed: 0
In this example, the hash sums match and there are no inconsistencies. The 1969 dates in the showvms output indicate that the file has not yet been validated.
Handling validation scan errors
When a validation scan detects files having hash values inconsistent with their original values, it
displays an alert in the events section of the Management Console. However, the alert lists only
the first inconsistent file detected. It is important to check the validation summary report to identify
all inconsistent files that were flagged during the scan.
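Because only the first inconsistent file appears in the alert, it can be useful to sweep the validation history for scans whose counters are nonzero. A minimal sketch; inconsistent_scans is a hypothetical helper and the history path is per file system:

```shell
# inconsistent_scans: print summary files whose inconsistency counters are
# nonzero. Hypothetical helper, not a StoreAll command.
inconsistent_scans() {
    history_dir=$1
    grep -l -E '^NUM_(CONTENT|METADATA)_INCONSISTENCIES=[1-9]' \
        "$history_dir"/*.sum 2>/dev/null
}

# Example: inconsistent_scans /fsIbrix/.archiving/validation/history
```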
When there are inconsistencies, it is necessary to determine whether the cause is file corruption
or checksum corruption. Compare the file checksum with the backup file:
/usr/local/ibrix/sbin/showsha <fileFromBackup>
Checksum corruption
If the checksums of the <filesystem/file> and <fileFromBackup> are identical, the
.archiving directory may have been corrupted (a checksum corruption). If this is the case, you
must restore the checksums:
• If only a few files are inconsistent and you want to postpone restoring the checksums, you can
back up the files with a checksum inconsistency, delete those files from the file system, and
restore the backed up files to the file system.
• If many checksums are corrupted, there may be a hardware failure. To restore the checksums,
complete the following procedure:
IMPORTANT: This procedure should be performed only under the guidance of HP Support.
1. Create a backup of the existing .archiving directory for future reference.
2. Replace the faulty hardware.
3. Clean up the existing checksums.
4. Start a new data validation scan on the entire file system to compute the checksums.
File corruption
If the checksums of the <filesystem/file> and <fileFromBackup> are not identical, there
is data (content) corruption.
To replace an inconsistent file, follow these steps:
1. Obtain a good version of the file from a backup or a remote replication.
2. If the file is retained, remove the retention period for the file, using the Management Console
or the ibrix_reten_adm -c command.
3. Delete the file administratively using the Management Console or the ibrix_reten_adm
-d command.
4. Copy/restore the good version of the file to the data-retained file system or directory. If you
recover the file using an NDMP backup application, the proper retention expiration period is
applied from the backup copy of the file. If you copy the file another way, you will need to
set the atime and read-only status.
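If you copy the file back another way, the atime and read-only status can be set with standard Linux tools. A hedged sketch: reapply_retention is a hypothetical helper, the paths and dates are examples, and the assumption that the file's atime encodes the retention expiration should be verified for your release:

```shell
# reapply_retention: set the atime (intended expiration) and read-only status
# on a restored file. Hypothetical helper; the atime-as-expiration assumption
# should be confirmed for your StoreAll release.
reapply_retention() {
    file=$1 expire=$2               # expire in touch -t format: [CC]YYMMDDhhmm
    touch -a -t "$expire" "$file"   # set access time to the expiration
    chmod a-w "$file"               # remove write permission (WORM state)
}

# Example: reapply_retention /fs1/retained/report.doc 202512310000
```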
Creating data retention reports
Three reports are available: data retention, utilization, and validation. The reports can show results
either for the entire file system or for individual tiers. To generate a tiered report, the file system
must include at least one tier.
You can display reports in PDF, CSV (CLI only), or HTML (Management Console only) format.
The latest files in each format are saved in /usr/local/ibrix/reports/output/<report
type>/.
When you generate a report, the system creates a CSV file containing the data for the report. The
latest CSV file is also stored in /usr/local/ibrix/reports/output/<report type>/.
NOTE: Older report files are not saved. If you need to keep report files, save them in another
location before you generate new reports.
The data retention report lists ranges of retention periods and specifies the number of files in each
range. The Number of Files reported on the graph scales automatically and is reported as individual
files, thousands of files, or millions of files. The following example shows a data retention report
for an entire file system.
The utilization report summarizes how storage is utilized between retention states and free space.
The next example shows the first page of a utilization report broken out by tiers. The results for
each tier appear on a separate page. The total size scales automatically, and is reported as MB,
GB, or TB, depending on the size of the file system or tier.
A data validation report shows when files were last validated and reports any mismatches. A
mismatch can be either content or metadata. The Number of Files scales automatically and is
reported as individual files, thousands of files, or millions of files.
Generating and managing data retention reports
To run an unscheduled report from the Management Console, select Filesystems in the upper
Navigator and then select WORM/Data Retention in the lower Navigator. On the WORM/Data
Retention panel, click Run a Report. On the Run a WORM/Data Protection Summary Report dialog
box, select the type of report to view, and then specify the output format.
If an error occurs during report generation, a message appears in red text on the report. Run
the report again.
Generating data retention reports from the CLI
You can generate reports at any time using the ibrix_reports command. Scheduled reports
can be configured only on the Management Console.
First run the following command to scan the file system and collect the data to be used in the
reports:
ibrix_reports -s -f FILESYSTEM
Then run the following command to generate the specified report:
ibrix_reports -g -f FILESYSTEM -n NAME -o OUTPUT_FORMAT
Use the -n option to specify the type of report, where NAME is one of the following:
• retention
• retention_by_tier
• validation
• validation_by_tier
• utilization
• utilization_by_tier
The output format specified with -o can be csv or pdf.
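A wrapper script can check the report type against this list before invoking ibrix_reports. A hedged sketch; valid_report_name is a hypothetical convenience helper, not a StoreAll command:

```shell
# valid_report_name: check a report type against the NAME list above.
# Hypothetical helper for use in wrapper scripts.
valid_report_name() {
    case $1 in
        retention|retention_by_tier|validation|validation_by_tier|\
        utilization|utilization_by_tier) return 0 ;;
        *) return 1 ;;
    esac
}

# Example: valid_report_name retention && ibrix_reports -g -f fs1 -n retention -o pdf
```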
Using hard links with WORM files
You can use the Linux ln command without the -s option to create a hard link to a normal
(non-WORM) file on a retention-enabled file system. If you later make the file a WORM file, the
following restrictions apply until the file is deleted:
• You cannot make any new hard links to the file. Doing so would increment the link count in
the file's inode metadata, which is not allowed under WORM rules.
• You can delete hard links (the original file system entry or a hard-link entry) without deleting
the other file system entries or the file itself. WORM rules allow the link count to be
decremented.
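The link count that these rules protect can be inspected with standard tools. On a normal (non-WORM) file the behavior looks like this (temporary paths used for illustration):

```shell
# Hard-link counting on a normal, non-WORM file.
rm -f /tmp/demo.txt /tmp/demo.link
: > /tmp/demo.txt
ln /tmp/demo.txt /tmp/demo.link   # allowed while the file is not WORM
stat -c %h /tmp/demo.txt          # prints 2 (link count incremented)
rm /tmp/demo.link                 # decrementing the count is always allowed
stat -c %h /tmp/demo.txt          # prints 1
```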
Using remote replication
When using remote replication for file systems enabled for retention, the following requirements
must be met:
• The source and target file systems must use the same retention mode (Enterprise or Relaxed).
• The default, maximum, and minimum retention periods must be the same on the source and
target file systems.
• A clock synchronization tool such as ntpd must be used on the source and target clusters. If
the clock times are not in sync, file retention periods might not be handled correctly.
Also note the following:
• Multiple hard links on retained files on the replication source are not replicated; only the first
hard link encountered by remote replication is replicated. (The retainability attributes on the
file on the target prevent the creation of any additional hard links.) For this reason, HP strongly
recommends that you do not create hard links on retained files.
• For continuous remote replication, if a file is replicated as retained but its retainability is later
removed on the source file system (using data retention management commands), the new
file's attributes and any further changes to that file will fail to replicate. The retainability
attributes that the file already has on the target cause the file system on the target to prevent
remote replication from changing it.
• When a legal hold is applied to a file, the legal hold is not replicated on the target. If the file
on the target should have a legal hold, you will also need to set the legal hold on that file.
• If a file has been replicated to a target and you then change the file's retention expiration
time with the ibrix_reten_adm -e command, the new expiration time is not replicated to
the target. If necessary, also change the file's retention expiration time on the target.
Backup support for data retention
The supported method for backing up and restoring WORM/retained files is to use NDMP with
DMA applications. Other backup methods will back up the file data, but will lose the retention
configuration.
Troubleshooting data retention
Limitation on remote replication of retention-enabled file systems
When there is a large I/O load on the source file system and the auto-commit period is set to a
low value, the following errors appear in the CRR logs:
• Partial transfer: ibrcfrworker returned error 23 (Partial transfer), which
can be expected in retention-enabled (WORM) environments when CRR cannot modify a
retained file on the target cluster, or in environments where the filters on the target cluster
differ from those on the source cluster.
• Data stream error: ibrcfrworker returned error 12 (RERR_STREAMIO), which
can be expected in retention-enabled (WORM) environments when CRR cannot modify a
retained file on the target cluster.
To avoid these errors, set the auto-commit period to a higher value (the minimum value is five
minutes and the maximum value is one year).
Attempts to edit retained files can create empty files
If you attempt to edit a WORM file in the retained state, applications such as the vi editor cannot
edit the file, but they can leave empty temporary files on the file system.
Applications such as vi can appear to update WORM files
If you use an application such as vi to edit a WORM file that is not in the retained state, the file
will be modified, and it will be retained with the default retention period. This is the expected
behavior. The modification occurs because the editor edits a temporary copy of the file and tries
to rename it over the real file. When the rename fails, the editor deletes the original file and retries
the rename, which succeeds (because unretained WORM files can be deleted).
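The rename-over save pattern the editor uses can be seen on an ordinary file system (paths are examples):

```shell
# The editor save pattern described above, on a normal (non-WORM) file system.
printf 'v1\n' > /tmp/doc.txt
printf 'v2\n' > /tmp/doc.txt.tmp   # the editor's temporary copy
mv /tmp/doc.txt.tmp /tmp/doc.txt   # on a retained WORM file this rename is
                                   # refused, so the editor deletes and retries
cat /tmp/doc.txt                   # prints v2
```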
Cannot enable data retention on a file system with a bad segment
Data retention must be set on all segments of a file system to ensure that all files can be managed
properly. File systems with bad segments cannot be enabled for data retention. If a file system has
a bad segment, evacuate or remove the segment first, and then enable the file system.
15 Express Query
Express Query provides a per-file system database of system and custom metadata, and audit
histories of system and file activity. When Express Query is enabled on the file system, you can
manage the metadata service, configure auditing, create reports from the audit history, assign
custom metadata and certain system metadata to files and directories, and query for selected
metadata from files.
NOTE:
Express Query can be enabled only on file systems that have Data Retention enabled.
Express Query provides the following components:

Metadata Service
The processes that manage the set of per-file system metadata databases, including auditing
of file changes and accessing the database in response to REST API requests. See "Managing
the metadata service" (page 217).

Auditing
Auditing lets you configure which system and file changes go into the Express Query database
and create reports of selected parts of the audit history from the database. See "Managing
auditing" (page 222).

StoreAll REST API in file-compatible mode
The StoreAll REST API share in file-compatible mode provides programmatic access to
user-stored files and their metadata. The metadata is stored in the HP StoreAll Express Query
database in the StoreAll cluster and provides fast query access to metadata without scanning
the file system. See "HTTP-REST API file-compatible mode shares" (page 152).
IMPORTANT: Express Query cannot be used with the StoreAll REST API in object mode.
Managing the metadata service
The metadata service includes the database management processes, as well as the processes that
watch for state changes and log the associated metadata in the Express Query database. The
service is controlled with the ibrix_archiving command. One set of processes runs on the
Active Fusion Manager server and manages all per-file system databases. Stopping the service
disables access to all file systems’ databases. This section provides the basic commands for
managing the metadata service, in addition to the following sections on metadata:
• "Saving and importing file system metadata" (page 219)
• "Metadata and continuous remote replication" (page 221)
View the status of the metadata service:
ibrix_archiving -i
The Services section of the Management Console dashboard also displays the status of the metadata
service.
List file systems registered for the metadata service:
ibrix_archiving -l
Start the metadata service:
ibrix_archiving -s
Stop the metadata service:
ibrix_archiving -S [-F] [-t timeout secs]
The -t option specifies the time (in seconds) to wait for the service to stop gracefully.
The -F option forcefully stops the archiving daemons and disables database access for all file
systems enabled for Express Query. When you restart the service after using -F, the database
enters recovery mode, which can take a long time to complete, depending on the size of the
database.
Restart the metadata service:
ibrix_archiving -r
Backing up and restoring file systems with Express Query data
Express Query stores its metadata database for each file system on the file system, in the
<mountpoint>/.archiving/database directory. Therefore, if you back up a snapshot of
the entire file system, you also back up the database. If you intend to restore the database with
the file system, you must take a snapshot of the entire file system and back up the snapshot, not
the live file system. Backing up the live file system does not produce a consistent view of the files
and the database.
You can:
• Restore the backup to a new StoreAll file system. See "Restoring a backup to a new StoreAll
file system" (page 218).
• Restore the backup to an existing file system that has Express Query enabled. See "Restore
to an existing file system that has Express Query enabled" (page 219).
Restoring a backup to a new StoreAll file system
To restore a backup to a new StoreAll file system:
1. Create a new file system with Express Query not yet enabled.
2. Restore the backed up file system to the new file system.
3. Enable Express Query on the new file system, either in the Management Console or by the
CLI command:
ibrix_fs -T -E -f FSNAME
4. (Optional) Enable auditing, using the ibrix_fs -A [-f FSNAME] -oa ... CLI command
or in the Management Console.
5. (Optional) Create REST API shares. See "Using HTTP" (page 114).
6. Express Query re-synchronizes the file system and database by using the restored database.
This process might take some time.
7. Wait for the metadata resync process to finish. Enter the following command to monitor the
resync process for a file system:
ibrix_archiving -l
The status should be at OK for the file system before you proceed. Refer to the
ibrix_archiving section in the HP StoreAll Storage CLI Reference Guide for information
about the other states.
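The wait in the last step can be automated with a small polling loop. A hedged sketch: wait_for_resync is a hypothetical helper, and the OK-matching pattern is an assumption to be adjusted to the exact ibrix_archiving -l output of your release:

```shell
# wait_for_resync: poll a status command until it reports OK for the given
# file system. Hypothetical wrapper around ibrix_archiving -l; the grep
# pattern assumes the file system name and status appear on one line.
wait_for_resync() {
    fs=$1
    status_cmd=${2:-"ibrix_archiving -l"}
    until $status_cmd | grep -E "(^|[[:space:]])$fs([[:space:]]|\$).*OK" >/dev/null; do
        sleep 30
    done
}

# Example: wait_for_resync fs1
```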
Restore to an existing file system that has Express Query enabled
To restore a backup to an existing file system that has Express Query enabled:
1. Disable the Express Query feature for the file system, including removing any StoreAll REST
API shares. Disable the auditing feature before you disable the Express Query feature.
a. To disable auditing, enter the following command:
ibrix_fs -A [-f FSNAME] -oa audit_mode=off
b. Remove all StoreAll REST API shares created in the file system by entering the following
command:
ibrix_httpshare -d -f <fs_name>
c. To disable the Express Query settings on the file system, enter the following command:
ibrix_fs -T -D -f FSNAME
2. Delete the previously existing metadata database:
rm -Rf <mountpoint>/.archiving/database
3. Delete the previously existing archive journal files that the file system creates for Express Query
to ingest:
rm -Rf <mountpoint>/.audit/*
4. Restore the backed up file system to this file system, overwriting existing files.
5. Re-enable Express Query on the file system, either in the Management Console or by the CLI
command:
ibrix_fs -T -E -f <FSNAME>
6. (Optional) Enable auditing, using the ibrix_fs -A [-f FSNAME] -oa … CLI command
or in the Management Console.
7. (Optional) Create REST API shares. See "Using HTTP" (page 114).
Express Query re-synchronizes the file system and database by using the restored database
information. This process might take some time.
8. Wait for the metadata resync process to finish. Enter the following command to monitor the
resync process for a file system:
ibrix_archiving -l
The status should be at OK for the file system before you proceed. See the ibrix_archiving
section in the HP StoreAll Storage CLI Reference Guide for information about the other states.
Saving and importing file system metadata
Use the following procedures to save, or export, metadata that is stored only in the Express Query
database and not in the files themselves. You can then import the metadata later on a CRR target.
You can also import the metadata if you need to recreate the Express Query database on the file
system from which you exported the metadata. If you are recovering a complete file system with
NDMP backup or a supported non-NDMP backup application, you do not need to import the
metadata. The Express Query database and the metadata are included in a full file system backup
and are therefore recovered with the file system.
Saving custom metadata for a file system
Use the perl script MDExport to save custom metadata in a CSV file. You can use the file later
to import the metadata. The script has the following syntax:
MDExport.pl [--help|?] --dbconfig /usr/local/Metabox/scripts/startup.xml
--database <dbname> --outputfile <fname> --user ibrix [--verbose]
The options specify the following:
--dbconfig
The metadata configuration file. Use only this path and file name:
/usr/local/Metabox/scripts/startup.xml
--database <dbname>
The database containing the metadata. <dbname> is the name of the file system.
--outputfile <fname>
The CSV output file used to save the metadata.
--user ibrix
The username for accessing the database. Use only the "ibrix" username.
Use perl to invoke the script. For example:
perl /usr/local/ibrix/bin/MDExport.pl --database ibrixFS --user ibrix
--dbconfig /usr/local/Metabox/scripts/startup.xml --outputfile
/home/mydir/save.csv
This command exports metadata from the ibrixFS file system and generates the output file
save.csv in the /home/mydir directory. The CSV file contains a row for every custom attribute,
such as:
subdir/myfile.txt,color,red
NOTE: This command must run as the "ibrix" user on the system to interact with Express Query.
Therefore, the directory containing the output file must be writable by the ibrix user (a member of
the "ibrix-user" group). For example, setting the output directory to world read, write, and execute
permissions lets the file be written regardless of the directory's owning user and group.
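The exported CSV has one path,attribute,value row per custom attribute, so it can be regrouped per file with awk. A minimal sketch; list_attrs is a hypothetical helper, and it assumes attribute values contain no commas:

```shell
# list_attrs: group MDExport rows (path,attribute,value) by file.
# Hypothetical helper; assumes attribute values contain no commas.
list_attrs() {
    awk -F, '{ a[$1] = a[$1] " " $2 "=" $3 }
             END { for (f in a) print f ":" a[f] }' "$1"
}

# Example: list_attrs /home/mydir/save.csv
```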
Saving audit journal metadata
The ibrix_audit_reports command saves audit data from Express Query on a specific file
system. The data is placed in a CSV file. The command has the following syntax:
ibrix_audit_reports -t SORT_ORDER -f FILESYSTEM [-p PATH] [-b BEGIN_DATE]
[-e END_DATE] [-o class1[,class2,...]]
For example:
ibrix_audit_reports -t unordered -o all -f ibrixFS
This command saves audit data for all events in file system ibrixFS. Use the “unordered” option
for the fastest performance. See the HP StoreAll Storage CLI Reference Guide for more information
about this command.
Importing metadata to a file system
Use the MDImport tool to import a CSV file containing custom or audit metadata into a new
Express Query database. The CSV file can be the output of either the MDExport script or the
ibrix_audit_reports command. The command has the following syntax:
MDImport -f <FSname> -n <Fname> -t <Ftype>
The options specify the following:
-f <FSname>
The file system to receive the import.
-n <Fname>
The name of the CSV file.
-t <Ftype>
The type of metadata being imported (either audit or custom).
The following command imports custom metadata exported by the MDExport script:
MDimport -f newIbrixFs -t custom -n /home/mydir/save.csv
The next command imports audit metadata exported by the ibrix_audit_reports command:
MDimport -f target -t audit -n
simple_report_for_source_at_1341513594723.csv
The ibrix_audit_reports command automatically generates the file name
simple_report_for_source_at_1341513594723.csv.
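The trailing number in these report names is a millisecond UNIX timestamp, so the creation time can be recovered from the file name alone. A small sketch, using the example file name from above:

```shell
# Decode the epoch-milliseconds timestamp embedded in a report file name.
name=simple_report_for_source_at_1341513594723.csv
ms=${name##*_at_}              # strip everything through "_at_"
ms=${ms%.csv}                  # strip the extension
date -u -d "@$((ms / 1000))"   # prints the report creation time in UTC
```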
Metadata and continuous remote replication
When continuous remote replication (CRR) is configured for a file system, or run-once replication
is performed, metadata that is stored only in Express Query from the source cluster is not transferred
to the target cluster. System metadata stored in the inodes, such as file permissions, is transferred
intact by remote replication. Custom and audit history metadata, which are stored only in the
Express Query database, are not transferred, but can be exported manually and periodically to
files that CRR will replicate.
Express Query on the target system processes files replicated to its file system by CRR as new file
creations and modifications, with the dates and times the files were replicated. The audit history
contains the history of file replications from the source system, not the audit history of file accesses
on the source system. The target Express Query is not aware of any custom metadata applied to
files on the source system; however, you can add the source system’s audit history and custom
metadata to the target Express Query’s database by exporting and importing this metadata.
To include Express Query metadata in remote replication, you can create shell scripts (or scripts
in other languages) and set up periodic execution of those scripts by using standard Linux tools
such as “cron”. Such scripts must perform the steps of the export of custom metadata and audit
history described in “Saving and importing file system metadata” (page 219), with parameters
appropriate to your file systems. The exported files are copied to the target automatically by
continuous remote replication (CRR). Those files on the target can be imported into Express Query
running on the target at any time, if the target needs to become the active Express Query-enabled
file system.
The output file listed on the MDExport.pl command line must be in a directory that will be
replicated by CRR.
NOTE: The <mountpoint>/.archiving directory is excluded from replication. The
ibrix_audit_reports command creates its report output file in the <mountpoint>/
.archiving/reports subdirectory. Therefore, after issuing the ibrix_audit_reports
command, your script must move or copy the report output file to another directory on the file
system outside the .archiving tree for it to be replicated.
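The post-report step in such a script might look like the following sketch. The helper name and destination directory are hypothetical, and the newest-file selection assumes report names contain no spaces:

```shell
# move_latest_report: relocate the newest audit report out of the excluded
# .archiving tree so that CRR replicates it. Hypothetical helper; paths in
# the example are illustrative.
move_latest_report() {
    src=$1 dst=$2
    latest=$(ls -t "$src"/*_report_for_*.csv 2>/dev/null | head -1)
    [ -n "$latest" ] && mv "$latest" "$dst"/
}

# Example: move_latest_report /fs1/.archiving/reports /fs1/replicated-reports
```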
Metadata and synchronized server times
Metadata database updates often require coordinated actions from multiple StoreAll servers.
Therefore, it is important to keep your nodes as close together in time as possible, using NTP.
Modifications to the same file from different servers within the time difference between StoreAll
servers could cause inconsistencies between system metadata stored in the file inode and the
metadata stored in Express Query. To ensure consistency, keep servers synchronized with NTP
and avoid modifying a file from clients connected to different StoreAll servers at the same time. If
the client cannot choose the server, such as when external load balancers are used, separate
client operations on the same file by more than the maximum expected time difference between
any two StoreAll servers.
Managing auditing
Auditing lets you:
• Determine which events have already been captured in the Express Query database, and
control which file changes are captured. See "Audit log" (page 222) for more information.
• Generate audit reports describing what is in the Express Query database. See "Audit log
reports" (page 223) for more information.
Audit log
The audit log provides a detailed history of activity for specific file system events. The Audit Log
panel shows the current audit configuration.
To change the configuration, click Modify on the Audit Log panel. On the Modify Audit Settings
dialog box, you can change the expiration policies and schedule, and you can change the events
that are audited. The default Audit Logs Expiration Policy is 45 days; if you need to keep audit
history for a longer period of time, increase the time period. Enable and disable event types and
groups using the checkboxes and the arrows to move events between the Disabled and Enabled
lists. If an event is not selected for auditing, it cannot be included in an audit report. By default,
all events are enabled.
If files are accessed frequently, disable the "File Read" event for significantly better system
performance and a smaller audit log. Monitor the space used by the
<mountpoint>/.archiving/database tree, which includes both current metadata and
audit log history. To reduce space usage, reduce the number of event types enabled for auditing
or shorten the Audit Logs Expiration Policy.
Audit log reports
Audit log reports include metadata for selected file system events that occurred during a specific
time period. To generate an audit log report, click Run a Report on the Audit Log panel. Specify
the parameters for the report on the Run an Audit Log Report dialog box.
NOTE: Although you can select any of the events for a report, an event must be selected for
auditing to appear in the report. Use ibrix_fs -A or the Modify Audit Settings dialog box to
change the events selected for auditing.
The audit reports are in CSV (comma-separated) format and are placed in the following directory:
<file_system_mountpoint>/.archiving/reports
The file names have this format:
<report_type>_report_for_<FS_name>_at_<numeric_timestamp>.csv
For example:
file_report_for_ibrixFS_at_1343771410270.csv
simple_report_for_ibrixFS_at_1343772788085.csv
Following are definitions for the less obvious fields in an audit report.
seqno
The sequence number of this event, incremented for each event processed per node in the
StoreAll cluster.
eshost
The ID of the node in the StoreAll cluster that recorded this event (a hex string).
eventsuccess
Whether the attempted operation succeeded or failed.
eventerrorcode
Defined in the standard Linux header files errno_base.h and errno.h.
description
Currently unused.
reserved1/2/3
Always zero; ignore these fields.
POID_lo32/hi64
The Permanent Object ID that uniquely identifies the file within the StoreAll cluster (a 96-bit
integer split in two parts).
*time[n]sec
The seconds and nanoseconds of that time, in UNIX epoch time (the number of seconds since
the start of Jan 1, 1970, UTC).
mode
The Linux mode/permission bits (a combination of the values shown by the Linux man 2 stat
command).
*hash, content*, meta*
Currently unused.
To generate reports from the command line, use the ibrix_audit_reports command:
ibrix_audit_reports -t SORT_ORDER -f FILESYSTEM [-p PATH] [-b BEGIN_DATE]
[-e END_DATE] [-o class1[,class2,...]]
See the HP StoreAll Storage CLI Reference Guide for more information about this command,
including the events that can be specified for the report.
Managing audit reports
When you create or modify a file system, you can set the following audit report options to specify
when old reports are deleted:
• Audit log reports expiration policy: whether reports should be deleted after a specific number
of days, weeks, months, or years, or should never be deleted.
• Audit log reports expiration schedule: the time each day at which expired audit reports are
deleted.
You can also set these options for one or more file systems using the ibrix_audit_reports
command.
Be sure to monitor the space used by audit reports, especially if you are retaining them for a long
period of time.
16 Configuring Antivirus support
The StoreAll Antivirus feature can be used with supported Antivirus software, which must be run
on systems outside the cluster. These systems are called external virus scan engines.
To configure the Antivirus feature on a StoreAll cluster, complete these steps:
1. Add the external virus scan engines to be used for virus scanning. You can schedule periodic
updates of virus definitions from the virus scan engines to the cluster nodes.
2. Enable Antivirus on the file systems.
3. Configure Antivirus settings as appropriate for your cluster.
For file sharing protocols other than SMB (CIFS), when Antivirus is enabled on a file system, scans
are triggered when a file is first read. Subsequent reads to the file do not trigger a scan unless the
file has been modified or the virus definitions have changed. For SMB, you must specify the file
operations that trigger a scan (open, close, or both).
The scans are forwarded to an external scan engine, which blocks the operation until the scan is
complete. After a successful scan, if the file is found to be infected, the system reports a
permission denied error message as the result of the file operation. If the file is clean, the file
operation is allowed to go through.
All infected files are quarantined by default. Use the quarantine utility (ibrix_avquarantine)
to manage the quarantined infected files (for example, to move, delete, list, or reset them).
For more information, see the HP StoreAll Storage CLI Reference Guide.
You can define Antivirus exclusions on directories in a file system to exclude files from being
scanned. When you define an exclusion rule for a directory, all files/folders in that directory
hierarchy are excluded from Antivirus scans based on the rule.
Antivirus support can be configured from the Management Console or the CLI. On the Management
Console, select Cluster Configuration from the Navigator, and then select Antivirus from the lower
Navigator. The Antivirus Settings panel displays the current configuration.
On the CLI, use the ibrix_avconfig command to configure Antivirus support. Use the ibrix_av
command to update Antivirus definitions or view statistics.
Adding or removing external virus scan engines
The Antivirus software is run on external virus scan engines. You will need to add these systems
to the Antivirus configuration.
IMPORTANT: HP recommends that you add a minimum of two virus scan engines to provide
load balancing for scan requests and to prevent loss of scanning if one virus scan engine becomes
unavailable.
On the Management Console, select Virus Scan Engines from the lower Navigator to open the
Virus Scan Engines panel, and then click Add on that panel. On the Add dialog box, enter the IP
address of the external scan engine and the ICAP port number configured on that system.
NOTE: The default port number for ICAP is 1344. HP recommends that you use this port, unless
it is already in use by another activity. You may need to open this port for TCP/UDP in your firewall.
To remove an external virus scan engine from the configuration, select that system on the Virus
Scan Engines panel and click Delete.
To add an external virus scan engine from the CLI, use the following command:
ibrix_avconfig -a -S -I IPADDR -p PORTNUM
The port number specified here must match the ICAP port number configured on the virus scan
engines.
Use the following command to remove an external virus scan engine:
ibrix_avconfig -r -S -I IPADDR
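Because the ICAP port on each scan engine must be reachable from the cluster nodes (and may be blocked by a firewall), a quick TCP probe can confirm connectivity before you add an engine. The following Python sketch is an illustration only; the host address is hypothetical and the probe is not part of the StoreAll CLI:

```python
import socket

def icap_reachable(host, port=1344, timeout=3.0):
    """Return True if a TCP connection to the engine's ICAP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical address): verify the default ICAP port responds
# before running: ibrix_avconfig -a -S -I 192.168.1.50 -p 1344
# icap_reachable("192.168.1.50")
```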
Enabling or disabling Antivirus on StoreAll file systems
On the Management Console, select AV Enable/Disable file systems from the lower Navigator to
open the AV Enable Disable panel, which lists the file systems in the cluster. Select the file system
to be enabled, click Enable, and confirm the operation. To disable Antivirus, click Disable.
The CLI commands are as follows:
Enable Antivirus on all file systems in the cluster:
ibrix_avconfig -e -F
Enable Antivirus on specific file systems:
ibrix_avconfig -e -f FSLIST
If you specify more than one file system, use commas to separate the file systems.
Disable Antivirus on all file systems:
ibrix_avconfig -d -F
Disable Antivirus on specific file systems:
ibrix_avconfig -d -f FSLIST
Updating Antivirus definitions
You should update the virus definitions on the cluster nodes periodically. On the Management
Console, click Update ClusterWide ISTag on the Antivirus Settings panel. The cluster then connects
with the external virus scan engines and synchronizes the virus definitions on the cluster nodes with
the definitions on the external virus scan engines.
NOTE: All virus scan engines should have the same virus definitions. Inconsistencies in virus
definitions can cause files to be rescanned.
Be sure to coordinate the schedules for updates to virus definitions on the virus scan engines and
updates of virus definitions on the cluster nodes.
On the CLI, use the following commands:
Schedule cluster-wide updates of virus definitions:
ibrix_av -t [-S CRON_EXPRESSION]
The CRON_EXPRESSION specifies the time for the virus definition update. For example, the
expression "0 0 12 * * ?" executes this command at noon every day.
View the current schedule:
ibrix_av -l -T
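The expression appears to use Quartz-style cron syntax: six fields ordered seconds, minutes, hours, day-of-month, month, day-of-week, where "?" means no specific value. This field breakdown is an assumption based on the example shown, not a documented StoreAll format:

```python
# Split the example schedule into its assumed Quartz-style fields.
expr = "0 0 12 * * ?"
names = ["seconds", "minutes", "hours", "day_of_month", "month", "day_of_week"]
fields = dict(zip(names, expr.split()))
# With hours set to "12" and the date fields wildcarded,
# the update fires at 12:00:00 every day.
```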
Configuring Antivirus settings
Defining the Antivirus unavailable policy
This policy determines how targeted file operations are handled when an external virus scan engine
is not available. The policies are:
• Allow (Default). All operations triggering scans are allowed to run to completion.
• Deny. All operations triggering scans are blocked and returned with an error. This policy ensures that an infected file is not returned when Antivirus is unavailable.
Following are examples of situations that can cause Antivirus to be unavailable:
• All configured virus scan engines are unreachable.
• The cluster nodes cannot communicate with the virus scan engines because of network issues.
• The number of incoming scan requests exceeds the threads available on the cluster nodes to process the requests.
The Antivirus Settings panel shows the current setting for this policy. To toggle the policy, click
Configure AV Policy.
To set the policy from the CLI, use this command:
ibrix_avconfig -u -g A|D
Defining protocol-specific policies
For certain file sharing protocols (currently only SMB/CIFS), you can specify the file operations
that trigger a scan (open, close, or both). There are three policies:
• OPEN (Default). Scan on open.
• CLOSE. Scan on close.
• BOTH. Scan on open and close.
NOTE: If you configure the protocol-specific policy to CLOSE (scan on close), previously written files are not rescanned automatically when the virus scan engines are updated with newer virus definitions. Also, after a file closes, there is a delay of 35 seconds (to flush all data) before the file is subject to scanning; any read or open of the file during this time does not trigger a scan.
Because virus detection and virus definition updates always lag the discovery of new viruses, HP highly recommends configuring Antivirus with the BOTH option so that previously written files are rescanned on open whenever new virus definitions are provided, ensuring protection against virus infections. Scan on close should be used only as an optimization to take the virus scan penalty at close time instead of at open time.
To set the policy:
1. Select Protocol Scan Settings from the lower Navigator tree. The AV Protocol Settings panel then displays the current setting.
2. To set or change the setting, click Modify on the panel and then select the appropriate setting from the Action dialog box.
To set the policy from the CLI, use this command:
ibrix_avconfig -u -k PROTOCOL -G O|C|B
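Conceptually, each policy maps to the set of SMB file operations that trigger a scan. The following schematic Python sketch models that mapping; it is an illustration, not the product's implementation:

```python
# Schematic mapping of the SMB protocol scan policies described above.
POLICY_TRIGGERS = {
    "OPEN":  {"open"},           # default: scan when a file is opened
    "CLOSE": {"close"},          # scan when a file is closed
    "BOTH":  {"open", "close"},  # scan on both operations
}

def triggers_scan(policy, operation):
    """Return True if the given file operation triggers a scan under the policy."""
    return operation in POLICY_TRIGGERS[policy]
```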
Defining exclusions
Exclusions specify files to be skipped during Antivirus scans. Excluding files can improve
performance, as files meeting the exclusion criteria are not scanned. You can exclude files based
on their file extension or size.
By default, when exclusions are set on a particular directory, all of its child directories inherit
those exclusions. You can override the inherited exclusions for a child directory by explicitly
setting exclusions on the child directory, or by using the No rule option to stop exclusion
inheritance at the child directory.
IMPORTANT: Exclusion by file extension is not supported for file objects stored under an HTTP StoreAll REST API share created in object mode. Even if the share resides on the file system where you created the exclusion, the exclusion does not apply to file objects under that share, because the HTTP StoreAll REST API object mode references file objects by hash names.
To configure exclusions by using the Management Console:
1. Select an appropriate AV-enabled file system from the list.
2. Click Exclusion on the AV Enable/Disable Filesystem panel.
3. On the Exclusion dialog box, specify the directory path where the exclusion is to be applied.
4. Click Display Rule to set the Rule information.
5. Select the appropriate type of rule:
• Inherited Rule/Remove Rule. Use this option to reset or remove exclusions that were explicitly set on the child directory. The child directory will then inherit exclusions from its parent directory. Also use this option to remove exclusions on the top-most directory where exclusion rules have been set.
• No rule. Use this option to remove or stop exclusions at the child directory. The child directory will no longer inherit the exclusions from its parent directory.
• Custom rule. Use this option to exclude files having specific file extensions or exceeding a specific size. If you specify multiple file extensions, use commas to separate the extensions. To exclude all types of files from scans, enter an asterisk (*) in the file extension field. You can specify either file extensions or a file size (or both).
On the CLI, use the following options to specify exclusions with the ibrix_avconfig command:
• -x FILE_EXTENSION — Excludes all files having the specified extension, such as .jpg. If you specify multiple extensions, use commas to separate the extensions.
• -s FILE_SIZE — Excludes all files larger than the specified size (in MB).
• -N — Does not exclude any files in the directory hierarchy.
Add an exclusion to a directory:
ibrix_avconfig -a -E -f FSNAME -P DIR_PATH {-N | [-x FILE_EXTENSION]
[-s FILE_SIZE]}
View exclusions on a specific directory:
ibrix_avconfig -l -E -f FSNAME -P DIR_PATH
Remove all exclusions from a directory:
ibrix_avconfig -r -E -f FSNAME -P DIR_PATH
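The inheritance behavior can be pictured as a walk from a directory up toward the file system root, stopping at the nearest directory with an explicitly set rule; a No rule entry stops inheritance. The following Python sketch models this; the rule storage format and the example paths are assumptions for illustration, not how StoreAll stores exclusions:

```python
import posixpath

def effective_rule(rules, directory):
    """Walk upward from `directory`; the nearest explicitly set rule wins."""
    d = directory
    while True:
        if d in rules:
            # "no_rule" stops inheritance; anything else is an exclusion rule.
            return None if rules[d] == "no_rule" else rules[d]
        parent = posixpath.dirname(d)
        if parent == d:          # reached the root without finding a rule
            return None
        d = parent

# Hypothetical configuration: /fs1/data excludes .jpg files and files over
# 100 MB; /fs1/data/raw opts out of inheriting that rule.
rules = {"/fs1/data": {"extensions": {".jpg"}, "max_mb": 100},
         "/fs1/data/raw": "no_rule"}
```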
Managing Antivirus scans
You can run an Antivirus scan at any time, and you can schedule periodic Antivirus scans of an
entire file system or directory. Multiple Antivirus scans can run in the cluster; however, you can run
only one scan task at a time on a specific AV-enabled file system. The Antivirus scan honors the
defined AV exclusion rules; any directories or files that meet the exclusion criteria are not scanned.
You can view the status of active and inactive Antivirus scan tasks, and you can stop, pause, or
resume active tasks.
Recommendations for Antivirus scans:
• Run Antivirus scans when the system is not being heavily used.
• Configure your Antivirus scans so that a single scan is not assigned a very large number of files in a subtree.
• Do not run Antivirus scans on many file systems at the same time, as the AV daemon has a resource limitation.
Keep in mind the following in regard to Antivirus scans:
• An Antivirus scan task lets you specify a file system or directory path under which all files are subjected to Antivirus scans. This differs from on-access scanning, where a scan is triggered when a file is accessed by an application, typically during an open or read operation. On-access scanning is done automatically by the kernel; the Antivirus scan feature instead lets you run or schedule periodic scans of an entire file system or directory at any time.
• Antivirus scans are independent of on-access scanning, and they can run in parallel.
• Antivirus scans are similar to on-access scans in that they honor the exclusion rules you have defined.
When you set up an Antivirus scan, you are asked to enter a scan duration. The maximum scan
duration that can be specified is 168 hours (7 days). If a duration is not provided for the scan,
all files in the given path are scanned without any timeout.
You can use the scheduling option to run Antivirus scans on multiple directories serially. For
example, assume you have five directories on which you want to run Antivirus scans in a particular
priority order. You could schedule an Antivirus scan to run on the first directory at a set time T,
with a maximum duration of 2 hours (the value in the Duration of Scans text box). Then schedule
the scan task on the next directory at T+2:15 (the extra 15 minutes allows the previous Antivirus
scan a few minutes for cleanup). Repeat these steps for the remaining three directories.
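The arithmetic behind the staggered schedule is simple; the sketch below lays out the five start times (the first start time is hypothetical):

```python
from datetime import datetime, timedelta

first_start = datetime(2013, 6, 1, 22, 0)   # hypothetical first scan at 22:00
slot = timedelta(hours=2, minutes=15)       # 2 h maximum duration + 15 min cleanup
starts = [first_start + i * slot for i in range(5)]
# Directory 1 starts at 22:00, directory 2 at 00:15, directory 3 at 02:30,
# directory 4 at 04:45, and directory 5 at 07:00.
```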
Starting or scheduling Antivirus scans
You can start Antivirus scans by using the Management Console or the CLI. Only the Management
Console provides the functionality for scheduling Antivirus scans.
Management Console
To start a scan or schedule periodic scans on the Management Console:
1. Select the file system to be scanned from the Filesystems panel.
2. Select Active Tasks > Antivirus Scan from the lower Navigator panel.
3. Click Start on the Antivirus Task Summary panel.
You can also click New on the Active Tasks panel and then select Antivirus Scan as the task type on the Starting a New Task dialog box.
4. Complete the Scan Settings tab on the New Antivirus Scan Task dialog box. Specify the directory path to be scanned and the maximum number of hours (optional) that the scan should run. At the end of that time, the scan is stopped and becomes an inactive task. You can view the scan statistics of an inactive task in the Inactive Tasks panel.
5. Antivirus scans can be scheduled or started immediately. If you click OK on the Scan tab without populating the Schedule tab, the scan starts immediately.
6. On the Schedule tab, click Schedule this task and then select the frequency (once, daily, weekly, monthly) and specify when the scan should run.
CLI
On the CLI, use the following command to start an Antivirus scan:
ibrix_avscan -s -f FSNAME -p PATH [-d DURATION]
The scan runs immediately.
Viewing, pausing, resuming, or stopping Antivirus scan tasks
Viewing an active task
To view an active scan task on a file system, select the file system on the Filesystems panel on the
Management Console, and then select Active Tasks from the lower Navigator. The Active Tasks panel
lists all active tasks on the cluster, including Antivirus tasks that are currently running or paused on
the selected file system. To see details for the Antivirus task, select it on the Active Tasks panel and
then select Active Tasks > Antivirus Scan from the lower Navigator. The Antivirus Task Summary
panel then shows current information for the scan.
Stopping or pausing an active task
Use the buttons on the Antivirus Task Summary panel to stop or pause a running task, or to resume
a paused task.
Viewing the results of an inactive task
To view inactive Antivirus scan tasks for a file system, select the file system on the Filesystems panel
and then select Inactive Tasks on the lower Navigator. The Inactive Tasks panel lists all inactive
tasks in the cluster, including the following types of Antivirus scan tasks:
• Scan tasks that have run to completion
• Scan tasks that stopped because the duration period expired
• Scan tasks that were stopped manually
For more information about an inactive task, select the task and click Details on the Inactive Tasks
panel. Inactive tasks cannot be restarted but can be deleted.
The Inodes scanned count indicates the files that were scanned by the Antivirus scan.
Inodes might be marked as skipped when an Antivirus scan task runs on a file system in which:
• AV becomes unavailable
• The file system or directory has exclusion rules set
• Files have already been scanned
• The file system has hot inodes
CLI commands for viewing, stopping, or pausing Antivirus scans
On the CLI, use the following commands to view, stop, pause, or resume Antivirus scans.
View a status summary of Antivirus scan tasks:
ibrix_avscan -l [-f FSLIST]
View detailed information about Antivirus scan tasks:
ibrix_avscan -i [-f FSLIST]
Stop the specified Antivirus scan task:
ibrix_avscan -k -t TASKID [-F]
Pause the specified Antivirus scan task:
ibrix_task -p -n TASKID
Resume the specified Antivirus scan task:
ibrix_task -r -n TASKID
Run the ibrix_avscan -l command to obtain the task ID.
Viewing Antivirus statistics
Antivirus statistics are accumulated whenever a scan is run. To view statistics, select Statistics from
the lower Navigator. Click Clear Stats to clear the current statistics and start accumulating them
again.
The CLI commands are:
View statistics from all cluster nodes:
ibrix_av -l -s
Delete statistics from all nodes:
ibrix_av -d -s
Antivirus quarantines and software snapshots
The quarantine utility has the following limitations when used with snap files.
Limitation 1: When the following sequence of events occurs:
• A virus file is created inside the snap root
• A snap is taken
• The original file is renamed or moved to another path
• The original file is read
The quarantine utility cannot locate the snap file because the link was formed with the new filename assigned after the snap was taken.
Limitation 2: When the following sequence of events occurs:
• A virus file is created inside the snap root
• A snap is taken
• The original file is renamed or moved to another path
• The snap file is read
The quarantine utility cannot track the original file because the link was not created with its name. That file cannot be listed, reset, moved, or deleted by the quarantine utility.
Limitation 3: When the following sequence of events occurs:
• A virus file is created inside the snap root
• The original file is read
• A snap is taken
• The original file is renamed or moved to another path
The quarantine utility displays both the snap name (which still has the original name) and the new filename, although they are the same file.
17 Creating StoreAll software snapshots
The StoreAll software snapshot feature allows you to capture a point-in-time copy of a file system
or directory for online backup purposes and to simplify recovery of files from accidental deletion.
Software snapshots can be taken of the entire file system or selected directories. Users can access
the file system or directory as it appeared at the instant of the snapshot.
NOTE: To accommodate software snapshots, the inode format was changed in the StoreAll 6.0
release. Consequently, files used for snapshots must either be created on StoreAll 6.0 or later, or
the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file
system, use the upgrade60.sh utility. For more information, see the HP StoreAll Storage CLI
Reference Guide.
Before taking snapshots of a file system or directory, you must enable the directory tree for snapshots.
An enabled directory tree is called a snap tree. You can then define a schedule for taking periodic
snapshots of the snap tree, and you can also take on-demand snapshots.
Users can access snapshots using NFS or SMB. All users with access rights to the root of the
snapshot directory tree can navigate, view, and copy all or part of a snapshot.
NOTE: Snapshots are read only and cannot be modified, moved, or renamed. However, they
can be copied.
NOTE: You can use either the software method or the block method to take snapshots on a file
system. Using both snapshot methods simultaneously on the same file system is not supported.
File system limits for snap trees and snapshots
A file system can have a maximum of 1024 snap trees. Each snap tree can have a maximum of
1024 snapshots.
Configuring snapshot directory trees and schedules
You can enable a directory tree for snapshots using either the Management Console or the CLI;
however, the Management Console must be used to configure a snapshot schedule.
On the Management Console, select Snapshots from the Navigator. The Snap Trees panel lists all
directory trees currently enabled for snapshots. The Schedule Details panel shows the snapshot
schedule for the selected directory tree.
To enable a directory tree for snapshots, click Add on the Snap Trees panel.
You can create a snapshot directory tree for an entire file system or a directory in that file system.
When entering the directory path, do not specify a directory that is a parent or child of another
snapshot directory tree. For example, if directory /dir1/dir2 is a snapshot directory tree, you
cannot create another snapshot directory tree at /dir1 or /dir1/dir2/dir3.
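The parent/child restriction amounts to a prefix check on directory paths. The following simplified Python sketch illustrates the rule; path normalization is minimal and this is not the product's validation code:

```python
def nests_with(existing_trees, candidate):
    """True if candidate is a parent, child, or duplicate of an existing snap tree."""
    c = candidate.rstrip("/") + "/"
    for tree in existing_trees:
        t = tree.rstrip("/") + "/"
        # A conflict exists if either path is a prefix of the other.
        if c.startswith(t) or t.startswith(c):
            return True
    return False

# With /dir1/dir2 already enabled, /dir1 (parent) and /dir1/dir2/dir3 (child)
# are rejected, while an unrelated sibling such as /dir1/dir3 is allowed.
```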
IMPORTANT: StoreAll reliably supports up to 1,024 snapshots.
The snapshot schedule can include any combination of hourly, daily, weekly, and monthly snapshots.
Also specify the number of snapshots to retain on the system. When that number is reached, the
oldest snapshot is deleted.
All weekly and monthly snapshots are taken at the same time of day. The default time is 9 pm. To
change the time, click the time shown on the dialog box, and then select a new time on the Modify
Weekly/Monthly Snapshot Creation Time dialog box.
To enable a directory tree for snapshots using the CLI, run the following command:
ibrix_snap -m -f FSNAME -P SNAPTREEPATH
SNAPTREEPATH is the full directory pathname, starting at the root of the file system. For example:
ibrix_snap -m -f ifs1 -P /ifs1/dir1/dir2
IMPORTANT: A snapshot reclamation task is required for each file system containing snap trees
that have scheduled snapshots. If a snapshot reclamation task does not already exist, you will need
to configure the task. See “Reclaiming file system space previously used for snapshots” (page 245).
Modifying a snapshot schedule
You can change the snapshot schedule at any time. On the Snap Trees panel, select the appropriate
snap tree, select Modify, and make your changes on the Modify Snap Tree dialog box.
Managing software snapshots
To view the snapshots for a specific directory tree, select the appropriate directory tree on the Snap
Trees panel, and then select Snapshots from the lower Navigator. The Snapshots panel lists snapshots
for the directory tree and allows you to take a new snapshot or delete an existing snapshot. Use
the filter at the bottom of the panel to select the snapshots you want to view.
The following CLI commands display information about snapshots and snapshot directory trees:
• List all snapshots, or only the snapshots on a specific file system or snapshot directory tree:
ibrix_snap -l -s [-f FSNAME [-P SnapTreePath]]
• List all snapshot directory trees, or only the snapshot directory trees on a specific file system:
ibrix_snap -l [-f FSNAME]
Taking an on-demand snapshot
To take an on-demand snapshot of a directory tree, select the directory tree on the Snap Trees
panel and then click Create on the List of Snapshots panel.
To take a snapshot from the CLI, use the following command:
ibrix_snap -c -f FSNAME -P SNAPTREEPATH -n NAMEPATTERN
SNAPTREEPATH is the full directory path starting from the root of the file system. The name that
you specify is appended to the date of the snapshot. The following words cannot be used in the
name, as they are reserved for scheduled snapshots:
Hourly
Daily
Weekly
Monthly
You will need to manually delete on-demand snapshots when they are no longer needed.
Determining space used by snapshots
Space used by snapshots counts towards the used capacity of the file system and towards user
quotas. Standard file system space reporting utilities work as follows:
• The ls and du commands report the size of a file depending on the version you are viewing. If you are looking at a snapshot, the commands report the size of the file when it was snapped. If you are looking at the current version, the commands report the current size.
• The df command reports the total space used in the file system by files and snapshots.
Accessing snapshot directories
Snapshots are stored in a read-only directory named .snapshot located under the directory tree.
For example, snapshots for directory tree /ibfs1/users are stored in the /ibfs1/users/
.snapshot directory. Each snapshot is a separate directory beneath the .snapshot directory.
Snapshots are named using the ISO 8601 date and time format, plus a custom value. For example,
a snapshot created on June 1, 2011 at 9am will be named 2011-06-01T090000_<name>. For
snapshots created automatically, <name> is hourly, daily, weekly, or monthly, depending
on the snapshot schedule. If you create a snapshot on-demand, you can specify the <name>.
The following example lists snapshots created on an hourly schedule for snap tree /ibfs1/users.
Using ISO 8601 naming ensures that the snapshot directories are listed in order according to the
time they were taken.
[root@9000n1 ~]# cd /ibfs1/users/.snapshot/
[root@9000n1 .snapshot]# ls
2011-06-01T110000_hourly 2011-06-01T190000_hourly
2011-06-01T120000_hourly 2011-06-01T200000_hourly
2011-06-01T130000_hourly 2011-06-01T210000_hourly
2011-06-01T140000_hourly 2011-06-01T220000_hourly
2011-06-01T150000_hourly 2011-06-01T230000_hourly
2011-06-01T160000_hourly 2011-06-02T000000_hourly
2011-06-01T170000_hourly 2011-06-02T010000_hourly
2011-06-01T180000_hourly 2011-06-02T020000_hourly
2011-06-02T030000_hourly
2011-06-02T040000_hourly
2011-06-02T050000_hourly
2011-06-02T060000_hourly
2011-06-02T070000_hourly
2011-06-02T080000_hourly
2011-06-02T090000_hourly
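The ordering property is easy to verify: because each name begins with a fixed-width ISO 8601 timestamp, a plain lexicographic sort of the directory names matches chronological order, as this Python sketch shows:

```python
from datetime import datetime

# A few snapshot directory names in arbitrary order.
names = ["2011-06-02T020000_hourly", "2011-06-01T110000_hourly",
         "2011-06-01T230000_hourly"]

by_name = sorted(names)   # plain lexicographic sort
by_time = sorted(names, key=lambda n: datetime.strptime(n[:17], "%Y-%m-%dT%H%M%S"))
# by_name == by_time: lexicographic order equals chronological order.
```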
Users having access to the root of the snapshot directory tree (in this example, /ibfs1/users/)
can navigate the /ibfs1/users/.snapshot directory, view snapshots, and copy all or part
of a snapshot. If necessary, users can copy a snapshot and overlay the present copy to achieve
manual rollback.
NOTE: Access to .snapshot directories is limited to administrators and NFS and SMB users.
Accessing snapshots using NFS
Access over NFS is similar to local StoreAll access except that the mount point will probably be
different. In this example, NFS export /ibfs1/users is mounted as /users1 on an NFS client.
[root@rhel5vm1 ~]# cd /users1/.snapshot
[root@rhel5vm1 .snapshot]# ls
2011-06-01T110000_hourly 2011-06-01T150000_hourly
2011-06-01T120000_hourly 2011-06-01T160000_hourly
2011-06-01T130000_hourly 2011-06-01T170000_hourly
2011-06-01T140000_hourly 2011-06-01T180000_hourly
2011-06-01T190000_hourly
2011-06-01T200000_hourly
Accessing snapshots using SMB
Over SMB, Windows users can use Explorer to navigate to the .snapshot folder and view files.
In the following example, /ibfs1/users/ is mapped to the Y drive on a Windows system.
Restoring files from snapshots
Users can restore files from snapshots by navigating to the appropriate snapshot directory and
copying the file or files to be restored, assuming they have the appropriate permissions on those
files. If a large number of files need to be restored, you may want to use Run Once remote replication
to copy files from the snapshot directory to a local or remote directory (see “Starting a replication
task ” (page 187)).
Deleting snapshots
Scheduled snapshots are deleted automatically according to the retention schedule specified for
the snap tree; however, you can delete a snapshot manually if necessary. You must also delete
on-demand snapshots manually. Deleting a snapshot does not free the file system space that
was used by the snapshot; you must also reclaim the space.
IMPORTANT: Before deleting a directory that contains snapshots, take these steps:
• Delete the snapshots (use ibrix_snap).
• Reclaim the file system space used by the snapshots (use ibrix_snapreclamation).
• Remove snapshot authorization for the snap tree (use ibrix_snap).
Deleting a snapshot manually
To delete a snapshot from the Management Console, select the appropriate snapshot on the List
of Snapshots panel, click Delete, and confirm the operation.
To delete the snapshot from the CLI, use the following command:
ibrix_snap -d -f FSNAME -P SNAPTREEPATH -n SNAPSHOTNAME
If you are unsure of the name of the snapshot, use the following command to locate the snapshot:
ibrix_snap -l -s [-f FSNAME] [-P SNAPTREEPATH]
Reclaiming file system space previously used for snapshots
Snapshot reclamation tasks are used to reclaim file system space previously used by snapshots
that have been deleted.
IMPORTANT: A snapshot reclamation task is required for each file system containing snap trees
that have scheduled snapshots.
Using the Management Console, you can schedule a snapshot reclamation task to run at a specific
time on a recurring basis. The reclamation task runs on an entire file system, not on a specific
snapshot directory tree within that file system. If a file system includes two snapshot directory trees,
space is reclaimed in both snapshot directory trees.
To start a new snapshot reclamation task, select the appropriate file system from the Filesystems
panel and then select Active Tasks > Snapshot Space Reclamation from the lower Navigator.
Select New on the Task Summary panel to open the New Snapshot Space Reclamation Task dialog
box.
On the General tab, select a reclamation strategy:
• Maximum Space Reclaimed. The reclamation task recovers all snapped space eligible for recovery. It takes longer and uses more system resources than Maximum Speed. This is the default.
• Maximum Speed of Task. The reclamation task reclaims only the most easily recoverable snapped space. This strategy reduces the amount of runtime required by the reclamation task, but leaves some space potentially unrecovered (that space is still eligible for later reclamation). You cannot create a schedule for this type of reclamation task.
If you are using the Maximum Space Reclaimed strategy, you can schedule the task to run
periodically. On the Schedule tab, click Schedule this task and select the frequency and time to
run the task.
To stop a running reclamation task, click Stop on the Task Summary panel.
Managing reclamation tasks from the CLI
To start a reclamation task from the CLI, use the following command:
ibrix_snapreclamation -r -f FSNAME [-s {maxspeed | maxspace}] [-v]
The reclamation task runs immediately; you cannot create a recurring schedule for it.
To stop a reclamation task, use the following command:
ibrix_snapreclamation -k -t TASKID [-F]
The following command shows summary status information for all reclamation tasks or only the tasks
on the specified file systems:
ibrix_snapreclamation -l [-f FSLIST]
The following command provides detailed status information:
ibrix_snapreclamation -i [-f FSLIST]
Removing snapshot authorization for a snap tree
Before removing snapshot authorization from a snap tree, you must delete all snapshots in the snap
tree and reclaim the space previously used by the snapshots. Complete the following steps:
1. Disable any schedules on the snap tree. Select the snap tree on the Snap Trees panel, select
Modify, and remove the Frequency settings on the Modify Snap Tree dialog box.
2. Delete the existing snapshots of the snap tree. See “Deleting snapshots” (page 244)
3. Reclaim the space used by the snapshots. See “Reclaiming file system space previously used
for snapshots” (page 245).
4. Delete the snap tree. On the Snap Trees panel, select the appropriate snap tree, click Delete,
and confirm the operation.
To disable snapshots on a directory tree using the CLI, run the following command:
ibrix_snap -m -U -f FSNAME -P SnapTreePath
Moving files between snap trees
Files created in, copied to, or moved into a snap tree directory can be moved to any other snap
tree or non-snap tree directory on the same file system, provided they have not been snapped.
After a snapshot is taken and the files have become part of that snapshot, they cannot be moved
to any other snap tree or directory on the same file system. However, the files can be moved to
any snap tree or directory on a different file system.
Backing up snapshots
Snapshots are stored in a .snapshot directory under the directory tree. For example:
# ls -alR /fs2/dir.tst
/fs2/dir.tst:
drwxr-xr-x 4 root root 4096 Feb 8 09:11 dir.dir
-rwxr-xr-x 1 root root 99999999 Jan 31 09:33 file.0
-rwxr-xr-x 1 root root 99999999 Jan 31 09:33 file.1
drwxr-xr-x 2 root root 4096 Apr 6 15:55 .snapshot
/fs2/dir.tst/.snapshot:
lrwxrwxrwx 1 root root 15 Apr 6 15:39 2011-04-06T15:39:57_ -> ../.@1302118797
lrwxrwxrwx 1 root root 15 Apr 6 15:55 2011-04-06T15:55:07_tst1 -> ../.@1302119707
/fs2/dir.tst/dir.dir:
-rwxr-xr-x 1 root root 99999999 Jan 31 09:34 file.1
NOTE: The links beginning with .@ are used internally by the snapshot software and cannot be
accessed.
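The internal link targets above pair each human-readable snapshot name with a `.@<number>` suffix. As a hedged sketch (assuming the suffix is a Unix epoch timestamp and the visible name encodes the local time, which matches the EDT timestamps in the listing), the name can be derived from the link target:

```python
from datetime import datetime, timedelta, timezone

def snapshot_name_from_link(target, utc_offset_hours, label=""):
    """Derive a snapshot directory name from an internal ../.@<epoch>
    link target. Assumption: the suffix is a Unix epoch timestamp and
    the visible name uses local time (EDT = UTC-4 in the listing)."""
    epoch = int(target.rsplit(".@", 1)[1])
    tz = timezone(timedelta(hours=utc_offset_hours))
    local = datetime.fromtimestamp(epoch, tz)
    return local.strftime("%Y-%m-%dT%H:%M:%S_") + label

# The two links shown in the listing above:
print(snapshot_name_from_link("../.@1302118797", -4))          # 2011-04-06T15:39:57_
print(snapshot_name_from_link("../.@1302119707", -4, "tst1"))  # 2011-04-06T15:55:07_tst1
```

The helper name and the fixed UTC offset are illustrative only; the snapshot software manages these links internally.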
To back up the snapshots, use the procedure corresponding to your backup method.
Backups using NDMP
By default, NDMP does not back up the .snapshot directory. For example, if you specify a
backup of the /fs2/snapdir directory, NDMP backs up the directory but excludes /fs2/
snapdir/.snapshot and its contents.
To back up the snapshot of the directory, specify the path /fs2/snapdir/.snapshot/
2011-04-06T15:55:07_tst1. You can then use the snapshot (a point-in-time copy) to restore
its associated directory. For example, use /fs2/snapdir/.snapshot/
2011-04-06T15:55:07_tst1 to restore /fs2/snapdir.
Backups without NDMP
DMA applications cannot back up a snapshot directory tree using a path such as /fs2/snapdir/
.snapshot/time-stamp-name. Instead, mount the snapshot with the bind and read-only options
(-o bind,ro), and then back up the mount point. For example, using a mount point such as
/newmount, use the following command to mount the snapshot:
mount -t ibrix -o bind,ro /fs2/snapdir/.snapshot/time-stamp-name /newmount
Then configure the DMA to back up /newmount.
Backups with the tar utility
The tar -h option (follow symbolic links) enables tar to copy snapshots. For example, the following
command archives the /snapfs1/test3 directory contents associated with the point-in-time
snapshot (snap.tar is an example archive name; note that f must immediately precede the archive
name):
tar -cvhf snap.tar /snapfs1/test3/.snapshot/2011-07-01T044500_hourly
18 Creating block snapshots
The block snapshot feature allows you to capture a point-in-time copy of a file system for online
backup purposes and to simplify recovery of files from accidental deletion. The snapshot replicates
all file system entities at the time of capture and is managed exactly like any other file system.
NOTE: You can use either the software method or the block method to take snapshots on a file
system. Using both snapshot methods simultaneously on the same file system is not supported.
The block snapshot feature is supported as follows:
• HP 9320 Storage: supported on the HP P2000 G3 MSA Array System or HP 2000 Modular
Smart Array G2 provided with the platform.
• HP 9300 Storage Gateway: supported on the HP P2000 G3 MSA Array System; HP 2000
Modular Smart Array G2; HP P4000 G2 models; HP 3PAR F200, F400, T400, and T800
Storage Systems (OS version 2.3.1 (MU3)); and Dell EqualLogic storage arrays (no arrays
are provided with the 9300 system).
• HP X9720/9730 Storage: not supported.
The block snapshot feature uses the copy-on-write method to preserve the snapshot regardless of
changes to the origin file system. Initially, the snapshot points to all blocks that the origin file system
is using (B in the following diagram). When a block in the origin file system is overwritten with
additions, edits, or deletions, the original block (prior to changes) is copied to the snapshot store,
and the snapshot points to the copied block (C in the following diagram). The snapshot continues
to point to the origin file system contents from the point in time that the snapshot was executed.
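The copy-on-write behavior described above can be sketched as a toy in-memory model. This is an illustration of the general technique, not StoreAll internals; the class and block layout are hypothetical:

```python
class CowSnapshot:
    """Toy copy-on-write model: the snapshot initially shares every
    block with the origin; a block is copied into the snapshot store
    only when the origin overwrites it."""
    def __init__(self, origin):
        self.origin = origin   # dict: block number -> data (shared)
        self.store = {}        # blocks preserved at overwrite time

    def write_origin(self, block, data):
        # Preserve the original contents before the first overwrite.
        if block in self.origin and block not in self.store:
            self.store[block] = self.origin[block]
        self.origin[block] = data

    def read_snapshot(self, block):
        # Copied blocks come from the store; untouched ones from origin.
        return self.store.get(block, self.origin.get(block))

fs = {0: "A", 1: "B"}
snap = CowSnapshot(fs)
snap.write_origin(1, "B2")            # the origin changes...
print(fs[1], snap.read_snapshot(1))   # B2 B -> snapshot still sees "B"
```

Reads of unmodified blocks go to the origin file system, so a snapshot consumes space only in proportion to how much the origin changes.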
To create a block snapshot, first provision or register the snapshot store. You can then create a
snapshot from type-specific storage resources. The snapshot is active from the moment it is created.
You can take snapshots via the StoreAll software block snapshot scheduler or manually, whenever
necessary. Each snapshot maintains its origin file system contents until deleted from the system.
Snapshots can be made visible to users, allowing them to access and restore files (based on
permissions) from the available snapshots.
NOTE: By default, snapshots are read only. HP recommends that you do not allow writes to any
snapshots.
Setting up snapshots
This section describes how to configure the cluster to take snapshots.
Preparing the snapshot partition
The block snapshot feature does not require any custom settings for the partition. However, HP
recommends that you provide sufficient storage capacity to support the snapshot partition.
NOTE: If the snapshot store is too small, the snapshot will eventually exceed the available space
(unless you detect this and manually increase storage). If this situation occurs, the array software
deletes the snapshot resources and the StoreAll software snapshot feature invalidates the snapshot
file system.
Although you can monitor the snapshot and manually increase the snapshot store as needed, the
safest policy is to initially provision enough space to last for the expected lifetime of the snapshot.
The optimum size of the snapshot store depends on usage patterns in the origin file system and
the length of time you expect the snapshot to be active. Typically, a period of trial and error is
required to determine the optimum size.
See the array documentation for procedures regarding partitioning and allocating storage for file
system snapshots.
Registering for snapshots
After setting up the snapshot partition, you can register the partition with the Fusion Manager. You
will need to provide a name for the storage location and specify access parameters (IP address,
user name, and password).
The following command registers and names the array’s snapshot partition on the Fusion Manager.
The partition is then recognized as a repository for snapshots.
ibrix_vs -r -n STORAGENAME -t { msa | lefthand | 3PAR | eqlogic} -I IP(s) -U USERNAME
[-P PASSWORD]
To remove the registration information from the configuration database, use the following command.
The partition will then no longer be recognized as a repository for snapshots.
ibrix_vs -d -n STORAGENAME
Discovering LUNs in the array
After the array is registered, use the -a option to map the physical storage elements in the array
to the logical representations used by StoreAll software. The software can then manage the
movement of data blocks to the appropriate snapshot locations on the array.
Use the following command to map the storage information for the specified array:
ibrix_vs -a [-n STORAGENAME]
Reviewing snapshot storage allocation
Use the following command to list all of the array storage that is registered for snapshot use:
ibrix_vs -l
To see detailed information for named snapshot partitions on either a specific array or all arrays,
use the following command:
ibrix_vs -i [-n STORAGENAME]
Automated block snapshots
If you plan to take a snapshot of a file system on a regular basis, you can automate the snapshots.
To do this, first define an automated snapshot scheme, and then apply the scheme to the file system
and create a schedule.
A snapshot scheme specifies the number of snapshots to keep and the number of snapshots to
mount. You can create a snapshot scheme from either the Management Console or the CLI.
The type of storage array determines the maximum number of snapshots you can keep and mount
per file system.
Array                                  Maximum number of snapshots to keep  Maximum number of snapshots to mount
P2000 G3 MSA System/MSA2000 G2 array   32 snapshots per file system         7 snapshots per file system
EqualLogic array                       8 snapshots per file system          7 snapshots per file system
P4000 G2 storage system                32 snapshots per file system         7 snapshots per file system
3PAR storage system                    32 snapshots per file system         7 snapshots per file system
For the P2000 G3 MSA System/MSA2000, the storage array itself also limits the total number of
snapshots that can be stored. Arrays count the number of LUNs involved in each snapshot. For
example, if a file system has four LUNs, taking two snapshots of the file system increases the total
snapshot LUN count by eight. If a new snapshot will cause the snapshot LUN count limit to be
exceeded, an error will be reported, even though the file system limits may not be reached. The
snapshot LUN count limit on P2000 G3 MSA System/MSA2000 arrays is 255.
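The LUN accounting described above is simple multiplication, sketched here (the function name is illustrative; only the 4-LUN example and the 255 limit come from the text):

```python
def snapshot_lun_count(luns_per_fs, snapshots):
    """Each snapshot of a file system adds one snapshot LUN per
    underlying LUN, so the array-wide count grows multiplicatively."""
    return luns_per_fs * snapshots

ARRAY_LIMIT = 255  # P2000 G3 MSA System/MSA2000 snapshot LUN limit

# Example from the text: a 4-LUN file system, two snapshots.
used = snapshot_lun_count(4, 2)
print(used, used <= ARRAY_LIMIT)   # 8 True

# For a 4-LUN file system, the array-wide limit is reached after
# floor(255 / 4) snapshots, well before any other constraint.
print(ARRAY_LIMIT // 4)            # 63
```

This is why snapshot creation can fail with an array error even when the per-file-system snapshot limits have not been reached.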
The 3PAR storage system allows you to make a maximum of 500 virtual copies of a base volume.
Up to 256 virtual copies can be read/write copies.
Creating automated snapshots using the Management Console
Select the file system where the snapshots will be taken, and then select Block Snapshots from the
lower Navigator. On the Block Snapshots panel, click New to display the Create Snapshot dialog
box. On the General tab, select Recurring as the Snapshot Type.
Under Snapshot Configuration, select New to create a new snapshot scheme. The Create Snapshot
Scheme dialog box appears.
On the General tab, enter a name for the strategy and then specify the number of snapshots to
keep and mount on a daily, weekly, and monthly basis. Keep in mind the maximums allowed for
your array type.
Daily means that one snapshot is kept per day for the specified number of days. For example, if
you enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day.
On the 7th day, the oldest snapshot is deleted. Similarly, Weekly specifies the number of weeks
that snapshots are retained, and Monthly specifies the number of months that snapshots are retained.
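The daily retention behavior can be sketched as a pruning loop. This is a minimal model of the documented behavior (keep N daily snapshots; on day N+1 the oldest is deleted), with hypothetical names:

```python
def prune(snapshots, keep):
    """Keep only the newest `keep` snapshots; return (kept, deleted).
    `snapshots` is a list of (day, name) tuples, oldest first."""
    deleted = snapshots[:-keep] if len(snapshots) > keep else []
    return snapshots[len(deleted):], deleted

# Daily count of 6: one snapshot per day, pruned after each new one.
snaps = []
for day in range(1, 8):                  # days 1..7
    snaps.append((day, f"snap_day{day}"))
    snaps, dropped = prune(snaps, keep=6)

print(len(snaps), dropped)               # 6 [(1, 'snap_day1')]
```

On day 7 the day-1 snapshot is the first to be deleted; the weekly and monthly counts work the same way over longer windows.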
On the Advanced tab, you can create templates for naming the snapshots and mountpoints. This
step is optional.
For either template, enter one or more of the following variables. The variables must be enclosed
in braces ({ }) and separated by underscores (_). The template can also include text strings. When
a snapshot is created using the templates, the variables are replaced with the following values.
Variable   Value
fsname     File system name
shortdate  yyyy_mm_dd
fulldate   yyyy_mm_dd_HHmmz + GMT
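Template expansion can be sketched with Python string formatting. The exact timestamp rendering (underscores, a trailing z for the GMT-based time) is an assumption inferred from the documented patterns:

```python
from datetime import datetime, timezone

def expand(template, fsname, now=None):
    """Expand {fsname}, {shortdate}, and {fulldate} in a snapshot name
    or mountpoint template. Timestamp layout is assumed from the
    documented yyyy_mm_dd and yyyy_mm_dd_HHmmz patterns."""
    now = now or datetime.now(timezone.utc)
    return template.format(
        fsname=fsname,
        shortdate=now.strftime("%Y_%m_%d"),
        fulldate=now.strftime("%Y_%m_%d_%H%Mz"),  # GMT-based timestamp
    )

when = datetime(2013, 6, 1, 14, 30, tzinfo=timezone.utc)
print(expand("{fsname}_snap_{fulldate}", "ifs1", when))
# ifs1_snap_2013_06_01_1430z
print(expand("snap_{shortdate}_{fsname}", "ifs1", when))
# snap_2013_06_01_ifs1
```

Any literal text in the template passes through unchanged, so fixed prefixes such as `snap_` can be combined freely with the variables.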
When you have completed the scheme, it appears in the list of snapshot schemes on the Create
Snapshot dialog box. To create a snapshot schedule using this scheme, select it on the Create
Snapshot dialog box and go to the Schedule tab. Click Schedule this task, set the frequency of the
snapshots, and schedule when they should occur. You can also set start and end dates for the
schedule. When you click OK, the snapshot scheduler will begin taking snapshots according to
the specified snapshot strategy and schedule.
Creating an automated snapshot scheme from the CLI
You can create an automated snapshot scheme with the ibrix_vs_snap_strategy command.
However, you will need to use the Management Console to create a snapshot schedule.
To define a snapshot scheme, execute the ibrix_vs_snap_strategy command with the -c
option:
ibrix_vs_snap_strategy -c -n NAME -k KEEP -m MOUNT [-N NAMESPEC] [-M MOUNTSPEC]
The options are:
-n NAME
The name for the snapshot scheme.
-k KEEP
The number of snapshots to keep per file system. For the P2000 G3 MSA System/MSA2000
G2 array, P4000 G2 storage systems, and 3PAR storage systems, the maximum is 32 snapshots
per file system. For Dell EqualLogic arrays, the maximum is eight snapshots per file
system.
Enter the number of days, weeks, and months to retain snapshots. The numbers must be separated
by commas, such as -k 2,7,28.
NOTE: One snapshot is kept per day for the specified number of days. For example, if you
enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day.
On the 7th day, the oldest snapshot is deleted. Similarly, the weekly count specifies the number
of weeks that snapshots are retained, and the monthly count specifies the number of months that
snapshots are retained.
-m MOUNT
The number of snapshots to mount per file system. The maximum number of snapshots is 7 per
file system.
Enter the number of snapshots to mount per day, week, and month. The numbers must be
separated by commas, such as -m 2,2,3. The sum of the numbers must be less than or equal
to 7.
-N NAMESPEC
Snapshot name template. The template specifies a scheme for creating unique names for the
snapshots. Use the variables listed below for the template.
-M MOUNTSPEC
Snapshot mountpoint template. The template specifies a scheme for creating unique mountpoints
for the snapshots. Use the variables listed below for the template.
Variables for snapshot name and mountpoint templates:
Variable   Value
fulldate   yyyy_mm_dd_HHmmz + GMT
shortdate  yyyy_mm_dd
fsname     File system name
You can specify one or more of these variables, enclosed in braces ({ }) and separated by
underscores (_). The template can also include text strings. Two sample templates follow. When a
snapshot is created using one of these templates, the variables will be replaced with the values
listed above.
{fsname}_snap_{fulldate}
snap_{shortdate}_{fsname}
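The -k and -m count specs above can be parsed and checked with a small sketch (the function names are illustrative; the only documented constraints are the three comma-separated counts and the mount total of 7 or fewer):

```python
def parse_spec(spec, what):
    """Parse a daily,weekly,monthly count spec such as '2,7,28'."""
    parts = [int(p) for p in spec.split(",")]
    if len(parts) != 3:
        raise ValueError(f"{what} needs daily,weekly,monthly counts")
    return parts

def validate(keep_spec, mount_spec, max_mounted=7):
    keep = parse_spec(keep_spec, "-k")
    mount = parse_spec(mount_spec, "-m")
    # The sum of mounted snapshots may not exceed the per-file-system
    # mount limit (7 for the arrays listed above).
    if sum(mount) > max_mounted:
        raise ValueError(f"-m total {sum(mount)} exceeds {max_mounted}")
    return keep, mount

print(validate("2,7,28", "2,2,3"))   # ([2, 7, 28], [2, 2, 3])
```

A spec such as `-m 3,3,3` would be rejected because its total (9) exceeds the limit of 7.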
Other automated snapshot procedures
Use the following procedures to manage automated snapshots.
Modifying an automated snapshot scheme
A snapshot scheme can be modified only from the CLI. Use the following command:
ibrix_vs_snap_strategy -e -n NAME -k KEEP -m MOUNT [-N NAMESPEC] [-M MOUNTSPEC]
Viewing automated snapshot schemes
On the Management Console, you can view snapshot schemes on the Create Snapshot dialog
box. Select Recurring as the Snapshot Type, and then select a snapshot scheme. A description of
that scheme will be displayed.
To view all automated snapshot schemes or all schemes of a specific type using the CLI, execute
the following command:
ibrix_vs_snap_strategy -l [-T TYPE]
To see details about a specific automated snapshot scheme, use the following command:
ibrix_vs_snap_strategy -i -n NAME
Deleting an automated snapshot scheme
A snapshot scheme can be deleted only from the CLI. Use the following command:
ibrix_vs_snap_strategy -d -n NAME
Managing block snapshots
This section describes how to manage individual snapshots.
Creating an on-demand snapshot
To take an on-demand snapshot from the Management Console, select the file system where the
snapshot will be taken, and then select Block Snapshots from the lower Navigator. On the Block
Snapshots panel, click New to display the Create Snapshot dialog box. On the General tab, select
Once as the Snapshot Type and click OK.
Use the following command to create a snapshot from the CLI:
ibrix_vs_snap -c -n SNAPFSNAME -f ORIGINFSNAME
For example, to create a snapshot named ifs1_snap for file system ifs1:
ibrix_vs_snap -c -n ifs1_snap -f ifs1
Mounting or unmounting a snapshot
On the Management Console, select Block Snapshots from the lower Navigator, select the snapshot
on the Block Snapshots panel, and click Mount or Unmount.
Include the -M option to the create command to automatically mount the snapshot file system after
creating it. This makes the snapshot visible to authorized users. HP recommends that you do not
allow writes to any snapshot file system.
ibrix_vs_snap -c -M -n SNAPFSNAME -f ORIGINFSNAME
For example, to create and mount a snapshot named ifs1_snap for file system ifs1:
ibrix_vs_snap -c -M -n ifs1_snap -f ifs1
Recovering system resources on snapshot failure
If a snapshot encounters insufficient resources when attempting to update its contents due to changes
in the origin file system, the snapshot fails and is marked invalid. Data is no longer accessible in
the snapshot. To clean up records in the configuration database for an invalid snapshot, use the
following command from the CLI:
ibrix_vs_snap -r -f SNAPFSLIST
For example, to clean up database records for a failed snapshot named ifs1_snap:
ibrix_vs_snap -r -f ifs1_snap
On the Management Console, select the snapshot on the Block Snapshots panel and click Cleanup.
Deleting snapshots
Delete snapshots to free up resources when the snapshot is no longer needed or to create a new
snapshot when you have already created the maximum allowed for your storage system.
On the Management Console, select the snapshot on the Block Snapshots panel and click Delete.
On the CLI, use the following command:
ibrix_vs_snap -d -f SNAPFSLIST
For example, to delete snapshots ifs0_snap and ifs1_snap:
ibrix_vs_snap -d -f ifs0_snap,ifs1_snap
Viewing snapshot information
Use the following commands to view snapshot information from the CLI.
Listing snapshot information for all hosts
The ibrix_vs_snap -l command displays snapshot information for all hosts. Sample output
follows:
ibrix_vs_snap -l
NAME   NUM_SEGS  MOUNTED?  GEN  TYPE  CREATETIME
-----  --------  --------  ---  ----  ---------------------------
snap1  3         No        6    msa   Wed Oct 7 15:09:50 EDT 2009
The following table lists the output fields for ibrix_vs_snap -l.
Field       Description
NAME        Snapshot name.
NUM_SEGS    Number of segments in the snapshot.
MOUNTED?    Snapshot mount state.
GEN         Number of times the snapshot configuration has been changed in the configuration database.
TYPE        Snapshot type, based on the underlying storage system.
CREATETIME  Creation timestamp.
Listing detailed information about snapshots
Use the ibrix_vs_snap -i command to monitor the status of active snapshots. You can use
the command to ensure that the associated snapshot stores are not full.
ibrix_vs_snap -i
To list information about snapshots of specific file systems, use the following command:
ibrix_vs_snap -i [-f SNAPFSLIST]
The ibrix_vs_snap -i command lists the same information as ibrix_fs -i, plus information
fields specific to snapshots. Include the -f SNAPFSLIST argument to restrict the output to specific
snapshot file systems.
The following example shows only the snapshot-specific fields. To view an example of the common
fields, see “Viewing file system information” (page 39).
SEGMENT  OWNER     LV_NAME                STATE            BLOCK_SIZE  CAPACITY(GB)  FREE(GB)  AVAIL(GB)  FILES  FFREE  USED%  BACKUP  TYPE   TIER  LAST_REPORTED
-------  --------  ---------------------  ---------------  ----------  ------------  --------  ---------  -----  -----  -----  ------  -----  ----  -------------------------
1        ib50-243  ilv11_msa_snap9__snap  OK, SnapUsed=4%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
2        ib50-243  ilv12_msa_snap9__snap  OK, SnapUsed=6%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
3        ib50-243  ilv13_msa_snap9__snap  OK, SnapUsed=6%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
4        ib50-243  ilv14_msa_snap9__snap  OK, SnapUsed=8%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
5        ib50-243  ilv15_msa_snap9__snap  OK, SnapUsed=6%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
6        ib50-243  ilv16_msa_snap9__snap  OK, SnapUsed=5%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
NOTE: For P4000 G2 storage systems, the state is reported as OK, but the SnapUsed field
always reports 0%.
The following table lists the output fields for ibrix_vs_snap -i.
Field          Description
SEGMENT        Snapshot segment number.
OWNER          The file serving node that owns the snapshot segment.
LV_NAME        Logical volume.
STATE          State of the snapshot.
BLOCK_SIZE     Block size used for the snapshot.
CAPACITY (GB)  Size of this snapshot file system, in GB.
FREE (GB)      Free space on this snapshot file system, in GB.
AVAIL (GB)     Space available for user files, in GB.
FILES          Number of files that can be created in this snapshot file system.
FFREE          Number of unused file inodes available in this snapshot file system.
USED%          Percentage of total storage occupied by user files.
BACKUP         Backup host name.
TYPE           Segment type. Mixed means the segment can contain both directories and files.
TIER           Tier to which the segment was assigned.
LAST_REPORTED  Last time the segment state was reported.
Accessing snapshot file systems
By default, snapshot file systems are mounted in two locations on the file serving nodes:
• /<snapshot_name>
• /<original_file_system>/.<snapshot_name>
For example, if you take a snapshot of the fs1 file system and name the snapshot fs1_snap1,
it will be mounted at /fs1_snap1 and at /fs1/.fs1_snap1.
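The two default mount locations follow directly from the file system and snapshot names, as this trivial sketch shows (the helper name is hypothetical):

```python
def snapshot_mountpoints(fsname, snapname):
    """Default block snapshot mount locations, per the convention
    described above: /<snapshot_name> and
    /<original_file_system>/.<snapshot_name>."""
    return [f"/{snapname}", f"/{fsname}/.{snapname}"]

print(snapshot_mountpoints("fs1", "fs1_snap1"))
# ['/fs1_snap1', '/fs1/.fs1_snap1']
```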
The StoreAll clients must mount the snapshot file system (/<snapshot_name>) to access the
contents of the snapshot.
NFS and SMB clients can access the contents of the snapshot through the original file system (such
as /fs1/.fs1_snap1) or they can mount the snapshot file system (in this example, /fs1_snap1).
The following window shows an NFS client browsing the snapshot file system .fs1_snap2 in the
fs1_nfs file system.
The next window shows an SMB client accessing the snapshot file system .fs1_snap1. The
original file system is mapped to drive X.
Troubleshooting block snapshots
Snapshot reserve is full and the MSA2000 is deleting snapshot volumes
When the snapshot reserve is full, the MSA2000 will delete snapshot volumes on the storage array,
leaving the device entries on the file serving nodes. To correct this situation, take the following
steps:
1. Stop I/O or any applications that are reading or writing to the snapshot file systems.
2. Log on to the active Fusion Manager.
3. Unmount all snapshot file systems.
4. Delete all snapshot file systems to recover space in the snapshot reserve.
CIFS clients receive an error when creating a snapshot
CIFS clients might see the following error when attempting to create a snapshot:
Make sure you are connected to the network and try again
This error is generated when the snapshot creation takes longer than the CIFS timeout, causing the
CIFS client to determine that the server has failed or the network is disconnected. To avoid this
situation, do not take snapshots during periods of high CIFS activity.
Cannot create 32 snapshots on an MSA system
MSA systems or arrays support a maximum of 32 snapshots per file system. If snapshot creation
is failing before you reach 32 snapshots, check the following:
• Verify the version of the MSA firmware.
• If the cluster has been rebuilt, use the MSA GUI or CLI to check for old snapshots that were
not deleted before the cluster was rebuilt. The CLI command is show snapshots.
• Verify the virtual disk and LUN layout.
19 Using data tiering
A data tier is a logical grouping of file system segments. After creating tiers containing the segments
in the file system, you can use the data tiering migration process to move files from the segments
in one tier to the segments in another tier.
For example, you could create a primary data tier for SAS storage and another tier for SATA
storage. You could then migrate specific data from the SAS tier to the lower-cost SATA tier. Other
configurations might be based on the type of file being stored, such as storing all streaming files
in a tier or moving all files over a certain size to a specific tier.
You can create any number of data tiers. A tier cannot be on tape or on a location external to the
StoreAll file system, such as an NFS share.
StoreAll data tiering is transparent to users and applications and is compatible with StoreAll
software file system snapshots and other StoreAll data services.
IMPORTANT: Migration is a storage- and file-system-intensive process that, in some
circumstances, can take days to complete. Run migration tasks at times when clients are not
generating significant load. Migration is not suitable for environments that have no quiet
periods in which to run migration tasks.
IMPORTANT: Data tiering has a cool-down period of approximately 10 minutes. If a file was
last accessed during the cool-down period, the file will not be moved.
Data tiering policy
A tiering policy specifies migration rules for the file system. One tiering policy can be defined per
file system and the policy must have at least one rule. Rules in the policy can migrate files between
any two tiers in the file system. For example, rule1 could move files between Tier1 and Tier2, rule2
could migrate files from Tier2 to Tier1, and rule3 could migrate files between Tier1 and Tier3.
A file is migrated according to the first rule that it matches. You can narrow the scope of rules by
combining directives using logical operators.
For example, you could create a policy that has three simple rules:
• Migrate all files that have not been modified for 30 minutes from Tier1 to Tier2. (This rule is
not valid for production, but is a good rule for testing.)
• Migrate all files larger than 5 MB from Tier1 to Tier2.
• Migrate all mpeg4 files from Tier1 to Tier2.
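Because a file is migrated according to the first rule it matches, the policy behaves like an ordered rule table. The following sketch models that first-match evaluation; the rule representation is hypothetical, not the StoreAll rule syntax:

```python
import fnmatch
import time

def first_match(rules, f):
    """Return the (source, destination) tiers of the first rule the
    file matches, mirroring the first-match policy described above."""
    for rule in rules:
        if rule["test"](f):
            return rule["from"], rule["to"]
    return None

now = time.time()
rules = [
    {"from": "Tier1", "to": "Tier2",        # unmodified for 30 minutes
     "test": lambda f: now - f["mtime"] > 30 * 60},
    {"from": "Tier1", "to": "Tier2",        # larger than 5 MB
     "test": lambda f: f["size"] > 5 * 1024 * 1024},
    {"from": "Tier1", "to": "Tier2",        # mpeg4 files
     "test": lambda f: fnmatch.fnmatch(f["name"], "*mpeg4")},
]

f = {"name": "clip.mpeg4", "size": 1024, "mtime": now}  # recent, small
print(first_match(rules, f))   # ('Tier1', 'Tier2') via the mpeg4 rule
```

A file matching none of the rules is simply not migrated, which is why narrowing rules with logical operators matters when rules overlap.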
Changing the tiering configuration
The following restrictions apply when changing the configuration:
• You cannot modify the tiering configuration for a file system while an active migration task is
running.
• You cannot move segments between tiers, assign them to new tiers, or unassign them from
tiers while an active migration task is running or while any rules exist that apply to the segments.
Creating and managing data tiers
You can use the Data Tiering Wizard to create and manage data tiers. Select the file system on
the Management Console, and then select Active Tasks > Data Tiering in the lower Navigator. On
the Task Summary panel, click Data Tiering to open the wizard.
For a new tier, on the Manage Tier dialog box, choose Create New Tier, enter a name for the tier,
and select one or more segments to be included in the tier. To modify an existing tier, choose Use
Existing Tier, select the tier, and make any changes to the segments included in the tier.
Segments not currently included in a tier are specified as Unassigned. If you select a segment that
is already mapped to a tier, the segment will be unassigned from that tier and reassigned to the
tier you specified. If you remove a segment from a tier, that segment becomes unassigned.
You can work on only one tier at a time. However, when you click Next, you will be asked if you
want to manage more tiers. If you answer Yes, the Manage Tier dialog box will be refreshed and
you can work on another tier.
All new files are written to the primary tier. On the Primary Tier dialog box, select the tier that
should receive these files. You can also select cluster servers and any StoreAll clients whose I/O
operations should be redirected to the primary tier.
The tiering policy consists of rules that specify the data to be migrated from one tier to another.
The parameters and directives used in the migration rules include actions based on file access
patterns (such as access and modification times), file size, and file type. Rules can be constrained
to operate on files owned by specific users and groups and to specific paths. Logical operators
can be used to combine directives.
The Tiering Policy dialog box displays the existing tiering policy for the file system.
To add a new tiering policy, click New. On the New Data Tiering Policy dialog box, select the
source and destination tiers. Initially RuleSet1 is empty. Select a rule name, and the other fields
will appear according to the rule you selected.
NOTE: LDAP and AD users cannot be selected from the menu under RuleSet. If you want to
include users in a ruleset, you can select only local users.
Click + to specify the and/or operators and another rule. Click New to open another ruleset. The
following example shows two new rulesets.
To delete a ruleset, check the box in the rule set and click Delete.
The Tiering Schedule dialog box lists all executed and running migration tasks. Click New to add
a new schedule, click Edit to reschedule the selected task, or click Delete to delete the selected
schedules.
Use the Enabled and Disabled buttons to enable or disable the selected schedule. When a schedule
is enabled, it is put in a runnable state. When a schedule is disabled, it is put in a paused state.
To run a migration task now, select the task and click Run Now.
When you click New to create a new schedule, the default frequency for migration tasks is
displayed. For an existing schedule, the current frequency is displayed. To change the frequency,
click Modify.
On the Data Tiering Schedule Wizard dialog box, select a time to run the migration task.
Viewing tier assignments and managing segments
On the Management Console, select Filesystems from the Navigator and select a file system in the
Filesystems panel. In the lower Navigator, select Segments. The Segments panel displays the
segments in the file system and specifies whether they are assigned to a tier.
You can assign, reassign, or unassign segments from tiers using the Data Tiering Wizard. The
Management Console also provides additional options to perform these tasks.
Assign or reassign a segment: On the Segments panel, select the segments you are assigning and
click Assign to Tier. On the Assign to Tier dialog box, specify whether you are assigning the
segment to an existing tier or a new tier and specify the tier.
When you click OK, the segment is assigned to the tier and the information on the Segments panel
is updated.
Unassign a segment from a tier: Select the file system from the Filesystems panel and expand
Segments in the lower Navigator to list the tiers in the file system. Select the tier containing the
segment. On the Tier Segments panel, select the segment and click Unassign.
Viewing data tiering rules
On the Management Console, select Filesystems from the Navigator and then select a file system
in the Filesystems panel. In the lower Navigator, select Active Tasks > Data Tiering > Rules.
The Data Tiering Rules panel lists the existing rules for the file system. You can also create a new
rule from this panel; however, it is simpler to use the Data Tiering Wizard to create rules. To create
a rule from the Data Tiering Rules panel, click Create. On the Create Data Tiering Rule dialog box,
select the source and destination tier and then define a rule. The rule can move files between any
two tiers.
When you click OK, the rule is checked for correct syntax. If the syntax is correct, the rule is saved
and appears on the Data Tiering Rules panel. The following example shows the three rules created
for the example.
You can delete rules if necessary. Select the rule on the Data Tiering Rules panel and click Delete.
Additional rule examples
The following rule migrates all files from Tier2 to Tier1:
name="*"
The following rule migrates all files in the subtree beneath the path. The path is relative to the
mountpoint of the file system.
path=testdata2
The next example migrates all mpeg4 files in the subtree. A logical “and” operator combines the
rules:
path=testdata4 and name="*mpeg4"
The next example narrows the scope of the rule to files owned by users in a specific group. Note
the use of parentheses.
gname=users and (path=testdata4 and name="*mpeg4")
For more examples and detailed information about creating rules, see “Writing tiering rules”
(page 275).
Running a migration task
You can use the Data Tiering Wizard to schedule and run migration tasks. You can also start or
stop migration tasks from the Data Tiering Task Summary panel. Only one migration task can run
on a file system at any time. The task is not restarted on failure, and cannot be paused and later
resumed. However, a migration task can be started when a server is in the InFailover state.
To start a migration task, select the file system from the Filesystems panel and then select Data
Tiering in the lower Navigator. Click New on the Task Summary panel. The counters on the panel
are updated periodically while the task is running.
If necessary, click Stop to stop the data tiering task. There is no pause/resume function. When the
task is complete, it appears on the Management Console under Inactive Tasks for the file system.
You can check the exit status there.
Click Details to see summary information about the task.
Configuring tiers and migrating data using the CLI
Use the ibrix_tier command to manage tier assignments and to list information about tiers.
Use the ibrix_migrator command to create or delete rules defining migration policies, to start
or stop migration tasks, and to list information about rules and migrator tasks.
Assigning segments to tiers
First determine the segments in the file system and then assign them to tiers. Use the following
command to list the segments:
ibrix_fs -f FSNAME -i
For example (the output is truncated):
[root@ibrix01a ~]# ibrix_fs -f ifs1 -i
.
.
SEGMENT OWNER    LV_NAME STATE BLOCK_SIZE CAPACITY(GB)
------- -------- ------- ----- ---------- ------------
1       ibrix01b ilv1    OK    4,096      3,811.11
2       ibrix01a ilv2    OK    4,096      3,035.67
3       ibrix01b ilv3    OK    4,096      3,811.11
4       ibrix01a ilv4    OK    4,096      3,035.67
. . .
Use the following command to assign segments to a tier. The tier is created if it does not already
exist.
ibrix_tier -a -f FSNAME -t TIERNAME -S SEGLIST
For example, the following command creates Tier 1 and assigns segments 1 and 2 to it:
[root@ibrix01a ~]# ibrix_tier -a -f ifs1 -t Tier1 -S 1,2
Assigned segment: 1 (ilv1) to tier Tier1
Assigned segment: 2 (ilv2) to tier Tier1
Command succeeded!
NOTE: Be sure to spell the name of the tier correctly when you add segments to an existing tier.
If you spell the name incorrectly, a new tier is created with the incorrect tier name, and no error
is reported.
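Because a misspelled tier name silently creates a new tier, one safeguard is to compare the intended name against the tiers you already know exist (for example, from ibrix_tier -l output) before running the command. A minimal sketch of such a check (a hypothetical helper, not part of the StoreAll CLI):

```python
def check_tier_name(name, existing_tiers):
    """Warn before a typo silently creates a brand-new tier."""
    if name in existing_tiers:
        return "ok: adding segments to existing tier " + name
    # Case-only mismatches are the most likely typo.
    near = [t for t in existing_tiers if t.lower() == name.lower()]
    if near:
        return ("warning: %r not found, but %r exists -- possible typo?"
                % (name, near[0]))
    return "note: %r does not exist and will be created" % name

print(check_tier_name("Tier1", ["Tier1", "Tier2"]))
print(check_tier_name("tier1", ["Tier1", "Tier2"]))
```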
Displaying information about tiers
Use the following command to list the tiers in a file system. The -t option displays information for
a specific tier.
ibrix_tier -l -f FSNAME [-t TIERNAME]
For example:
[root@ibrix01a ~]# ibrix_tier -i -f ifs1
Tier: Tier1
===========
FS Name Segment Number Tier
------- -------------- -----
ifs1    1              Tier1
ifs1    2              Tier1

Tier: Tier2
===========
FS Name Segment Number Tier
------- -------------- -----
ifs1    3              Tier2
ifs1    4              Tier2
Defining the primary tier
All new files are written to the primary tier, which is typically the tier built on the fastest storage.
Use the following command to define the primary tier:
ibrix_fs_tune -f FILESYSTEM -h SERVERS -t TIERNAME
The following example specifies Tier1 as the primary tier:
ibrix_fs_tune -f ifs1 -h ibrix1a,ibrix1b -t Tier1
This policy takes precedence over any other file allocation policies defined for the file system.
NOTE: This example assumes users access the files over CIFS, NFS, FTP, or HTTP. If StoreAll
clients are used, the allocation policy must be applied to the clients. (Use -h to specify the clients.)
Creating a tiering policy
To create a rule for migrating data from a source tier to a destination tier, use the following
command:
ibrix_migrator -A -f FSNAME -r RULE -S SOURCE_TIER -D DESTINATION_TIER
The following rule migrates all files that have not been modified for 30 minutes from Tier1 to Tier2:
[root@ibrix01a ~]# ibrix_migrator -A -f ifs1 -r 'mtime older than 30 minutes' -S Tier1 -D Tier2
Rule: mtime<now - 0-0-0-0:30:0
Command succeeded!
Listing tiering rules
To list all of the rules in the tiering policy, use the following command:
ibrix_migrator -l [-f FSNAME] -r
The output lists the file system name, the rule ID (IDs are assigned in the order in which rules are
added to the configuration database), the rule definition, and the source and destination tiers. For
example:
[root@ibrix01a ~]# ibrix_migrator -l -f ifs1 -r
HsmRules
========
FS Name Id Rule                        Source Tier Destination Tier
------- -- --------------------------- ----------- ----------------
ifs1    9  mtime older than 30 minutes Tier1       Tier2
ifs1    10 name = "*.mpeg4"            Tier1       Tier2
ifs1    11 size > 4M                   Tier1       Tier2
Running a migration task
To start a migration task, use the following command:
ibrix_migrator -s -f FSNAME
For example:
[root@ibrix01a ~]# ibrix_migrator -s -f ifs1
Submitted Migrator operation to background. ID of submitted task: Migrator_163
Command succeeded!
NOTE: The ibrix_migrator command cannot be run at the same time as ibrix_rebalance.
To list the active migration task for a file system, use the ibrix_migrator -i option. For example:
[root@ibrix01a ~]# ibrix_migrator -i -f ifs1
Operation: Migrator_163
=======================
Task Summary
============
Task Id                  : Migrator_163
Type                     : Migrator
File System              : ifs1
Submitted From           : root from Local Host
Run State                : STARTING
Active?                  : Yes
EXIT STATUS              :
Started At               : Jan 17, 2012 10:32:55
Coordinator Server       : ibrix01b
Errors/Warnings          :
Dentries scanned         : 0
Number of Inodes moved   : 0
Number of Inodes skipped : 0
Avg size (kb)            : 0
Avg Mb Per Sec           : 0
Number of errors         : 0
To view summary information after the task has completed, run the ibrix_task -i command and
include the -n option, which specifies the task ID. (The task ID appears in the output from
ibrix_migrator -i.)
[root@ibrix01a testdata1]# ibrix_task -i -n Migrator_163
Operation: Migrator_163
=======================
Task Summary
============
Task Id                  : Migrator_163
Type                     : Migrator
File System              : ifs1
Submitted From           : root from Local Host
Run State                : STOPPED
Active?                  : No
EXIT STATUS              : OK
Started At               : Jan 17, 2012 10:32:55
Coordinator Server       : ibrix01b
Errors/Warnings          :
Dentries scanned         : 1025
Number of Inodes moved   : 1002
Number of Inodes skipped : 1
Avg size (kb)            : 525
Avg Mb Per Sec           : 16
Number of errors         : 0
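The key/value layout of these reports is easy to post-process, for example to confirm that a task exited cleanly before acting on its results. A small parser sketch (field names are taken from the sample output above; the report text here is an abbreviated stand-in):

```python
def parse_task_summary(text):
    """Parse 'Key : Value' lines from a migrator task report."""
    fields = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith(("=", "Operation")):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

report = """\
Run State                : STOPPED
EXIT STATUS              : OK
Number of Inodes moved   : 1002
Number of Inodes skipped : 1
Number of errors         : 0"""

summary = parse_task_summary(report)
moved = int(summary["Number of Inodes moved"])
print(summary["EXIT STATUS"], moved)   # OK 1002
```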
Stopping a migration task
To stop a migration task, use the following command:
ibrix_migrator -k -t TASKID [-F]
Changing the tiering configuration with the CLI
The following restrictions apply when changing the configuration:
• You cannot modify the tiering configuration for a file system while an active migration task is
  running.
• You cannot move segments between tiers, assign them to new tiers, or unassign them from
  tiers while an active migration task is running or while any rules exist that apply to the segments.
Moving a segment to another tier
Use the following command to assign a segment to another tier:
ibrix_tier -a -f FSNAME -t TIERNAME -S SEGLIST
Removing a segment from a tier
The following command removes segments from a tier. If you do not specify a segment list, all
segments in the file system are unassigned.
ibrix_tier -u -f FSNAME [-S SEGLIST]
The following example removes segments 3 and 4 from their current tier assignment:
[root@ibrix01a ~]# ibrix_tier -u -f ifs1 -S 3,4
Deleting a tier
Before deleting a tier, take these steps:
• Delete all policy rules defined for the tier.
• Allow any active tiering jobs to complete.
To unassign all segments and delete the tier, use the following command:
ibrix_tier -d -f FSNAME -t TIERNAME
Deleting a tiering rule
Before deleting a rule, run the ibrix_migrator -l [-f FSNAME] -r command and note
the ID assigned to the rule. Then use the following command to delete the rule:
ibrix_migrator -d -f FSNAME -r RULE_ID
The -r option specifies the rule ID. For example:
[root@ibrix01a ~]# ibrix_migrator -d -f ifs2 -r 2
Writing tiering rules
A tiering policy consists of one or more rules that specify how data is migrated from one tier to
another. You can write rules using the Management Console, or you can write them directly to the
configuration database using the ibrix_migrator -A command.
Rule attributes
Each rule identifies file attributes to be matched. It also specifies the source tier to scan and the
destination tier where files that meet the rule’s criteria will be moved and stored.
Note the following:
• Tiering rules are based on individual file attributes.
• All rules are executed when the tiering policy is applied during execution of the
  ibrix_migrator command.
• It is important that different rules do not target the same files, especially if different destination
  tiers are specified. If tiering rules are ambiguous, the final destination for a file is not
  predictable. See “Ambiguous rules” (page 277) for more information.
The following are examples of attributes that can be specified in rules. All attributes are listed in
“Rule keywords” (page 276). You can use AND and OR operators to create combinations.
Access time
• File was last accessed x or more days ago
• File was accessed in the last y days
Modification time
• File was last modified x or more days ago
File size—greater than n K
File name or File type—jpg, wav, exe (include or exclude)
File ownership—owned by user(s) (include or exclude)
Use of the tiering assignments or policy on any file system is optional. Tiering is not assigned by
default; there is no “default” tier.
Operators and date/time qualifiers
Valid rules operators are <, <=, =, !=, >, >=, and boolean and and or.
Use the following qualifiers for fixed times and dates:
•
Time: Enter as three pairs of colon-separated integers using a 24-hour clock. The format is
hh:mm:ss (for example, 15:30:00).
•
Date: Enter as yyyy-mm-dd [hh:mm:ss], where time of day is optional (for example,
2008-06-04 or 2008-06-04 15:30:00). Note the space separating the date and time.
Writing tiering rules 275
When specifying an absolute date and/or time, the rule must use a compare type operator (< |
<= | = | != | > | >=). For example:
ibrix_migrator -A -f ifs2 -r "atime > '2010-09-23' " -S TIER1 -D TIER2
Use the following qualifiers for relative times and dates:
• Relative time: Enter in rules as year or years, month or months, week or weeks, day or
  days, hour or hours.
• Relative date: Use older than or younger than. The rules engine uses the time the
  ibrix_migrator command starts execution as the start time for the rule. It then computes
  the required time for the rule based on this start time. For example, ctime older than 4
  weeks refers to the time period more than 4 weeks before the start time.
The following example uses a relative date:
ibrix_migrator -A -f ifs2 -r "atime older than 2 days" -S TIER1 -D TIER2
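Because the rules engine anchors relative expressions to the moment ibrix_migrator starts, a rule such as atime older than 2 days simply compares each file's timestamp against a single cutoff computed once at start time. A minimal sketch of that evaluation (illustrative only, not StoreAll code; calendar months and years are omitted for brevity since they are not fixed-length durations):

```python
from datetime import datetime, timedelta

# Map rule units to timedelta keyword arguments.
UNITS = {"hour": "hours", "day": "days", "week": "weeks"}

def cutoff(start, amount, unit):
    """Cutoff time for '<attr> older than <amount> <unit>'."""
    return start - timedelta(**{UNITS[unit.rstrip("s")]: amount})

def matches_older_than(file_time, start, amount, unit):
    # "older than" matches when the file attribute predates the
    # cutoff computed once from the task's start time.
    return file_time < cutoff(start, amount, unit)

start = datetime(2012, 1, 17, 10, 32, 55)   # task start time
old_file = datetime(2011, 12, 1)            # timestamp well in the past
new_file = datetime(2012, 1, 10)            # recent timestamp
print(matches_older_than(old_file, start, 4, "weeks"))   # True
print(matches_older_than(new_file, start, 4, "weeks"))   # False
```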
Rule keywords
The following keywords can be used in rules.
Keyword     Description
atime       Access time, used in a rule as a fixed or relative time.
ctime       Change time, used in a rule as a fixed or relative time.
mtime       Modification time, used in a rule as a fixed or relative time.
gid         An integer corresponding to a group ID.
gname       A string corresponding to a group name. Enclose the name string in double quotes.
uid         An integer corresponding to a user ID.
uname       A string corresponding to a user name, where the user is the owner of the file.
            Enclose the name string in double quotes.
type        File system entity the rule operates on. Currently, only the file entity is supported.
size        In size-based rules, the threshold value for determining migration. Value is an integer
            specified in K (KB), M (MB), G (GB), or T (TB). Do not separate the value from its unit
            (for example, 24K).
name        Regular expression. A typical use of a regular expression is to match file names. Enclose
            a regular expression in double quotes. The * wildcard is valid (for example,
            name = "*.mpg"). A name cannot contain a / character. You cannot specify a path;
            only a filename is allowed.
path        Path name that allows these wildcards: *, ?, /. For example, if the mountpoint for the
            file system is /mnt, path=ibfs1/mydir/* matches the entire directory subtree under
            /mnt/ibfs1/mydir. (A path cannot start with a /.)
strict_path Path name that rigidly conforms to UNIX shell file name expansion behavior. For example,
            strict_path=/mnt/ibfs1/mydir/* matches only the files that are explicitly in the
            mydir directory, but does not match any files in subdirectories of mydir.
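As the table notes, path and strict_path differ in whether a trailing * spans subdirectories. The distinction can be sketched with Python's fnmatch, matching component by component so that * never crosses a / (this illustrates the described semantics only; it is not the StoreAll matcher, and relative paths are used for both forms here):

```python
from fnmatch import fnmatch

def _components_match(path_parts, pat_parts):
    # Compare component-by-component so '*' never crosses a '/'.
    return (len(path_parts) == len(pat_parts) and
            all(fnmatch(p, q) for p, q in zip(path_parts, pat_parts)))

def strict_path_match(relpath, pattern):
    # strict_path: UNIX-shell-style expansion; files in
    # subdirectories of the matched directory do not match.
    return _components_match(relpath.split("/"), pattern.split("/"))

def path_match(relpath, pattern):
    # path: a match on any leading subtree selects everything
    # beneath it, so 'ibfs1/mydir/*' covers the whole subtree.
    parts = relpath.split("/")
    pat = pattern.split("/")
    return any(_components_match(parts[:i], pat)
               for i in range(1, len(parts) + 1))

print(path_match("ibfs1/mydir/sub/a.mpg", "ibfs1/mydir/*"))         # True
print(strict_path_match("ibfs1/mydir/sub/a.mpg", "ibfs1/mydir/*"))  # False
```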
Migration rule examples
When you write a rule, identify the following components:
• File system (-f)
• Source tier (-S)
• Destination tier (-D)
Use the following command to write a rule. The rule portion of the command must be enclosed in
single quotes.
ibrix_migrator -A -f FSNAME -r 'RULE' -S SOURCE_TIER -D DEST_TIER
Examples:
The rule in the following example is based on the file’s last modification time, using a relative time
period. All files whose last modification date is more than one month in the past are moved.
# ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month' -S T1 -D T2
In the next example, the rule is modified to limit the files being migrated to two types of graphic
files. The or expression is enclosed in parentheses, and the * wildcard is used to match filename
patterns.
# ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month and
( name = "*.jpg" or name = "*.gif" )' -S T1 -D T2
In the next example, three conditions are imposed on the migration. Note that there is no space
between the integer and unit that define the size threshold (10M):
# ibrix_migrator -A -f ifs2 -r 'ctime older than 1 month and type = file
and size >= 10M' -S T1 -D T2
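The size keyword's integer-plus-unit literals (for example, 24K or 10M) are simple to parse. A sketch, assuming the usual binary multipliers for K, M, G, and T (the helper is illustrative, not part of the product):

```python
def parse_size(literal):
    """Turn a rule size literal like '10M' or '24K' into bytes."""
    units = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}
    number, unit = literal[:-1], literal[-1].upper()
    if unit not in units or not number.isdigit():
        # Also rejects '10 M': the value and unit must not be separated.
        raise ValueError("bad size literal: %r" % literal)
    return int(number) * units[unit]

print(parse_size("10M"))   # 10485760
print(parse_size("24K"))   # 24576
```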
The following example uses the path keyword. It moves files greater than or equal to 5M that are
under the directory /ifs2/tiering_test from TIER1 to TIER2:
ibrix_migrator -A -f ifs2 -r "path = tiering_test and size >= 5M" -S TIER1 -D TIER2
Rules can be group- or user-based as well as time- or data-based. In the following example, files
associated with two users are migrated to T2 with no consideration of time. The names are quoted
strings.
# ibrix_migrator -A -f ifs2 -r 'type = file and ( uname = "ibrixuser"
or uname = "nobody" )' -S T1 -D T2
Conditions can be combined with and and or to create very precise tiering rules, as shown in the
following example.
# ibrix_migrator -A -f ifs2 -r ' (ctime older than 3 weeks and ctime younger
than 4 weeks) and type = file and ( name = "*.jpg" or name = "*.gif" )
and (size >= 10M and size <= 25M)' -S T1 -D T2
Ambiguous rules
It is possible to write a set of ambiguous rules, where different rules could be used to move a file
to conflicting destinations. For example, if a file can be matched by two separate rules, there is
no guarantee which rule will be applied in a tiering job.
Ambiguous rules can cause a file to be moved to a specific tier and then potentially moved back.
Examples of two such situations follow.
Example 1:
In the following example, if a .jpg file older than one month exists in tier 1, then the first rule
moves it to tier 2. However, once it is in tier 2, it is matched by the second rule, which then moves
the file back to tier 1.
# ibrix_migrator -A -f ifs2 -r ' mtime older than 1 month ' -S T1 -D T2
# ibrix_migrator -A -f ifs2 -r ' name = "*.jpg" ' -S T2 -D T1
There is no guarantee as to the order in which the two rules will be executed; therefore, the final
destination is ambiguous because multiple rules can apply to the same file.
Example 2:
Rules can cause data movement in both directions, which can lead to issues. In the following
example, the rules specify that all .doc files in tier 1 be moved to tier 2 and all .jpg files in
tier 2 be moved to tier 1. However, this might not succeed, depending on how full the tiers are.
# ibrix_migrator -A -f ifs2 -r ' name = "*.doc" ' -S T1 -D T2
# ibrix_migrator -A -f ifs2 -r ' name = "*.jpg" ' -S T2 -D T1
For example, if tier 1 is filled with .doc files to 70% capacity and tier 2 is filled with .jpg files to
80% capacity, then tiering might terminate before it is able to fully "swap" the contents of tier 1
and tier 2. The files are processed in no particular order; therefore, it is possible that more .doc
files will be encountered at the beginning of the job, causing space on tier 2 to be consumed faster
than on tier 1. Once a destination tier is full, no further movement in that direction is
possible.
The rules in these two examples are ambiguous because they give rise to possible conflicting
file movement. It is the user’s responsibility to write unambiguous rules for the data tiering policy
for their file systems.
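A simple pre-flight check can catch the pattern shown in both examples: two rules that move files in opposite directions between the same pair of tiers. A sketch of such a check (hypothetical; the StoreAll software itself only syntax-checks rules when they are saved):

```python
def opposing_pairs(rules):
    """Flag rule pairs whose source and destination tiers are reversed.

    rules: list of (rule_text, source_tier, destination_tier) tuples.
    """
    flagged = []
    for i, (ra, sa, da) in enumerate(rules):
        for rb, sb, db in rules[i + 1:]:
            if (sa, da) == (db, sb):   # opposite direction, same tiers
                flagged.append((ra, rb))
    return flagged

rules = [("mtime older than 1 month", "T1", "T2"),
         ('name = "*.jpg"', "T2", "T1")]
print(opposing_pairs(rules))
# [('mtime older than 1 month', 'name = "*.jpg"')]
```

A non-empty result does not prove the rules will conflict on any given file, but it flags the rule pairs worth reviewing before a tiering job runs.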
20 Using file allocation
This chapter describes how to configure and manage file allocation.
Overview
StoreAll software allocates new files and directories to segments according to the allocation policy
and segment preferences that are in effect for a client. An allocation policy is an algorithm that
determines the segments that are selected when clients write to a file system.
File allocation policies
File allocation policies are set per file system on each file serving node and on the StoreAll client.
The policies define the following:
• Preferred segments. The segments where a file serving node or the StoreAll client creates all
  new files and directories.
• Allocation policy. The policy that a file serving node or the StoreAll client uses to choose
  segments from its pool of preferred segments to create new files and directories.
The segment preferences and allocation policy are set locally for the StoreAll client. For NFS, CIFS,
HTTP, and FTP clients (collectively referred to as NAS clients), the allocation policy and segment
preferences must be set on the file serving nodes from which the NAS clients access shares.
Segment preferences and allocation policies can be set and changed at any time, including when
the target file system is mounted and in use.
IMPORTANT: It is possible to set separate allocation policies for files and directories. However,
this feature is deprecated and should not be used unless you are directed to do so by HP support.
NOTE: StoreAll clients access segments directly through the owning file serving node and
do not honor the file allocation policy set on file serving nodes.
IMPORTANT: Changing segment preferences and allocation policy will alter file system storage
behavior.
The following tables list standard and deprecated preference settings and allocation policies.
Standard segment preferences and allocation policies

ALL
Prefer all of the segments available in the file system for new files and directories. This is the
default segment preference. It is suitable for most use cases.

LOCAL
Prefer the file serving node’s local segments for new files and directories. No writes are routed
between the file serving nodes in the cluster. This preference is beneficial for performance in some
configurations and for some workloads, but can cause some segments to be overutilized.

RANDOM
Allocate files to a randomly chosen segment among preferred segments. This is the default
allocation policy. It generally spreads new files and directories evenly (by number of files, not by
capacity) across all of the preferred segments; however, that is not guaranteed.

ROUNDROBIN
Allocate files to preferred segments in segment order, returning to the first segment (or the
designated starting segment) when a file or directory has been allocated to the last segment. This
policy guarantees that new files and folders are spread evenly across the preferred segments (by
number of files, not by capacity).
Deprecated segment preferences and allocation policies

IMPORTANT: HP recommends that you do not use these options. They are currently supported
but will be removed in a future release.

AUTOMATIC
Lets the StoreAll software select the allocation policy. Should be used only on the advice of HP
support.

DIRECTORY
Allocates files to the segment where its parent directory is located. Should be used only on the
advice of HP support.

STICKY
Allocates files to one segment until the segment’s storage limit is reached, and then moves to the
next segment as determined by the AUTOMATIC file allocation policy. Should be used only on
the advice of HP support.

HOST_ROUNDROBIN_NB
For clusters with more than 16 file serving nodes, takes a subset of the servers to be used for file
creation and rotates this subset on a regular, periodic basis. Should be used only on the advice
of HP support.

NONE
Sets directory allocation policy only. Causes the directory allocation policy to revert to its default,
which is the policy set for file allocation. Use NONE only to set file and directory allocation to the
same policy.
How file allocation settings are evaluated
By default, ALL segments are preferred and file systems use the RANDOM allocation policy. These
defaults are adequate for most StoreAll environments; but in some cases, it may be necessary to
change the defaults to optimize file storage for your system.
A StoreAll client or StoreAll file serving node (referred to as “the host”) uses the following precedence
rules to evaluate the file allocation settings that are in effect:
• The host uses the default allocation policies and segment preferences: The RANDOM policy
  is applied, and a segment is chosen from among ALL the available segments.
• The host uses a non-default allocation policy (such as ROUNDROBIN) and the default segment
  preference: Only the file or directory allocation policy is applied, and a segment is chosen
  from among ALL available segments.
• The host uses a non-default segment preference and a non-default allocation policy (such as
  LOCAL/ROUNDROBIN): A segment is chosen according to the following rules:
  ◦ From the pool of preferred segments, select a segment according to the allocation policy
    set for the host, and store the file in that segment if there is room. If all segments in the
    pool are full, proceed to the next rule.
  ◦ Use the AUTOMATIC allocation policy to choose a segment with enough storage room
    from among the available segments, and store the file.
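The precedence rules above amount to: filter the preferred pool down to segments with room, pick one using the configured policy, and fall back to any segment with space when the pool is full. A simplified sketch of that selection logic (illustrative only; the real AUTOMATIC fallback is more involved than "first segment with room"):

```python
import random

def choose_segment(preferred, all_segments, free_space, policy,
                   rr_state, needed=1):
    """Pick a segment for a new file per the precedence rules above.

    preferred: preferred segment IDs, in segment order
    free_space: dict mapping segment ID -> free capacity
    policy: "RANDOM" (the default) or "ROUNDROBIN"
    rr_state: one-element list holding the round-robin position
    """
    pool = [s for s in preferred if free_space[s] >= needed]
    if pool:
        if policy == "ROUNDROBIN":
            seg = pool[rr_state[0] % len(pool)]
            rr_state[0] += 1
            return seg
        return random.choice(pool)
    # Preferred pool is full: fall back to any segment with room
    # (standing in here for the AUTOMATIC policy).
    for s in all_segments:
        if free_space[s] >= needed:
            return s
    raise OSError("file system full")

rr = [0]
free = {1: 10, 2: 10, 3: 10}
print(choose_segment([1, 2], [1, 2, 3], free, "ROUNDROBIN", rr))  # 1
print(choose_segment([1, 2], [1, 2, 3], free, "ROUNDROBIN", rr))  # 2
```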
When file allocation settings take effect on the StoreAll client
Although file allocation settings are executed immediately on file serving nodes, for the StoreAll
client, a file allocation intention is stored in the Fusion Manager. When StoreAll software services
start on a client, the client queries the Fusion Manager for the file allocation settings that it should
use and then implements them. If the services are already running on a client, you can force the
client to query the Fusion Manager by executing ibrix_client or ibrix_lwhost --a on the
client, or by rebooting the client.
Using CLI commands for file allocation
Follow these guidelines when using CLI commands to perform any file allocation configuration
tasks:
• To perform a task for NAS clients (NFS, CIFS, FTP, HTTP), specify file serving nodes for the
  -h HOSTLIST argument.
• To perform a task for StoreAll clients, specify individual clients for -h HOSTLIST or specify
  a hostgroup for -g GROUPLIST. Hostgroups are a convenient way to configure file allocation
  settings for a set of StoreAll clients. To configure file allocation settings for all StoreAll clients,
  specify the clients hostgroup.
Setting file and directory allocation policies
You can set a nondefault file or directory allocation policy for file serving nodes and StoreAll
clients. You can also specify the first segment where the policy should be applied, but in practice
this is useful only for the ROUNDROBIN policy.
IMPORTANT: Certain allocation policies are deprecated. See “File allocation policies” (page 279)
for a list of standard allocation policies.
On the Management Console, open the Modify Filesystem Properties dialog box and select the
Host Allocation tab.
Setting file and directory allocation policies from the CLI
Allocation policy names are case sensitive and must be entered as uppercase letters (for example,
RANDOM).
Set a file allocation policy:
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -s LVNAMELIST -p POLICY [-S STARTSEGNUM]
The following example sets the ROUNDROBIN policy for files only on the file system ifs1 on file
serving node s1.hp.com, starting at segment ilv1:
ibrix_fs_tune -f ifs1 -h s1.hp.com -p ROUNDROBIN -s ilv1
Set a directory allocation policy:
Include the -R option to specify that the command is for a directory.
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -p POLICY [-S STARTSEGNUM] [-R]
The following example sets the ROUNDROBIN directory allocation policy on the file system ifs1
for file serving node s1.hp.com, starting at segment ilv1:
ibrix_fs_tune -f ifs1 -h s1.hp.com -p ROUNDROBIN -R
Setting segment preferences
There are two ways to prefer segments for file serving nodes, StoreAll clients, or hostgroups:
• Prefer a pool of segments for the hosts to use.
• Prefer a single segment for files created by a specific user or group on the clients.
Both methods can be in effect at the same time. For example, you can prefer a segment for a user
and then prefer a pool of segments for the clients on which the user will be working.
On the Management Console, open the Modify Filesystem Properties dialog box and select the
Segment Preferences tab.
Creating a pool of preferred segments from the CLI
A segment pool can consist of individually selected segments, all segments local to a file serving
node, or all segments. Clients will apply the allocation policy that is in effect for them to choose a
segment from the segment pool.
NOTE: Segments are always created in the preferred condition. If you want to have some segments
preferred and others unpreferred, first select a single segment and prefer it. This action unprefers
all other segments. You can then work with the segments one at a time, preferring and unpreferring
as required. By design, the system cannot have zero preferred segments. If only one segment is
preferred and you unprefer it, all segments become preferred.
When preferring multiple pools of segments (for example, one for StoreAll clients and one for file
serving nodes), make sure that no segment appears in both pools.
Use the following command to specify the pool by logical volume name (LVNAMELIST):
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -s LVNAMELIST
Use the following command and the LOCAL keyword to create a pool of all segments on file serving
nodes. Use the ALL keyword to restore the default segment preferences.
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -S {SEGNUMLIST|ALL|LOCAL}
Restoring the default segment preference
The default is for all file system segments to be preferred. Use the following command to reset the
file system policy to the default value on HOSTLIST:
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -p -U
Tuning allocation policy settings
To optimize system performance, you can globally change the following allocation policy settings
for a file system:
• File allocation policy.
  IMPORTANT: Certain allocation policies are deprecated. See “File allocation policies”
  (page 279) for a list of standard allocation policies.
• Starting segment number for applying changes.
• Preallocation: number of KB to preallocate for files.
• Readahead: number of KB in a file to pre-fetch.
• NFS readahead: number of KB in a file to pre-fetch on NFS systems.
NOTE: Preallocation, Readahead, and NFS readahead are set to the recommended values
during the installation process. Contact HP Support for guidance if you want to change these
values.
On the Management Console, open the Modify Filesystem Properties dialog box and select the
Allocation tab.
Restore the default file allocation policy:
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -p -U
Listing allocation policies
Use the following command to list the preferred segments (the -S option) or the allocation policy
(the -P option) for the specified hosts, hostgroups, or file system.
ibrix_fs_tune -l [-S] [-P] [-h HOSTLIST | -g GROUPLIST] [-f FSNAME]
HOSTNAME      FSNAME  POLICY  STARTSEG  DIRPOLICY  DIRSEG  SEGBITS  READAHEAD  PREALLOC  HWM      SWM
mak01.hp.com  ifs1    RANDOM  0         NONE       0       DEFAULT  DEFAULT    DEFAULT   DEFAULT  DEFAULT
21 Support and other resources
Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions
Related information
The following documents provide related information:
• HP StoreAll Storage Release Notes
• HP StoreAll Storage CLI Reference Guide
• HP StoreAll 9300/9320 Storage Administrator Guide
• HP IBRIX X9720/StoreAll 9730 Storage Administrator Guide
• HP StoreAll Storage Installation Guide
• HP StoreAll Storage Network Best Practices Guide
Related documents are available on the StoreAll Manuals page at
http://www.hp.com/support/StoreAllManuals.
HP websites
For additional information, see the following HP websites:
• http://www.hp.com
• http://www.hp.com/go/StoreAll
• http://www.hp.com/go/storage
• http://www.hp.com/support/manuals
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/go/e-updates
After registering, you will receive e-mail notification of product enhancements, new driver versions,
firmware updates, and other product resources.
22 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback
(docsfeedback@hp.com). Include the document title and part number, version number, or the URL
when submitting your feedback.
Glossary
ACE
Access control entry.
ACL
Access control list.
ADS
Active Directory Service.
ALB
Advanced load balancing.
BMC
Baseboard Management Configuration.
CIFS
Common Internet File System. The protocol used in Windows environments for shared folders.
CLI
Command-line interface. An interface comprised of various commands which are used to control
operating system responses.
CSR
Customer self repair.
DAS
Direct attach storage. A dedicated storage device that connects directly to one or more servers.
DNS
Domain Name System.
FTP
File Transfer Protocol.
GSI
Global service indicator.
HA
High availability.
HBA
Host bus adapter.
HCA
Host channel adapter.
HDD
Hard disk drive.
IAD
HP 9000 Software Administrative Daemon.
iLO
Integrated Lights-Out.
IML
Initial microcode load.
IOPS
I/Os per second.
IPMI
Intelligent Platform Management Interface.
JBOD
Just a bunch of disks.
KVM
Keyboard, video, and mouse.
LUN
Logical unit number. A LUN results from mapping a logical unit number, port ID, and LDEV ID to
a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the
number of LDEVs associated with the LUN.
MTU
Maximum Transmission Unit.
NAS
Network attached storage.
NFS
Network file system. The protocol used in most UNIX environments to share folders or mounts.
NIC
Network interface card. A device that handles communication between a device and other devices
on a network.
NTP
Network Time Protocol. A protocol that enables the storage system’s time and date to be obtained
from a network-attached server, keeping multiple hosts and storage devices synchronized.
OA
Onboard Administrator.
OFED
OpenFabrics Enterprise Distribution.
OSD
On-screen display.
OU
Active Directory Organizational Units.
RO
Read-only access.
RPC
Remote Procedure Call.
RW
Read-write access.
SAN
Storage area network. A network of storage devices available to one or more servers.
SAS
Serial Attached SCSI.
288 Glossary
SELinux
Security-Enhanced Linux.
SFU
Microsoft Services for UNIX.
SID
Security identifier. A unique value used to identify a user or group in Windows environments.
SMB
Server Message Block. The protocol used in Windows environments for shared folders.
SNMP
Simple Network Management Protocol.
TCP/IP
Transmission Control Protocol/Internet Protocol.
UDP
User Datagram Protocol.
UID
Unit identification.
VACM
SNMP View Access Control Model.
VC
HP Virtual Connect.
VIF
Virtual interface.
WINS
Windows Internet Name Service.
WWN
World Wide Name. A unique identifier assigned to a Fibre Channel device.
WWNN
World wide node name. A globally unique 64-bit identifier assigned to each Fibre Channel node.
WWPN
World wide port name. A unique 64-bit address used to identify each device port in a Fibre
Channel storage network.
Index
Symbols
/etc/likewise/vhostmap file, 97
A
Active Directory
configure, 65
configure from CLI, 72
Linux static user mapping, 93
synchronize with NTP server, 96
use with LDAP ID mapping, 61
Antivirus
configure, 226
enable or disable, 228
file exclusions, 230
protocol scan settings, 230
scans, start or schedule, 233
scans, status, 235
statistics, 237
unavailable policy, 229
virus definitions, 228
virus scan engine, 226
add, 227
remove, 228
audit log, 222
authentication
Active Directory, 61
configure from CLI, 72
configure from GUI, 63
Local Users, 61
Authentication Wizard, 63
automated block snapshots
create from CLI, 255
create on GUI, 251
delete snapshot scheme, 256
modify snapshot scheme, 255
snapshot scheme, 250
view snapshot scheme from CLI, 256
B
backups
snapshots, software, 248
C
case-insensitive filenames, 58
CIFS
locking behavior, 98
commands
object mode, 148
contacting HP, 286
D
data retention
audit log, 222
audit log reports, 223
autocommit period, 197
backup support, 216
data validation scans, 197
delete files, 207
enable on file system, 198
export custom metadata, 219
file administration, 204
change retention period, 206
delete files, 207
remove retention period, 207
set or remove legal hold, 206
file states, 196
hard links, 215
import metadata, 220
legal holds, 206
metadata service, 217
on-demand data validation scans, 209
remote replication, use with, 215
retention profile
modify, 202
view, 201
reports, 212
retained file, 196
create, 202
view retention information, 204
retention period
change, 206
remove, 207
retention profile, 196
save audit metadata, 220
schedule data validation scans, 208
troubleshooting, 216
validation scan errors, 211
validation scan results, 210
view retention information, 204
WORM file, 196
data tiering
assign segments, 267
configure, 261
manage from CLI, 271
migration task, 270
primary tier, 272
tiering policy, 268
tiering rules, 275
data validation
compare hash sums, 211
on-demand scans, 209
resolve scan errors, 211
schedule scans, 208
stop or pause, 210
view scan results, 210
data validation scans, 197
directory tree quotas
delete, 35
disk space information, 42
document
related documentation, 286
documentation
providing feedback on, 287
E
Export Control, enable, 20, 27
Express Query
export metadata, 219
HTTP-StoreAll REST API shares, 118
import metadata, 220
save audit metadata, 220
F
file allocation
allocation policies, 279
evaluation of allocation settings, 280
list policies, 285
segment preferences, 279
set file and directory policies, 281
set segment preferences, 282
tune policy settings, 284
file serving nodes
delete, 48
segment management, 11
unmount a file system, 25
view SMB shares, 84
file systems
32-bit compatibility mode, 25
64-bit mode, 14
allocation policy, 11
case-insensitive filenames, 58
check and repair, 48
components of, 12
create
from CLI, 21
New Filesystem Wizard, 14
options for, 14
data retention and validation, 196
delete, 47
disk space information, 42
enable or disable Export Control, 20
Export Control, enable, 20, 27
extend, 42
file limit, 22
file space reserved, 25
lost+found directory, 41
mount, 22, 25
mounting, 25
mountpoints, 22
operating principles, 10
performance, 37
quotas, 28
remote replication, 178
segments
defined, 10
rebalance, 43
snapshots, block, 249
snapshots, software, 239
troubleshooting, 49
unmount, 25
view summary information, 39
file-compatible mode
uses, 152
filenames, case-insensitive, 58
FTP, 104
authentication, 61
configuration best practices, 104
configuration profile, 109
FTP share
access, 111
add, 104
start or stop, 110
vsftpd service, 110
H
help
obtaining, 286
HP
technical support, 286
HP websites, 286
HTTP
Apache tunables, 130
authentication, 61
configuration, 114
configuration best practices, 117
configure from the CLI, 131
configure on the GUI, 118
shares, access, 133
start or stop, 133
troubleshooting, 136
WebDAV shares, 135
HTTP shares
creating, 116
share types, 114
HTTP-StoreAll REST API shares, 118
L
LDAP authentication
configuration template, 62
configure, 67
configure from CLI, 72
remote LDAP server, configure, 62
requirements, 62
LDAP ID mapping
configure, 66
configure from CLI, 73
use with Active Directory, 61
Linux static user mapping, 93
Linux StoreAll clients
disk space information, 42
Local Groups authentication, 68
configure from CLI, 74
Local Users authentication, 69
configure from CLI, 74
logical volumes
view information, 39
logs
ibrcfrworker log file, 194
lost+found directory, 41
M
mapping
SMB shares, 92
Microsoft Management Console
manage SMB shares, 88
migration, files, 270
mounting, file system, 22, 25
mountpoints
create from CLI, 24
delete, 24
view, 22, 25
N
New Filesystem Wizard, 14
NFS
case-insensitive filenames, 58
configure NFS server threads, 55
export file systems, 55
support, 55
unexport file systems, 58
O
object mode
commands, 148
data retention, 117
finding hash name, 146
finding object ID, 145
terminology, 138
tutorial, 139
uses, 138
viewing container contents, 144
viewing containers, 143
obtaining
sample client application, 116
P
physical volumes
delete, 47
view information, 38
Q
quotas
configure email notifications, 35
delete, 35
enable, 28
export to file, 33
import from file, 32
online quota check, 34
operation of, 28
quotas file format, 33
SMB shares, 98
troubleshoot, 36
user and group, 29
R
rebalancing segments, 43
stop tasks, 47
track progress, 46
view task status, 47
related documentation, 286
remote replication
configure target export, 180
continuous, 178
defined, 178
failover/failback, 193
identify host and NIC designations, 181
intercluster, 179
intracluster, 180
pause or resume task, 190, 192
register clusters, 181
remote cluster, 180
replicate a snapshot, 188
run-once, 179
start intracluster replication, 191
start replication task, 187, 190
start run-once task, 192
stop task, 190, 192
troubleshooting, 195
view tasks, 184, 192
WORM/retained files, 193
REST API shares
file-compatible mode, 152
retention, data
audit log, 222
audit log reports, 223
autocommit period, 197
backup support, 216
data validation scans, 197
enable on file system, 198
export custom metadata, 219
file administration, 204
change retention period, 206
delete files, 207
remove retention period, 207
remove WORM attribute, 207
set or remove legal hold, 206
file states, 196
hard links, 215
import metadata, 220
legal holds, 206
metadata service, 217
on-demand data validation scans, 209
remote replication, use with, 215
retention profile
modify, 202
view, 201
reports, 212
retained file, 196
create, 202
view retention information, 204
retention period
change, 206
remove, 207
retention profile, 196
save audit metadata, 220
schedule data validation scans, 208
troubleshooting, 216
validation scan errors, 211
validation scan results, 210
view retention information, 204
WORM file, 196
S
SegmentNotAvailable alert, 51
SegmentRejected alert, 52
segments
defined, 10
delete, 47
rebalance, 43
stop tasks, 47
track progress, 46
view task status, 47
SMB
Active Directory domain, configure, 93
activity statistics per node, 76
authentication, 61
configure nodes, 76
Linux permissions, 87
Linux static user mapping, 93
monitor SMB services, 77
permissions management, 100
RFC 2307 support, 93
shadow copy, 98
share administrators, 71
SMB server consolidation, 96
SMB signing, 84
start or stop SMB service, 76
troubleshooting, 102
SMB server consolidation, 96
SMB shares
add with MMC, 90
configure with GUI, 79
delete with MMC, 92
manage with CLI, 85
manage with MMC, 88
mapping shares, 92
quotas information, 98
SMB signing, 83
view share information, 84
SMB signing, 84
snapshots, block
access snapshot file systems, 258
automated, 250
create from CLI, 255
create on GUI, 251
delete snapshot scheme, 256
modify snapshot scheme, 255
view snapshot scheme from CLI, 256
clear invalid snapshot, 256
create, 256
defined, 249
delete, 257
discover LUNs, 250
list storage allocation, 250
mount, 256
register the snapshot partition, 250
set up the snapshot partition, 250
troubleshooting, 260
view information about, 257
snapshots, software
access, 242
backup, 248
defined, 239
delete, 244
on-demand snapshots, 241
reclaim file system space, 245
replicate, 188
restore files, 244
schedule, 240
snap trees
configure, 239
move files, 248
remove snapshot authorization, 247
schedule snapshots, 240
space usage, 242
view on GUI, 241
SSL certificates
add to cluster, 176
create, 174
delete, 177
export, 177
StoreAll clients
delete, 48
limit file access, 26
locally mount a file system, 26
locally unmount file system, 26
StoreAll REST API
best practices, 115
client application, 116
creating shares, 116
data retention, 117
features, 115
object mode, 138
share types, 114
uses, 115
Subscriber's Choice, HP, 286
T
technical support
HP, 286
service locator website, 286
tiering, data
assign segments, 267
configure, 261
migration task, 270, 271
primary tier, 272
tiering policy, 268
tiering rules, 275
U
unmounting, file systems, 25
V
validation scans, 197
validation, data
compare hash sums, 211
on-demand scans, 209
resolve scan errors, 211
schedule scans, 208
stop or pause, 210
view scan results, 210
volume groups
delete, 47
view information, 38
W
websites
HP Subscriber's Choice for Business, 286