Veritas Access 7.3
Administrator's Guide
Linux
7.3
September 2017
Veritas Access Administrator's Guide
Last updated: 2017-09-04
Document version: 7.3 Rev 1
Legal Notice
Copyright © 2017 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, Veritas InfoScale, and NetBackup are trademarks or registered
trademarks of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other
names may be trademarks of their respective owners.
This product may contain third party software for which Veritas is required to provide attribution
to the third party (“Third Party Programs”). Some of the Third Party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the third party legal notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED
CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH
DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. VERITAS TECHNOLOGIES LLC
SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS
DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS
SUBJECT TO CHANGE WITHOUT NOTICE.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction, release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
Veritas Technologies LLC
500 E Middlefield Road
Mountain View, CA 94043
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Worldwide (except Japan)
[email protected]
Japan
[email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update and the document version on page 2. The latest documentation
is available on the Veritas website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
[email protected]
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
Veritas Services and Operations Readiness Tools (SORT)
Veritas Services and Operations Readiness Tools (SORT) is a website that provides information
and tools to automate and simplify certain time-consuming administrative tasks. Depending
on the product, SORT helps you prepare for installations and upgrades, identify risks in your
datacenters, and improve operational efficiency. To see what services and tools SORT provides
for your product, see the data sheet:
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Contents

Section 1: Introducing Veritas Access

Chapter 1. Introducing Veritas Access
    About Veritas Access
    Accessing the Veritas Access CLI
    Navigating the Veritas Access CLI
    Getting help using the Veritas Access command-line interface
    Displaying the command history
    Using the more command

Section 2: Configuring Veritas Access

Chapter 2. Adding users or roles
    Adding Master, System Administrator, and Storage Administrator users
    About user roles and privileges
    About the naming requirements for adding new users

Chapter 3. Configuring the network
    About configuring the Veritas Access network
    About bonding Ethernet interfaces
    Bonding Ethernet interfaces
    Configuring DNS settings
    About the IP addresses for the Ethernet interfaces
    About Ethernet interfaces
    Displaying current Ethernet interfaces and states
    Configuring IP addresses
    Configuring Veritas Access to use jumbo frames
    Configuring VLAN interfaces
    Configuring NIC devices
    Swapping network interfaces
    Excluding PCI IDs from the cluster
    About configuring routing tables
    Configuring routing tables
    Changing the firewall settings

Chapter 4. Configuring authentication services
    About configuring LDAP settings
    Configuring LDAP server settings
    Administering the Veritas Access cluster's LDAP client
    Configuring the NIS-related settings
    Configuring NSS lookup order

Section 3: Managing Veritas Access storage

Chapter 5. Configuring storage
    About storage provisioning and management
    About Flexible Storage Sharing
    Displaying information for all disk devices associated with the nodes in a cluster
    About configuring storage pools
    Configuring storage pools
    About quotas for usage
    About quotas for CIFS home directories
    Workflow for configuring and managing storage using the Veritas Access CLI
    Displaying WWN information
    Initiating host discovery of LUNs
    Importing new LUNs forcefully for new or existing pools
    Increasing the storage capacity of a LUN
    About configuring disks
    Configuring disks
    Formatting or reinitializing a disk
    Removing a disk

Chapter 6. Configuring data integrity with I/O fencing
    About I/O fencing
    Configuring disk-based I/O fencing
    Replacing an existing coordinator disk
    Disabling I/O fencing
    Destroying the coordinator pool
    Using majority-based fencing

Chapter 7. Configuring iSCSI
    About iSCSI
    Configuring the iSCSI initiator
    Configuring the iSCSI initiator name
    Configuring the iSCSI devices
    Configuring discovery on iSCSI
    Configuring the iSCSI targets
    Modifying tunables for iSCSI

Section 4: Managing Veritas Access file access services

Chapter 8. Configuring your NFS server
    About using NFS server with Veritas Access
    Using the kernel-based NFS server
    Using the NFS-Ganesha server
    Switching between NFS servers
    Recommended tuning for NFS-Ganesha version 3 and version 4
    Accessing the NFS server
    Displaying and resetting NFS statistics
    Configuring Veritas Access for ID mapping for NFS version 4
    Configuring the NFS client for ID mapping for NFS version 4
    About authenticating NFS clients
    Setting up Kerberos authentication for NFS clients
    Adding and configuring Veritas Access to the Kerberos realm

Chapter 9. Using Veritas Access as a CIFS server
    About configuring Veritas Access for CIFS
    About configuring CIFS for standalone mode
    Configuring CIFS server status for standalone mode
    Changing security settings
    Changing security settings after the CIFS server is stopped
    About Active Directory (AD)
    Configuring entries for Veritas Access DNS for authenticating to Active Directory (AD)
    Joining Veritas Access to Active Directory (AD)
    Verifying that Veritas Access has joined Active Directory (AD) successfully
    About configuring CIFS for Active Directory (AD) domain mode
    Configuring CIFS for the AD domain mode
    Using multi-domain controller support in CIFS
    About leaving an AD domain
    Changing domain settings for AD domain mode
    Removing the AD interface
    Setting NTLM
    About setting trusted domains
    Specifying trusted domains that are allowed access to the CIFS server
    Allowing trusted domains access to CIFS when setting an IDMAP backend to rid
    Allowing trusted domains access to CIFS when setting an IDMAP backend to ldap
    Allowing trusted domains access to CIFS when setting an IDMAP backend to hash
    Allowing trusted domains access to CIFS when setting an IDMAP backend to ad
    About configuring Windows Active Directory as an IDMAP backend for CIFS
    Configuring the Active Directory schema with CIFS-schema extensions
    Configuring the LDAP client for authentication using the CLI
    Configuring the CIFS server with the LDAP backend
    Setting Active Directory trusted domains
    About storing account information
    Storing user and group accounts
    Reconfiguring the CIFS service
    About mapping user names for CIFS/NFS sharing
    About the mapuser commands
    Adding, removing, or displaying the mapping between CIFS and NFS users
    Automatically mapping of UNIX users from LDAP to Windows users
    About managing home directories
    Setting the home directory file systems
    Setting up home directories
    Displaying home directory usage information
    Deleting home directories and disabling creation of home directories
    About CIFS clustering modes
    About switching the clustering mode
    About migrating CIFS shares and home directories
    Migrating CIFS shares and home directories from normal to ctdb clustering mode
    Migrating CIFS shares and home directories from ctdb to normal clustering mode
    Setting the CIFS aio_fork option
    About managing local users and groups
    Creating a local CIFS user
    Configuring a local group
    Enabling CIFS data migration

Chapter 10. Configuring Veritas Access to work with Oracle Direct NFS
    About using Veritas Access with Oracle Direct NFS
    About the Oracle Direct NFS architecture
    About Oracle Direct NFS node or storage connection failures
    Configuring an Oracle Direct NFS storage pool
    Configuring an Oracle Direct NFS file system
    Configuring an Oracle Direct NFS share
    Best practices for improving Oracle database performance

Chapter 11. Configuring an FTP server
    About FTP
    Creating the FTP home directory
    Using the FTP server commands
    About FTP server options
    Customizing the FTP server options
    Administering the FTP sessions
    Uploading the FTP logs
    Administering the FTP local user accounts
    About the settings for the FTP local user accounts
    Configuring settings for the FTP local user accounts

Section 5: Managing the Veritas Access Object Store server

Chapter 12. Using Veritas Access as an Object Store server
    About the Object Store server
    Use cases for configuring the Object Store server
    Configuring the Object Store server
    About buckets and objects
    File systems used for objectstore buckets
    Multi-protocol support for NFS with S3

Section 6: Monitoring and troubleshooting

Chapter 13. Monitoring events and audit logs
    About event notifications
    About severity levels and filters
    Configuring an email group
    Configuring a syslog server
    Displaying events on the console
    Exporting events in syslog format to a given URL
    About SNMP notifications
    Configuring an SNMP management server
    Configuring events for event reporting

Section 7: Provisioning and managing Veritas Access file systems

Chapter 14. Creating and maintaining file systems
    About creating and maintaining file systems
    About scale-out file systems
    Considerations for creating a file system
    Best practices for creating file systems
    About striping file systems
    About FastResync
    About creating a tuned file system for a specific workload
    About scale-out fsck
    About managing application I/O workloads using maximum IOPS settings
    About setting retention in files

Section 8: Configuring cloud storage

Chapter 15. Configuring the cloud gateway
    About the cloud gateway
    Configuring the cloud gateway

Chapter 16. Configuring cloud as a tier
    Configuring the cloud as a tier feature for scale-out file systems
    Moving files between tiers in a scale-out file system
    About policies for scale-out file systems
    About pattern matching for data movement policies
    About schedules for running policies
    Creating and scheduling a policy for a scale-out file system
    Obtaining statistics on data usage in the cloud tier in scale-out file systems
    Workflow for moving on-premises storage to cloud storage for NFS shares

Section 9: Provisioning and managing Veritas Access shares

Chapter 17. Creating shares for applications
    About file sharing protocols
    About concurrent access
    Sharing directories using CIFS and NFS protocols
    About concurrent access with NFS and S3

Chapter 18. Creating and maintaining NFS shares
    About NFS file sharing
    Displaying file systems and snapshots that can be exported
    Exporting an NFS share
    Displaying exported directories
    About managing NFS shares using netgroups
    Unexporting a directory or deleting NFS options
    Exporting an NFS share for Kerberos authentication
    Mounting an NFS share with Kerberos security from the NFS client
    Exporting an NFS snapshot

Chapter 19. Creating and maintaining CIFS shares
    About managing CIFS shares
    Exporting a directory as a CIFS share
    Configuring a CIFS share as secondary storage for an Enterprise Vault store
    Exporting the same file system/directory as a different CIFS share
    About the CIFS export options
    Setting share properties
    Hiding system files when adding a CIFS normal share
    Displaying CIFS share properties
    Allowing specified users and groups access to the CIFS share
    Denying specified users and groups access to the CIFS share
    Exporting a CIFS snapshot
    Deleting a CIFS share
    Modifying a CIFS share
    Making a CIFS share shadow copy aware

Chapter 20. Using Veritas Access with OpenStack
    About the Veritas Access integration with OpenStack
    About the Veritas Access integration with OpenStack Cinder
    About the Veritas Access integration with OpenStack Cinder architecture
    Configuring Veritas Access with OpenStack Cinder
    About the Veritas Access integration with OpenStack Manila
    OpenStack Manila use cases
    Configuring Veritas Access with OpenStack Manila
    Creating a new share backend on the OpenStack controller node
    Creating an OpenStack Manila share type
    Creating an OpenStack Manila file share
    Creating an OpenStack Manila share snapshot

Section 10: Managing Veritas Access storage services

Chapter 21. Deduplicating data
    About data deduplication
    Best practices for using the Veritas Access deduplication feature
    Setting up deduplication
    Configuring deduplication
    Manually running deduplication
    Scheduling deduplication
    Setting deduplication parameters
    Removing deduplication
    Verifying deduplication

Chapter 22. Compressing files
    About compressing files
    About the compressed file format
    About the file compression attributes
    About the file compression block size
    Use cases for compressing files
    Best practices for using compression
    Compression tasks
    Compressing files
    Scheduling compression jobs
    Listing compressed files
    Showing the scheduled compression job
    Uncompressing files
    Modifying the scheduled compression
    Removing the specified schedule
    Stopping the schedule for a file system
    Removing the pattern-related rule for a file system
    Removing the modified age related rule for a file system

Chapter 23. Configuring SmartTier
    About Veritas Access SmartTier
    How Veritas Access uses SmartTier
    Adding tiers to a file system
    Adding or removing a column from a secondary tier of a file system
    Configuring a mirror to a tier of a file system
    Listing all of the files on the specified tier
    Displaying a list of SmartTier file systems
    About tiering policies
    About configuring the policy of each tiered file system
    Configuring the policy of each tiered file system
    Best practices for setting relocation policies
    Relocating a file or directory of a tiered file system
    Displaying the tier location of a specified file
    About configuring schedules for all tiered file systems
    Configuring schedules for tiered file systems
    Displaying the files that may be moved or pruned by running a policy
    Allowing metadata information on the file system to be written on the secondary tier
    Restricting metadata information to the primary tier only
    Removing a tier from a file system

Chapter 24. Configuring SmartIO
    About SmartIO for solid-state drives
    About configuring SmartIO
    About SmartIO read caching for applications running on Veritas Access file systems
    About SmartIO writeback caching for applications running on Veritas Access file systems

Chapter 25. Configuring replication
    About Veritas Access file-level replication
    How Veritas Access replication works
    About Veritas Access sync replication
    How Veritas Access sync replication works
    Starting Veritas Access replication
    Setting up communication between the source and the destination clusters
    Setting up the file systems to replicate
    Setting up files to exclude from a replication unit
    Scheduling the replication
    Defining what to replicate
    About the maximum number of parallel replication jobs
    Managing a replication job
    Replicating compressed data
    Displaying replication job information and status
    Synchronizing a replication job
    Behavior of the file systems on the replication destination target
    Accessing file systems configured as replication destinations
    Creating a recovery point objective (RPO) report
    Replication job failover and failback
    Process summary
    Overview of the planned failover process
    Overview of the planned failback process
    Overview of the unplanned failover process
    Overview of the unplanned failback process

Chapter 26. Using snapshots
    About snapshots
    Creating snapshots
    Displaying snapshots
    Managing disk space used by snapshots
    Bringing snapshots online or taking snapshots offline
    Restoring a snapshot
    About snapshot schedules
    Configuring snapshot schedules
    Managing automated snapshots

Chapter 27. Using instant rollbacks
    About instant rollbacks

Chapter 28. Configuring Veritas Access with the NetBackup client
    About Veritas Access as a NetBackup client
    Prerequisites for configuring the NetBackup client
    About the NetBackup Snapshot Client
    About NetBackup snapshot methods
    About NetBackup instant recovery
    Enabling or disabling the NetBackup SAN client
    Workflow for configuring Veritas Access for NetBackup
    Registering a NetBackup master server, an EMM server, or adding an optional media server
    Displaying the excluded files from backup
    Displaying the included and excluded files for backups
    Adding or deleting patterns to the list of files in backups
    Configuring or resetting the virtual IP address used by NetBackup
    Configuring the virtual name of NetBackup
    Displaying the status of NetBackup services
    Configuring backup operations using NetBackup or other third-party backup applications
    Performing a backup or restore of a Veritas Access file system over a NetBackup SAN client
    Performing a backup or restore of a snapshot
    Installing or uninstalling the NetBackup client
    Configuring Veritas Access for NetBackup cloud storage

Section 11: Reference

Appendix A. Veritas Access documentation
    Using the Veritas Access product documentation
    About accessing the online man pages

Appendix B. Veritas Access tuning
    File system mount-time memory usage

Index
Section 1: Introducing Veritas Access

■ Chapter 1. Introducing Veritas Access
Chapter 1. Introducing Veritas Access
This chapter includes the following topics:
■ About Veritas Access
■ Accessing the Veritas Access CLI
■ Navigating the Veritas Access CLI
■ Getting help using the Veritas Access command-line interface
■ Displaying the command history
■ Using the more command
About Veritas Access
Veritas Access is a software-defined scale-out network-attached storage (NAS)
solution for unstructured data that works on commodity hardware. Veritas Access
provides resiliency, multi-protocol access, and data movement to and from the
public or private cloud based on policies.
You can use Veritas Access in any of the following ways.
Table 1-1: Interfaces for using Veritas Access

GUI - Centralized dashboard with operations for managing your storage.
See the GUI and the online Help for more information.

RESTful APIs - Enables automation using scripts, which run storage administration commands against the Veritas Access cluster.
See the Veritas Access RESTful API Guide for more information.

Command-line interface (CLI or CLISH) - Single point of administration for the entire cluster.
See the manual pages for more information.
Table 1-2 describes the features of Veritas Access.

Table 1-2: Veritas Access key features

Multi-protocol access - Veritas Access includes support for the following protocols:
■ Amazon S3
  See “About the Object Store server” on page 182.
■ CIFS
  See “About configuring Veritas Access for CIFS” on page 115.
■ FTP
  See “About FTP” on page 170.
■ iSCSI target
  See “Configuring the iSCSI targets” on page 92.
■ NFS
  See “About using NFS server with Veritas Access” on page 104.
■ Oracle Direct NFS
  See “About using Veritas Access with Oracle Direct NFS” on page 162.
■ SMB 3
  See “About the CIFS export options” on page 262.
■ NFS with S3
  See “Multi-protocol support for NFS with S3” on page 189.

WORM Storage for Enterprise Vault Archiving - Veritas Access can be configured as WORM primary storage for archival by Enterprise Vault. Veritas Access 7.3 is certified as a CIFS primary WORM storage for Enterprise Vault 12.1.
For more information, see the Veritas Access Enterprise Vault Solutions Guide.

Creation of Partition Secure Notification (PSN) file for Enterprise Vault Archiving - A Partition Secure Notification (PSN) file is created at a source partition after the successful backup of the partition at the remote site.
For more information, see the Veritas Access Enterprise Vault Solutions Guide.

Managing application I/O workloads using maximum IOPS settings - The MAXIOPS limit determines the maximum number of I/Os processed per second collectively by the storage underlying the file system.
See “About managing application I/O workloads using maximum IOPS settings” on page 219.

Flexible Storage Sharing (FSS) - Enables cluster-wide network sharing of local storage.
See “About Flexible Storage Sharing” on page 62.

Scale-out file system - The following functionality is provided for a scale-out file system:
■ File system that manages a single namespace spanning over both on-premises storage as well as cloud storage, which provides better fault tolerance for large data sets.
  See “About scale-out file systems” on page 208.
■ Highly available NFS and S3 shares.
  You use scale-out file systems if you want to store a large capacity of data in a single namespace (3 PB is the maximum file system size).
  See “Using the NFS-Ganesha server” on page 105.

Cloud as a tier for a scale-out file system - Veritas Access supports adding a cloud service as a storage tier for a scale-out file system. You can move data between the tiers based on file name patterns and when the files were last accessed or modified. Use scheduled policies to move data between the tiers on a regular basis.
Veritas Access moves the data from the on-premises tier to Amazon S3, Amazon Glacier, Amazon Web Services (AWS), GovCloud (US), Azure, Google cloud, Alibaba, Veritas Access S3, IBM Cloud Object Storage, and any S3-compatible storage provider based on automated policies. You can also retrieve data archived in Amazon Glacier.
See “Configuring the cloud as a tier feature for scale-out file systems” on page 224.

SmartIO - Veritas Access supports both read and writeback caching on solid-state drives (SSDs) for applications running on Veritas Access file systems.
See “About SmartIO for solid-state drives” on page 341.

SmartTier - Veritas Access's built-in SmartTier feature can reduce the cost of storage by moving data to lower-cost storage. Veritas Access storage tiering also facilitates the moving of data between different drive architectures and on-premises.
See “About Veritas Access SmartTier” on page 322.

Snapshot - Veritas Access supports snapshots for recovering from data corruption. If files, or an entire file system, are deleted or become corrupted, you can replace them from the latest uncorrupted snapshot.
See “About snapshots” on page 375.

Deduplication - You can run post-process periodic deduplication in a file system, which eliminates duplicate data without any continuous cost.
See “About data deduplication” on page 293.

Compression - You can compress files to reduce the space used, while retaining the accessibility of the files and having the compression be transparent to applications. Compressed files look and behave almost exactly like uncompressed files: the compressed files have the same name, and can be read and written as with uncompressed files.
See “About compressing files” on page 312.

NetBackup integration - Built-in NetBackup client for backing up your file systems to a NetBackup master or media server. Once data is backed up, a storage administrator can delete unwanted data from Veritas Access to free up expensive primary storage for more data.
See “About Veritas Access as a NetBackup client” on page 392.

OpenDedup integration - Integration with OpenDedup for deduplicating your data to on-premises or cloud storage for long-term data retention.
See the Veritas Access NetBackup Solutions Guide for more information.

OpenStack plug-in - Integration with OpenStack:
■ OpenStack Cinder integration that allows OpenStack instances to use the storage hosted by Veritas Access.
  See “About the Veritas Access integration with OpenStack Cinder” on page 274.
■ OpenStack Manila integration that lets you share Veritas Access file systems with virtual machines on OpenStack Manila.
  See “About the Veritas Access integration with OpenStack Manila” on page 283.

Quotas - Support for setting file system quotas, user quotas, and hard quotas.
See “About quotas for usage” on page 68.

Replication - Periodic replication of data over IP networks.
See “About Veritas Access file-level replication” on page 346.
See the replication(1) man page for more information.
Synchronous replication of data over IP networks.
See “About Veritas Access sync replication” on page 348.
See the sync(1) man page for more information.

Support for LDAP, NIS, and AD - Veritas Access uses the Lightweight Directory Access Protocol (LDAP) for user authentication.
See “About configuring LDAP settings” on page 52.

Partition Directory - With support for partitioned directories, directory entries are redistributed into various hash directories. These hash directories are not visible in the name-space view of the user or operating system. For every new create, delete, or lookup, this feature performs a lookup for the respective hashed directory and performs the operation in that directory. This leaves the parent directory inode and its other hash directories unobstructed for access, which vastly improves file system performance.
By default this feature is not enabled. See the storage_fs(1) manual page to enable this feature.

Isolated storage pools - Enables you to create an isolated storage pool with a self-contained configuration. An isolated storage pool protects the pool from losing the associated metadata even if all the configuration disks in the main storage pool fail.
See “About configuring storage pools” on page 65.

Performance and tuning - Workload-based tuning for the following workloads:
■ Media server - Streaming media represents a new wave of rich Internet content. Recent advancements in video creation, compression, caching, streaming, and other content delivery technology have brought audio and video together to the Internet as rich media. You can use Veritas Access to store your rich media, videos, movies, audio, music, and photos.
■ Virtual machine support
■ Other workloads
See “About creating a tuned file system for a specific workload” on page 217.
Accessing the Veritas Access CLI

To access the Veritas Access CLI

1. After installation, connect to the management console using the console IP address you assigned during the installation.

2. Log on to the management console node using the following credentials:

   User name: master
   Default password: master

   You are prompted to change your password after the initial logon.

3. For subsequent logons, use the user name master with the new password that you created.

   You can add additional users at any time.

The End User License Agreement (EULA) is displayed the first time you log on to the Veritas Access CLI.
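For example, from an administration host you might open a Secure Shell session to the management console; this is only a sketch, and the IP address 10.200.114.10 is a placeholder for the console IP address that you assigned during installation:

   $ ssh master@10.200.114.10
   Password:

After you authenticate as master, the Veritas Access CLI prompt is displayed.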
Navigating the Veritas Access CLI
All of the Veritas Access CLI commands are organized in different command modes
depending on the operation you want to perform. You can get a list of the different
command modes with descriptions of all the available modes by typing a question
mark (?) at the CLI prompt.
If you are using the support account to log on to Veritas Access, you can use su
- master in the terminal of the console IP to access the Veritas Access CLI.
To navigate the Veritas Access CLI

1. After logging on to the Veritas Access CLI, type a question mark (?) to see the available command modes.

2. For example, enter the Storage> mode by typing storage.

   You can see that you are in the Storage> mode because the cluster name now displays with the command mode:

   clustername.Storage
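For example, in the following sketch of a session the cluster is named vaclust (a placeholder); after you type storage, the prompt changes to show the mode:

   vaclust> storage
   vaclust.Storage>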
Getting help using the Veritas Access command-line interface
You can enter Veritas Access commands on the system console or from any host
that can access Veritas Access through a session using Secure Shell (ssh).
Veritas Access provides the following features to help you when you enter
commands on the command line:
■ Auto-completion
  The following keys both perform auto-completion for the current command line. If the command prefix is not unique, then the bell rings and a subsequent repeat of the key displays possible completions.
  ■ [enter] - Auto-completes, syntax-checks, and then executes a command. If there is a syntax error, then the offending part of the command line is highlighted and explained.
  ■ [space] - Auto-completes, or if the command is already resolved, inserts a space.
■ Command-line help
  Type a question mark at the command line to display context-sensitive Help. This is either a list of possible command completions with summaries, or the full syntax of the current command. A subsequent repeat of this key, when a command has been resolved, displays a detailed reference.
■ Keyboard shortcut keys
  Move the cursor within the command line or delete text from the command line.
■ Command-line manual pages
  Type man and the name of the command. See the example after this list.
■ Error reporting
  The ^ (caret) indicates that a syntax error occurred in the preceding command statement. The location of the caret in the command statement indicates the location of the syntax error.
■ Escape sequences
  Substitute the command line for a previous entry.
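For example, the following sketch of a session shows both forms of help; the cluster name vaclust is a placeholder, and the example assumes that a manual page exists for the history command:

   vaclust> ?
   vaclust> man history

The question mark lists the available command modes with summaries, and man followed by a command name displays the detailed reference for that command.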
Table 1-3: Conventions used in the Veritas Access online command-line man pages

| (pipe) - Indicates that you must choose one of the elements on either side of the pipe.
[ ] (brackets) - Indicates that the element inside the brackets is optional.
{ } (braces) - Indicates that the element inside the braces is part of a group.
< > (angle brackets) - Indicates a variable for which you need to supply a value.
Table 1-4: Veritas Access command-line keyboard shortcut keys for deletions

[CTRL-C] - Delete the whole line.
[CTRL-U] - Delete up to the start of the line from the current position.
[CTRL-W] - Delete one word to the left from the current position.
[ALT-D] - Delete one word to the right from the current position.
[CTRL-D] - Delete the character to the right of the insertion point.
[CTRL-K] - Delete all the characters to the right of the insertion point.
[CTRL-T] - Swap the last two characters.
[backspace] - Delete the character to the left of the insertion point.
[Del] - Delete one character from the current position.
Table 1-5: Escape sequences

!! - Substitute the last command line.
!N - Substitute the Nth command line (you can find the Nth command by using the history command).
!-N - Substitute the command line entered N lines before (the number is relative to the command you are entering).
Note: Most of the Veritas Access commands are executed asynchronously, so
control may be returned to the command prompt before the operation is fully
completed. For critical commands, you should verify the status of the command
before proceeding. For example, after starting a CIFS resource, verify that the
service is online.
Displaying the command history
The history command displays the commands that you have executed. You can
also view commands executed by another user.
In addition to the commands that users execute with the CLISH, the history
command displays internal commands that were executed by Veritas Access.
You must be logged in to the system to view the command history.
To display command history

◆ To display the command history, enter the following:

   CLISH> history [username] [number_of_lines]

   username - Displays the command history for a particular user.
   number_of_lines - Displays the number of lines of history you want to view.

The information displayed from using the history command is:

   Time - Displays the time stamp as MM-DD-YYYY HH:MM.
   Status - Displays the status of the command as Success, Error, or Warning.
   Message - Displays the command description.
   Command - Displays the actual commands that were executed by you or another user.
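For example, the following command uses the documented syntax to display the last 10 commands that were executed by the user master:

   CLISH> history master 10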
Using the more command
The System> more command enables, disables, or checks the status of the more
filter. The default setting is enable, which lets you page through the text one screen
at a time.
To modify and view the more filter setting

◆ To modify and view the more filter setting, enter the following:

   System> more enable|disable|status

   enable - Enables the more filter on all of the nodes in the cluster.
   disable - Disables the more filter on all of the nodes in the cluster.
   status - Displays the status of the more filter.
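For example, to check the current setting and then disable the more filter on all of the nodes in the cluster, enter the following:

   System> more status
   System> more disable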
Section 2: Configuring Veritas Access

■ Chapter 2. Adding users or roles
■ Chapter 3. Configuring the network
■ Chapter 4. Configuring authentication services
Chapter 2. Adding users or roles
This chapter includes the following topics:
■ Adding Master, System Administrator, and Storage Administrator users
■ About user roles and privileges
■ About the naming requirements for adding new users
Adding Master, System Administrator, and Storage Administrator users

The following administrator roles are included with Veritas Access:

■ Master
■ System Administrator
■ Storage Administrator

You can add additional users with these roles. To add the different administrator roles, you must have master privilege.

Note: When adding a new user, you must assign a password.
To add a Master user

◆ Enter the following:

   Admin> user add username master

To add a System Administrator user

◆ Enter the following:

   Admin> user add username system-admin

To add a Storage Administrator user

◆ Enter the following:

   Admin> user add username storage-admin
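For example, to add a hypothetical user named admin1 with the System Administrator role (the user name is a placeholder), enter the following:

   Admin> user add admin1 system-admin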
To change a user's password

1. Enter the following command to change the password for the current user:

   Admin> passwd

   You are prompted to enter your old password first. If the password matches, then you are prompted to enter the new password for the current user.

2. Enter the following command to change the password for a user other than the current user:

   Admin> passwd [username]

   You are prompted to enter your old password first. If the password matches, then you are prompted to enter the new password for the user.
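For example, a Master user could reset the password for the hypothetical user admin1 that was added earlier:

   Admin> passwd admin1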
To display a list of current users

1. Enter the following to display the current user:

   Admin> show [username]

2. Enter the following to display a list of all the current users:

   Admin> show

   For example, enter the following to display the details of the administrator with the user name master:

   Admin> show master
To delete a user from Veritas Access
1
Enter the following if you want to display the list of all the current users before
deleting a user:
Admin> show
2
Enter the following to delete a user from Veritas Access:
Admin> user delete username
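For example, to remove a hypothetical user named user2 (the name is a placeholder for illustration only), you might enter:
Admin> show
Admin> user delete user2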
About user roles and privileges
Your privileges within Veritas Access are based on what user role (Master, System
Administrator, or Storage Administrator) you have been assigned.
The following table provides an overview of the user roles within Veritas Access.
Table 2-1 User roles within Veritas Access

Master
Masters are responsible for adding or deleting users, displaying users, and managing passwords. Only the Masters can add or delete other administrators.

System Administrator
System Administrators are responsible for configuring and maintaining the file system, NFS sharing, networking, clustering, setting the current date/time, and creating reports.

Storage Administrator
Storage Administrators are responsible for provisioning storage and exporting and reviewing reports.
The Support account is reserved for Technical Support use only, and it cannot be
created by administrators.
About the naming requirements for adding new
users
The following table provides the naming requirements for adding new Veritas Access
users.
Table 2-2 Naming requirements for adding new users

Starts with
Letter or an underscore (_). Must begin with an alphabetic character and the rest of the string should be from the following POSIX portable character set: ([A-Za-z_][A-Za-z0-9_-.]*[A-Za-z0-9_-.$]).

Length
Can be up to 31 characters. If user names are greater than 31 characters, you will receive the error "Invalid user name."

Case
Veritas Access CLI commands are case-insensitive (for example, the user command is the same as the USER command). However, user-provided variables are case-sensitive (for example, the username Master1 is not the same as the username MASTER1).

Can contain
Hyphens (-) and underscores (_) are allowed.
Chapter 3
Configuring the network
This chapter includes the following topics:
■ About configuring the Veritas Access network
■ About bonding Ethernet interfaces
■ Bonding Ethernet interfaces
■ Configuring DNS settings
■ About the IP addresses for the Ethernet interfaces
■ About Ethernet interfaces
■ Displaying current Ethernet interfaces and states
■ Configuring IP addresses
■ Configuring Veritas Access to use jumbo frames
■ Configuring VLAN interfaces
■ Configuring NIC devices
■ Swapping network interfaces
■ Excluding PCI IDs from the cluster
■ About configuring routing tables
■ Configuring routing tables
■ Changing the firewall settings
About configuring the Veritas Access network
Veritas Access has the following types of networks:
■ Private network
The network between the nodes of the cluster itself. The private network is not accessible to Veritas Access client nodes.
■ Public network
The public network is visible to all clients. Veritas Access uses static IP addresses for its public interface networking. Veritas Access does not support DHCP for public network configuration.
Veritas Access supports the following operations to manage the networking settings:
■ create bond
■ vlan
■ change IP addresses
■ add or remove network interfaces
■ swap or interchange existing network interfaces
About bonding Ethernet interfaces
Bonding associates a set of two or more Ethernet interfaces with one IP address.
The association improves network performance on each Veritas Access cluster
node by increasing the potential bandwidth available on an IP address beyond the
limits of a single Ethernet interface. Bonding also provides redundancy for higher
availability.
For example, you can bond two 1-gigabit Ethernet interfaces together to provide
up to 2 gigabits per second of throughput to a single IP address. Moreover, if one
of the interfaces fails, communication continues using the single Ethernet interface.
When you create a bond, you need to specify a bonding mode. In addition, for the
following bonding modes: 802.3ad, balance-rr, balance-xor, broadcast,
balance-tlb, and balance-alb, make sure that the base network interface driver
is configured correctly for the bond type. For type 802.3ad, the switch must be
configured for link aggregation.
Consult your vendor-specific documentation for port aggregation and switch setup. You can use the -s option in the Linux ethtool command to check if the base
driver supports the link speed retrieval option. The balance-alb bond mode type
works only if the underlying interface network driver enables you to set a link
address.
Note: An added IPv6 address may go into a TENTATIVE state while bonding
Ethernet interfaces with balance-rr, balance-xor, or broadcast bond modes.
While bonding with those modes, Veritas Access requires the switch to balance
incoming traffic across the ports, and not deliver looped back packets or duplicates.
To work around this issue, enable EtherChannel on your switch, or avoid using
these bond modes.
Table 3-1 Bonding mode

Index  Bonding mode   Fault tolerance  Load balancing  Switch setup  Ethtool/base driver support
0      balance-rr     yes              yes             yes           no
1      active-backup  yes              no              no            no
2      balance-xor    yes              yes             yes           no
3      broadcast      yes              no              yes           no
4      802.3ad        yes              yes             yes           yes (to retrieve speed)
5      balance-tlb    yes              yes             no            yes (to retrieve speed)
6      balance-alb    yes              yes             no            yes (to retrieve speed)
Note: When you create or remove a bond, SSH connections with Ethernet interfaces
involved in that bond may be dropped. When the operation is complete, you must
restore the SSH connections.
Bonding Ethernet interfaces
The Network> bond create and Network> bond remove operations involve bringing down the interfaces first and then bringing them back up. This may cause the SSH
connections that are hosted over those interfaces to terminate. Use the physical
console of the client rather than SSH when performing Network> bond create
and Network> bond remove operations.
To display a bond
◆ To display a bond and the algorithm that is used to distribute traffic among the bonded interfaces, enter the following:
Network> bond show
To create a bond
◆ To create a bond between sets of two or more Ethernet interfaces on all Veritas Access cluster nodes, enter the following:
Network> bond create interfacelist mode option
interfacelist
Specifies a comma-separated list of public Ethernet interfaces to
bond.
mode
Specifies how the bonded Ethernet interfaces divide the traffic.
option
Specifies a comma-separated option string.
Available only when the bond mode is 2 (balance-xor) or 4
(802.3ad)
xmit_hash_policy - specifies the transmit hash policy to use for
slave selection in balance-xor and 802.3ad modes.
If the option is not specified correctly, you get an error.
You can specify a mode either as a number or a character string, as follows:
0 (balance-rr)
This mode provides fault tolerance and load balancing. It transmits packets in order from the first available slave through the last.
1 (active-backup)
Only one slave in the bond is active. If the active slave fails, a different slave becomes active. To avoid confusing the switch, the bond's MAC address is externally visible on only one port (network adapter).
2 (balance-xor)
Transmits based on the selected transmit hash policy. The default policy is a simple hash of the source and destination MAC addresses (layer2). This mode provides load balancing and fault tolerance.
3 (broadcast)
Transmits everything on all slave interfaces and provides fault tolerance.
4 (802.3ad)
Creates aggregation groups with the same speed and duplex settings. It uses all slaves in the active aggregator based on the 802.3ad specification.
5 (balance-tlb)
Provides channel bonding that does not require special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. The current slave receives incoming traffic. If the receiving slave fails, another slave takes over its MAC address.
6 (balance-alb)
Includes balance-tlb plus receive load balancing (RLB) for IPv4 traffic. This mode does not require any special switch support. Receive load balancing is achieved through ARP negotiation.
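For example, assuming two public interfaces named pubeth0 and pubeth1 (interface names vary by installation and are shown here for illustration only), you might create an 802.3ad bond and then verify it as follows:
Network> bond create pubeth0,pubeth1 802.3ad
Network> bond show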
To remove a bond
◆ To remove a bond from all of the nodes in a cluster, enter the following:
Network> bond remove bondname
where bondname is the name of the bond configuration.
Private bonding
1
The Network> bond priv-create command creates the bond for the private
interfaces (priveth0 and priveth1) with mode 0 (balance-rr). Veritas Access
supports only mode 0 for the private interfaces. To get the advantage of private
network bonding, all private interfaces must be connected by a switch or a hub.
The switch or hub ensures that if one of the NICs goes down, the communication
continues with the other NIC. All services are brought offline before a bonded
interface is created. This command has to be run using the server console.
2
The Network> bond priv-remove command removes the bonding of private
interfaces (priveth0 and priveth1) for the cluster. All services are brought offline
before a bonded interface is removed. This command has to be run using the
server console.
Configuring DNS settings
The Domain Name System (DNS) service resolves names to IP addresses. You
can configure Veritas Access to use DNS to look up domain names and IP
addresses. You enable the DNS service for the cluster, then specify up to three
DNS servers.
To display DNS settings
◆ To display DNS settings, enter the following:
Network> dns show
To enable DNS service
◆ To enable Veritas Access hosts to do DNS lookups, enter the following command:
Network> dns enable
You can verify using the dns show command.
To disable DNS settings
◆ To disable DNS settings, enter the following:
Network> dns disable
You can verify using the dns show command.
To specify the IP addresses of the DNS name servers
◆ To specify the IP addresses of the DNS name servers used by the Veritas Access DNS service, enter the following command:
Network> dns set nameservers nameserver1 [nameserver2] [nameserver3]
You can verify using the dns show command.
To remove the name servers list used by DNS
◆ To remove the name servers list used by DNS, enter the following command:
Network> dns clear nameservers
You can verify using the dns show command.
To set the domain name for the DNS server
◆ To set the domain name for the DNS server, enter the following:
Network> dns set domainname domainname
where domainname is the domain name for the DNS server.
You can verify using the dns show command.
To allow multiple DNS search domains
◆ To allow multiple DNS search domains, enter the following:
Network> dns set searchdomains searchdomain1[,searchdomain2] [,searchdomain3]
where searchdomain1 is the first DNS search domain to be searched. Specify the search domains in the order in which the search domains should be used.
To configure multiple DNS search domains that have already been entered
◆ To configure multiple DNS search domains that have already been entered, add the existing domain name with the new domain name as comma-separated entries:
Network> dns set searchdomains domain1.access.com,domain2.access.com
You can verify using the dns show command.
To remove the domain name used by DNS
◆ To remove the domain name used by DNS, enter the following:
Network> dns clear domainname
You can verify using the dns show command.
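For example, a typical DNS configuration sequence might look like the following, where the IP addresses and domain name are placeholders for illustration only:
Network> dns set nameservers 10.10.10.10 10.10.10.11
Network> dns set domainname example.com
Network> dns enable
Network> dns show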
About the IP addresses for the Ethernet interfaces
Internet Protocol (IP) commands configure your routing tables, Ethernet interfaces,
and IP addresses, and display the settings.
To configure the Ethernet interfaces:
See “About Ethernet interfaces” on page 39.
About Ethernet interfaces
Each Ethernet interface must have a physical IP address associated with it. These
are usually supplied when the Veritas Access software is installed.
Each Ethernet interface can be configured with a virtual IP address for clustering
purposes in Veritas Access. This does not imply that each interface must have a
virtual IP to communicate with the network.
The physical address must be present before adding a virtual address. To add an
IPv6 address on an IPv4 cluster, you have to configure the IPv6 physical address
and then add the virtual address for the given interface.
Displaying current Ethernet interfaces and states
To display current Ethernet interfaces and states
◆ To display current configurations, enter the following:
Network> ip link show [nodename] [device]
nodename
Specifies which node of the cluster to display the attributes.
Enter all to display all the IP links.
device
Specifies which Ethernet interface on the node to display the
attributes.
To display all configurations, enter the following:
Network> ip link show
Configuring IP addresses
During installation, you specified a range of public IP addresses to be used for
physical interfaces. You also specified a range for virtual interfaces. You can see
which of these addresses are assigned to each node. You can use this procedure
to verify the IP addresses in your configuration. You can add additional IP addresses
if you want to add additional nodes and no other IP addresses are available.
To display all the IP addresses for the cluster
◆ To display all of a cluster's IP addresses, enter the following:
Network> ip addr show
The output headings are:
IP
Displays the IP addresses for the cluster.
Netmask
Displays the netmask for the IP address. Netmask is used for IPv4
addresses.
Specify an IPv4 address in the format AAA.BBB.CCC.DDD, where
each number ranges from 0 to 255.
Prefix
Displays the prefix used for IPv6 addresses. The value is an integer
in the range 0-128.
Device
Displays the name of the Ethernet interface for the IP address.
Node
Displays the node name associated with the interface.
Type
Displays the type of the IP address: physical or virtual.
Status
Displays the status of the IP addresses:
■ ONLINE
■ ONLINE (console IP)
■ OFFLINE
■ FAULTED
A virtual IP can be in the FAULTED state if it is already being used. It can also be in the FAULTED state if the corresponding device is not working on all nodes in the cluster (for example, a disconnected cable).
To add an IP address to a cluster
◆ To add an IP address to a cluster, enter the following:
Network> ip addr add ipaddr netmask | prefix type
[device] [nodename]
ipaddr
Specifies the IP address to add to the cluster.
Do not use physical IP addresses to access the Veritas Access cluster.
In case of failure, the IP addresses cannot move between nodes. A
failure could be either a node failure, an Ethernet interface failure, or a
storage failure.
You can specify either an IPv4 address or an IPv6 address.
netmask
Specifies the netmask for the IP address. Netmask is used for IPv4
addresses.
prefix
Specifies the prefix for the IPv6 address. The accepted range is 0-128
integers.
type
Specifies the IP address type, either virtual or physical.
If the type is virtual, the device is used to add the new IP address on that device. If the type is physical, the IP address is assigned to the given node on the given device. In this case, you have to specify the nodename.
device
Only use this option if you entered virtual for the type.
nodename
Any node of the cluster
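For example, to add a hypothetical virtual IP address on the pubeth0 interface (the address, netmask, and interface name are placeholders for illustration only), you might enter:
Network> ip addr add 192.168.10.15 255.255.255.0 virtual pubeth0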
To change an IP address to online on a specified node
◆ To change an IP address to online on a specified node, enter the following:
Network> ip addr online ipaddr nodename
ipaddr
Specifies the IP address that needs to be brought online. You can
specify either an IPv4 address or an IPv6 address.
nodename
Specifies the nodename on which the IP address needs to be
brought online. If you do not want to enter a specific nodename,
enter any with the IP address.
To modify an IP address
◆ To modify an IP address, enter the following:
Network> ip addr modify oldipaddr newipaddr netmask | prefix
oldipaddr
Specifies the old IP address to be modified, as either an IPv4
address or an IPv6 address. The specified oldipaddr must be
assigned to the cluster.
newipaddr
Specifies the new IP address, as either an IPv4 address or an
IPv6 address. The new IP address must be available.
netmask
Specifies the netmask for the new IP address. Netmask is used
for IPv4 addresses.
prefix
Specifies the prefix for the IPv6 address. The value is an integer
in the range 0-128.
To remove an IP address from the cluster
◆ To remove an IP address from the cluster, enter the following:
Network> ip addr del ipaddr
where ipaddr is either an IPv4 address or an IPv6 address.
Configuring Veritas Access to use jumbo frames
For the public Ethernet interfaces (for example, pubeth0 and pubeth1), you can display and change whether a link is up or down, and the Ethernet interface's Maximum Transmission Unit (MTU) value.
The MTU value controls the maximum transmission unit size for an Ethernet frame.
The standard maximum transmission unit size for Ethernet is 1500 bytes (without
headers). In supported environments, the MTU value can be set to larger values
up to 9000 bytes. Setting a larger frame size on an interface is commonly referred
to as using jumbo frames. Jumbo frames help reduce fragmentation as data is sent
over the network and in some cases, can also provide better throughput and reduced
CPU usage. To take advantage of jumbo frames, the Ethernet cards, drivers, and
switching must all support jumbo frames.
Configuring VLAN interfaces
The virtual LAN (VLAN) feature lets you create VLAN interfaces on the Veritas
Access nodes and administer them as any other VLAN interfaces. The VLAN
interfaces are created using Linux support for VLAN interfaces.
Use the Network> vlan commands to view, add, or delete VLAN interfaces.
Note: To use VLAN, your network must have VLAN-supported switches.
To display the VLAN interfaces
◆ To display the VLAN interfaces, enter the following:
Network> vlan show
To add a VLAN interface
◆ To add a VLAN interface, enter the following:
Network> vlan add device vlan_id
device
Specifies the network interface on which the VLAN interface will be added.
vlan_id
Specifies the VLAN ID which the new VLAN interface uses. Valid
values range from 1 to 4095.
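For example, assuming a public interface named pubeth0 and a VLAN ID of 100 (both are placeholders for illustration only), you might enter:
Network> vlan add pubeth0 100
Network> vlan show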
To delete a VLAN interface
◆ To delete a VLAN interface, enter the following:
Network> vlan del vlan_device
where the vlan_device is the VLAN name from the Network> vlan show
command.
Configuring NIC devices
To list NIC devices on a specified node
◆ To list NIC devices on a specified node, enter the following:
Network> device list nodename
where nodename is the specified node for which bus IDs and MAC addresses
for all devices are listed.
To add a NIC device to a Veritas Access cluster
◆ To add a NIC device to a Veritas Access cluster, enter the following:
Network> device add devicename
where devicename is the name of the device that you want to add.
When any eth# device is added, the eth# device is assigned a new pubeth# name in the Veritas Access cluster.
To remove a NIC device from a Veritas Access cluster
◆ To remove a NIC device from a Veritas Access cluster, enter the following:
Network> device remove devicename
where devicename is the name of the device you want to remove.
When a device is removed, all the physical IP addresses and virtual IP
addresses that are associated with the device are deleted from the specified
NIC device. All physical IP addresses are kept in a free list and will be available
for reuse; virtual IP addresses are not available for reuse. You need to re-add
the NIC device in cases of reuse.
You can use the Network> ip addr show command to display the list of IP
addresses associated with the device. You can see an UNUSED status beside
the IP addresses that are free (not used).
To rename a NIC device
◆ To rename a NIC device, enter the following:
Network> device rename old_name with new_name nodename
Only devices with the prefix eth can be renamed. The new device name must not already be present on the nodes of the Veritas Access cluster. In cases of mismatches in the names of newly added NICs in the Veritas Access cluster, you can rename those devices, and then add the devices to the Veritas Access cluster.
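For example, to rename a hypothetical device eth6 to eth4 on a node named node_01 (all names are placeholders for illustration only), you might enter:
Network> device rename eth6 with eth4 node_01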
To identify a NIC device
◆ To identify a NIC device, enter the following:
Network> device identify devicename nodename [timeout]
devicename
Specify the name of the device you want to identify.
nodename
Specify the node on which the device is located.
timeout
By default, the timeout value is 120 seconds.
To replace a NIC device from a Veritas Access cluster
1
Delete all the VIP addresses that are related to the NIC that you want to replace by using the ip addr del command. Enter the following:
Network> ip addr del <virtual IP>
2
Find out the name that is related to the NIC that is to be replaced by using the
device list command.
3
Remove the device from the Veritas Access configuration by using the device remove command.
4
Shut down the target node and replace the target NIC hardware. Then restart
the system.
5
If the new NIC name is not the same as the original device name, rename the new device to the original device name.
6
Add the new NIC device.
Network> device add <NIC device>
7
Add the VIP back to the device.
Network> ip addr add <virtual IP>
Swapping network interfaces
The Network> swap command can be used for swapping two network interfaces
of a node in a cluster. This command helps set up the cluster properly in cases
where the first node of a cluster cannot be pinged.
Figure 3-1 describes a scenario in which, by using the Network> swap command, you can use the more powerful 10G network interfaces to carry the public network load.
Figure 3-1 Scenario for using Network> swap for network interfaces (the figure shows Node 1 with 10G private interfaces priveth0 and priveth1 and 1G public interfaces pubeth0 and pubeth1, connected to the private network and to the public network gateway, with the connections before and after the swap)
A System Administrator can use the Network> swap command in the following
ways:
■ Multi-node cluster: You can swap one public interface with another.
■ Single-node cluster: You can swap a private interface with a public interface, swap two public interfaces, or swap two private interfaces.
If input to the Network> swap command contains one public and one private interface, and there are two separate switches for the private and the public network, then before you run the Network> swap command, the System Administrator has to exchange cable connections between these interfaces.
Running the Network> swap command requires stopping the given interfaces, which causes the following:
■ After you run the Network> swap command, all SSH connection(s) hosted on the input interfaces terminate.
■ If a public interface is involved when issuing the Network> swap command, all Virtual IP addresses (VIPs) hosted on that interface are brought down first, and are brought back up after Network> swap is complete.
■ If the Network> swap command is run remotely, due to SSH connection termination, its end status may not be visible to the end user. You can check the status of the Network> swap command under history, by reconnecting to the cluster.
Note: Veritas Access recommends not to use the Network> swap command when
active I/O load is present on the cluster.
To use the swap command
◆ To use the Network> swap command, enter the following:
Network> swap interface1 interface2 [nodename]
interface1
Indicates the name of the first network interface.
interface2
Indicates the name of the second network interface.
nodename
Indicates the name of the node. If nodename is not provided, the
Network> swap command is executed on the current node in
the cluster.
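For example, to swap two hypothetical interfaces pubeth0 and priveth0 on the current node of a single-node cluster (the interface names are placeholders for illustration only), you might enter:
Network> swap pubeth0 priveth0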
Excluding PCI IDs from the cluster
During the initial Veritas Access software installation on the first node, you can
exclude certain PCI IDs in your cluster to reserve them for future use. You may
want to exclude additional PCI IDs when you add additional nodes to the cluster.
You can add the PCI IDs to the exclusion list. The interface cards whose PCI IDs have been added to the PCI exclusion list are not used as private or public
interfaces for the subsequent cluster node install. During a new node install, the
remaining PCI bus interfaces are searched and added as public or private interfaces.
The Network> pciexclusion command can be used with different options:
■ The Network> pciexclusion show command displays the PCI IDs that have been selected for exclusion. It also provides information about whether a PCI ID has been excluded or not by displaying y (yes) or n (no) symbols corresponding to the node name. If the node is in the INSTALLED state, it displays the UUID of the node.
■ The Network> pciexclusion add pcilist command allows an administrator to add specific PCI IDs for exclusion. These values must be provided before the installation. The command excludes the PCI from the second node installation. pcilist is a comma-separated list of PCI IDs.
■ The Network> pciexclusion delete pci command allows an administrator to delete a given PCI ID from exclusion. This command must be used before the installation for it to take effect. The command is effective for the next node install.
The PCI ID bits format is hexadecimal (XXXX:XX:XX.X).
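For example, assuming a hypothetical PCI ID of 0000:0b:00.0 (shown for illustration only), you might enter:
Network> pciexclusion add 0000:0b:00.0
Network> pciexclusion show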
About configuring routing tables
Sometimes a Veritas Access cluster must communicate with network services (for
example, LDAP) using specific gateways in the public network. In these cases, you
must define routing table entries.
These entries consist of the following:
■ The target network node's IP address and accompanying netmask.
■ The gateway's IP address.
■ Optionally, a specific Ethernet interface via which to communicate with the target. This is useful, for example, if the demands of multiple remote clients are likely to exceed a single gateway's throughput capacity.
Configuring routing tables
To display the routing tables of the nodes in the cluster
◆ To display the routing tables of the nodes in the cluster, enter the following:
Network> ip route show [nodename]
where nodename is the node whose routing tables you want to display. To see
the routing table for all of the nodes in the cluster, enter all.
For example:
Network> ip route show all
Destination
Displays the destination network or destination host for which the route
is defined.
Gateway
Displays a network node equipped for interfacing with another network.
Genmask
Displays the netmask.
Flags
The flags are as follows:
U - Route is up
H - Target is a host
G - Use gateway
MSS
Displays maximum segment size. The default is 0. You cannot modify
this attribute.
Window
Displays the maximum amount of data the system accepts in a single
burst from the remote host. The default is 0. You cannot modify this
attribute.
irtt
Displays the initial round trip time with which TCP connections start.
The default is 0. You cannot modify this attribute.
Iface
Displays the interface. On UNIX systems, the device name lo refers
to the loopback interface.
To add to the route table
◆ To add a route entry to the routing table of nodes in the cluster, enter the following:
Network> ip route add nodename ipaddr netmask
| prefix via gateway [dev device]
nodename
Specifies the node to whose routing table the route is to be added.
To add a route path to all the nodes, use all in the nodename
field.
If you enter a node that is not a part of the cluster, an error
message is displayed.
ipaddr
Specifies the destination of the IP address.
You can specify either an IPv4 address or an IPv6 address.
If you enter an invalid IP address, then a message notifies you
before you fill in other fields.
netmask
Specifies the netmask associated with the IP address that is
entered for the ipaddr field.
Use a netmask value of 255.255.255.255 for the netmask to add
a host route to ipaddr.
prefix
Specifies the prefix for the IPv6 address. Accepted ranges are
0-128 integers.
via
This is a required field. You must type in the word.
gateway
Specifies the gateway IP address used for the route.
If you enter an invalid gateway IP address, then an error message
is displayed.
To add a route that does not use a gateway, enter a value of
0.0.0.0.
dev
Specifies the route device option. You must type in the word.
device
Specifies which Ethernet interface on the node the route path is added to. This variable is optional.
You can specify the following values:
■ any - Default
■ pubeth0 - Public Ethernet interface
■ pubeth1 - Public Ethernet interface
The Ethernet interface field is required only when you specify dev in the dev field.
If you omit the dev and device fields, Veritas Access uses a default Ethernet interface.
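For example, to add a route on all nodes to a hypothetical destination network through a hypothetical gateway (the addresses are placeholders for illustration only), you might enter:
Network> ip route add all 192.168.20.0 255.255.255.0 via 192.168.10.1
Network> ip route show all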
To delete route entries from the routing tables of nodes in the cluster
◆ To delete route entries from the routing tables of nodes in the cluster, enter the following:
Network> ip route del nodename ipaddr
netmask | prefix
nodename
Specifies the node from which the route entry is deleted.
To delete the route entry from all nodes, use the all option in this
field.
ipaddr
Specifies the destination IP address of the route entry to be
deleted.
You can specify either an IPv4 address or an IPv6 address.
If you enter an invalid IP address, a message notifies you before
you enter other fields.
netmask
Specifies the netmask for the destination IP address. Netmask is used for IPv4 addresses.
prefix
Specifies the prefix for the IPv6 address. Accepted ranges are
0-128 integers.
Changing the firewall settings
The Network> firewall command is used to view or change the firewall settings.
The firewall setting is enabled by default. If you want to allow connections on any
port from any IP, you have to disable the firewall setting. Applied rules do not work
when the firewall setting is disabled.
# firewall disable
The firewall setting can be enabled again to allow specific IPs to connect to the
ports while blocking other connections.
# firewall enable
The current status of the firewall setting (whether it is enabled or disabled) can be
displayed using the firewall status command.
# firewall status
See the Network> firewall man page for detailed examples.
Chapter 4
Configuring authentication services
This chapter includes the following topics:
■ About configuring LDAP settings
■ Configuring LDAP server settings
■ Administering the Veritas Access cluster's LDAP client
■ Configuring the NIS-related settings
■ Configuring NSS lookup order
About configuring LDAP settings
The Lightweight Directory Access Protocol (LDAP) is the protocol used to
communicate with LDAP servers. The LDAP servers are the entities that perform
the service. In Veritas Access, the most common use of LDAP is for user
authentication.
For sites that use an LDAP server for access or authentication, Veritas Access
provides a simple LDAP client configuration interface.
Before you configure Veritas Access LDAP settings, obtain the following LDAP
configuration information from your system administrator:
■ IP address or host name of the LDAP server. You also need the port number of the LDAP server.
■ Base (or root) distinguished name (DN), for example:
cn=employees,c=us
LDAP database searches start here.
■ Bind distinguished name (DN) and password, for example:
ou=engineering,c=us
This allows read access to portions of the LDAP database to search for information.
■ Base DN for users, for example:
ou=users,dc=com
This allows access to the LDAP directory to search for and authenticate users.
■ Base DN for groups, for example:
ou=groups,dc=com
This allows access to the LDAP database, to search for groups.
■ Base DN for Netgroups, for example:
ou=netgroups,dc=com
This allows access to the LDAP database, to search for Netgroups.
■ Root bind DN and password. This allows write access to the LDAP database, to modify information, such as changing a user's password.
■ Secure Sockets Layer (SSL). Configures a cluster to use the Secure Sockets Layer (SSL) protocol to communicate with the LDAP server.
■ Password hash algorithm, for example, md5, if a specific password encryption method is used with your LDAP server.
See “Configuring LDAP server settings” on page 53.
See “Administering the Veritas Access cluster's LDAP client” on page 56.
Configuring LDAP server settings
You can set the LDAP base Distinguished Name (base DN). LDAP records are
structured in a hierarchical tree. You access records through a particular path, in
this case, a Distinguished Name, or DN. The base DN indicates where in the LDAP
directory hierarchy you want to start your search.
Note: For Veritas Access to access an LDAP directory service, you must specify
the LDAP server DNS name or IP address.
To set the base DN for the LDAP server
◆ To set the base DN for the LDAP server, enter the following:
Network> ldap set basedn value
where value is the LDAP base DN in the following format:
dc=yourorg,dc=com
To set the LDAP server hostname or IP address
◆ To set the LDAP server hostname or IP address, enter the following:
Network> ldap set server value
where value is the LDAP server hostname or IP address.
To set the LDAP server port number
◆ To set the LDAP server port number, enter the following:
Network> ldap set port value
where value is the LDAP server port number.
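For example, assuming a hypothetical LDAP server and base DN (the host name, port, and DN values are placeholders for illustration only), you might enter:
Network> ldap set server ldap.example.com
Network> ldap set port 389
Network> ldap set basedn dc=example,dc=com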
To set Veritas Access to use LDAP over SSL
◆ To set Veritas Access to use LDAP over SSL, enter the following:
Network> ldap set ssl {on|off}
To set the bind DN for the LDAP server
◆ To set the bind DN for the LDAP server, enter the following:
Network> ldap set binddn value
where value is the LDAP bind DN in the following format:
cn=binduser,dc=yourorg,dc=com
The value setting is mandatory.
You are prompted to supply a password. You must use your LDAP server password.
To set the root bind DN for the LDAP server
◆ To set the root bind DN for the LDAP server, enter the following:
Network> ldap set rootbinddn value
where value is the LDAP root bind DN in the following format:
cn=admin,dc=yourorg,dc=com
You are prompted to supply a password. You must use your LDAP server password.
To set the LDAP users, groups, or netgroups base DN
◆ To set the LDAP users, groups, or netgroups base DN, enter the following:
Network> ldap set users-basedn value
Network> ldap set groups-basedn value
Network> ldap set netgroups-basedn value
users-basedn value
Specifies the value for the users-basedn. For example: ou=users,dc=example,dc=com (default)
groups-basedn value
Specifies the value for the groups-basedn. For example: ou=groups,dc=example,dc=com (default)
netgroups-basedn value
Specifies the value for the netgroups-basedn. For example: ou=netgroups,dc=example,dc=com (default)
For example:
Network> ldap set users-basedn ou=Users,dc=example,dc=com
Changes would be applicable after re-enable of LDAP service.
Command successfully completed.
To set the password hash algorithm
◆ To set the password hash algorithm, enter the following:
Network> ldap set password-hash {clear|crypt|md5}
To display the LDAP configured settings
◆ To display the LDAP configured settings, enter the following:
Network> ldap get {server|port|basedn|binddn|ssl|rootbinddn|
users-basedn|groups-basedn|netgroups-basedn|password-hash}
To clear the LDAP settings
◆ To clear the previously configured LDAP settings, enter the following:
Network> ldap clear {server|port|basedn|binddn|ssl|rootbinddn|
users-basedn|groups-basedn|netgroups-basedn|password-hash}
Administering the Veritas Access cluster's LDAP
client
You can display the Lightweight Directory Access Protocol (LDAP) client
configurations. LDAP clients use the LDAPv3 protocol to communicate with the
server.
To display the LDAP client configuration
◆ To display the LDAP client configuration, enter the following:
Network> ldap show [users|groups|netgroups]
users
Displays the LDAP users that are available in the Name Service
Switch (NSS) database.
groups
Displays the LDAP groups that are available in the NSS database.
netgroups
Displays the LDAP netgroups that are available in the NSS
database.
If you do not include one of the optional variables, the command displays all
the configured settings for the LDAP client.
To enable the LDAP client configuration
◆ To enable the LDAP client configuration, enter the following:
Network> ldap enable
LDAP clients use the LDAPv3 protocol for communicating with the server.
Enabling the LDAP client configures the Pluggable Authentication Module
(PAM) files to use LDAP. PAM is the standard authentication framework for
Linux.
To disable the LDAP client configuration
◆ To disable the LDAP client configuration, enter the following:
Network> ldap disable
LDAP clients use the LDAPv3 protocol for communicating with the server. This
command configures the PAM configuration files so that they do not use LDAP.
Configuring the NIS-related settings
Veritas Access supports Network Information Service (NIS), implemented in a NIS
server, as an authentication authority. You can use NIS to authenticate computers.
If your environment uses NIS, enable the NIS-based authentication on the Veritas
Access cluster.
Note: IPv6 addresses are not supported for NIS.
To display NIS-related settings
◆ To display NIS-related settings, enter the following:
Network> nis show [users|groups|netgroups]
users
Displays the NIS users that are available in the Veritas Access
cluster's NIS database.
groups
Displays the NIS groups that are available in the Veritas Access
cluster's NIS database.
netgroups
Displays the NIS netgroups that are available in the Veritas Access
cluster's NIS database.
To set the NIS domain name on all nodes in the cluster
◆ To set the NIS domain name on the cluster nodes, enter the following:
Network> nis set domainname [domainname]
where domainname is the domain name.
To set NIS server name on all nodes in the cluster
◆ To set the NIS server name on all cluster nodes, enter the following:
Network> nis set servername servername
where servername is the NIS server name. You can use the server's name or
IP address.
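For example, assuming a hypothetical NIS domain and server (both names are placeholders for illustration only), you might enter:
Network> nis set domainname nisdomain.example.com
Network> nis set servername nisserver.example.com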
To enable NIS clients
◆ To enable NIS clients, enter the following:
Network> nis enable
To view the new settings, enter the following:
Network> nis show
To disable NIS clients
◆ To disable NIS clients, enter the following:
Network> nis disable
Configuring NSS lookup order
Name Service Switch (NSS) is a Veritas Access cluster service that provides a
single configuration location to identify the services (such as NIS or LDAP) for
network information such as hosts, groups, netgroups, passwords, and shadow
files.
For example, host information may be on an NIS server. Group information may
be in an LDAP database.
The NSS configuration specifies which network services the Veritas Access cluster
should use to authenticate hosts, users, groups, and netgroups. The configuration
also specifies the order in which multiple services should be queried.
To display the current value set on NSS for all groups, hosts, netgroups,
passwd, and shadow files
◆ To display the current value set on nsswitch for all groups, hosts, netgroups, passwd, and shadow files, enter the following:
Network> nsswitch show
To change the order of group items
◆ To configure the NSS lookup order, enter the following:
Network> nsswitch conf {group|hosts|netgroups|passwd|shadow}
value1 [[value2]] [[value3]] [[value4]]
group
Selects the group file.
hosts
Selects the hosts file.
netgroups
Selects the netgroups file.
passwd
Selects the password.
shadow
Selects the shadow file.
value
Specifies the NSS lookup order with the following values:
■ value1 (required) - { files/nis/winbind/ldap }
■ value2 (optional) - { files/nis/winbind/ldap }
■ value3 (optional) - { files/nis/winbind/ldap }
■ value4 (optional) - { files/nis/winbind/ldap }
For example:
Network> nsswitch conf group nis files
Network> nsswitch show
To select DNS, you must use the following command:
Network> nsswitch conf hosts
Section 3
Managing Veritas Access storage
■ Chapter 5. Configuring storage
■ Chapter 6. Configuring data integrity with I/O fencing
■ Chapter 7. Configuring ISCSI
Chapter 5
Configuring storage
This chapter includes the following topics:
■ About storage provisioning and management
■ About Flexible Storage Sharing
■ Displaying information for all disk devices associated with the nodes in a cluster
■ About configuring storage pools
■ Configuring storage pools
■ About quotas for usage
■ About quotas for CIFS home directories
■ Workflow for configuring and managing storage using the Veritas Access CLI
■ Displaying WWN information
■ Initiating host discovery of LUNs
■ Importing new LUNs forcefully for new or existing pools
■ Increasing the storage capacity of a LUN
■ About configuring disks
■ Configuring disks
■ Formatting or reinitializing a disk
■ Removing a disk
About storage provisioning and management
When you provision storage, you want to be able to assign the appropriate storage
for the particular application. Veritas Access supports a variety of storage types.
To help the users that provision the storage to select the appropriate storage, you
classify the storage into groups called storage pools. A storage pool is a user-defined
way to group the disks that have similar characteristics.
Veritas Access supports a wide variety of storage arrays, direct attached storage
as well as in-server SSDs and HDDs. During the initial configuration, you add the
disks to the Veritas Access nodes. For a storage array, a disk is a LUN from the
storage array. For best performance and resiliency, each LUN should be provisioned
to all Veritas Access nodes. Local disks and fully shared disks have unique names,
but partially shared disks across nodes may have the same name. Make sure that
you do not assign LUNs from the same enclosure to different nodes partially.
Before you can provision storage to Veritas Access, the physical LUNs must be set
up and zoned for use with the Veritas Access cluster. The storage array administrator
normally allocates and zones the physical storage.
Veritas Access does not support thin reclamation disks.
After the disks are correctly discovered by Veritas Access, you assign the disks to
storage pools. You create a file system on one or more storage pools. You can
mirror across different pools. You can also create tiers on different pools, and use
SmartTier to manage file system data across those tiers.
You can also use local disks that are shared over the network. Both DAS disks and
SAN disks (LUNs) can be used by the same cluster, and you can have a mix of
DAS and SAN disks in the same storage pool.
See “About Flexible Storage Sharing” on page 62.
About Flexible Storage Sharing
Flexible Storage Sharing (FSS) enables network sharing of local storage, cluster
wide. You can use both DAS disks and SAN disks (LUNs) in any storage pool that
you define. Multiple storage pools can have DAS disks, and any storage pool can
have a mix of DAS and SAN disks. FSS allows network shared storage to co-exist
with physically shared storage, and file systems can be created using both types
of storage.
Note: For FSS to work properly, ensure that the DAS disks in the servers are
compliant with SCSI standards, which guarantees having a unique disk identifier
(UDID). If you do not have unique UDIDs, you may run into unexpected behavior.
Use the following CLISH command to list all of the disks and their unique UDIDs.
The UDID is displayed under the ID column.
Storage> disk list detail
Disk Pool Enclosure Array Type Size (Use%) Transport ID Serial Number
Displaying information for all disk devices
associated with the nodes in a cluster
You can display disk information for the disk devices associated with the nodes in
the Veritas Access cluster. If local disks are present, the information includes entries
for the local disks.
See the storage_disk(1) man page for the detailed examples.
The information displayed depends on the form of the command that you use. The
following information is available:
Disk
Indicates the disk name.
Serial Number
Indicates the serial number for the disk.
Enclosure
Indicates the type of storage enclosure.
Size
Indicates the size of the disk.
Use%
Indicates the percentage of the disk that is being used.
Transport
Indicates transport protocol values like SCSI, FC, and other values.
ID
The ID column consists of the following four fields, separated by ":":
■ VendorID - Specifies the name of the storage vendor, for example, HITACHI, IBM, EMC, and so on.
■ ProductID - Specifies the ProductID based on the vendor. Each vendor manufactures different products. For example, HITACHI has HDS5700, HDS5800, and HDS9200 products. These products have ProductIDs such as DF350, DF400, and DF500.
■ TargetID - Specifies the TargetID. Each port of an array is a target. Two different arrays or two ports of the same array have different TargetIDs. TargetIDs start from 0.
■ LunID - Specifies the ID of the LUN. This should not be confused with the LUN serial number. LUN serial numbers uniquely identify a LUN in a target, whereas a LunID uniquely identifies a LUN in an initiator group (or host group). Two LUNs in the same initiator group cannot have the same LunID. For example, if a LUN is assigned to two clusters, then the LunID of that LUN can be different in different clusters, but the serial number is the same.
Array Type
Indicates the type of storage array and can contain any one of the three values: Disk for JBODs, Active-Active, and Active-Passive.
To display a list of disks and nodes
◆ To display a list of disks and nodes, enter the following:
Storage> disk list
This form of the command displays local disk information for all nodes in the cluster.
To display the disk information
◆ To display the disk information, enter the following:
Storage> disk list detail
This form of the command displays local disk information from all the nodes in the cluster.
To display the disk list paths
◆ To display the multiple paths of the disks, enter the following:
Storage> disk list paths
This form of the command displays local disk information from all the nodes in the cluster.
About configuring storage pools
Veritas Access uses storage pools to provision storage. Pools are a logical construct rather than an architectural component; they are loosely grouped collections of disks.
In the Veritas Access context, a disk is a LUN provisioned from a storage array.
Each LUN should be provisioned to all Veritas Access nodes. Disks must be added
to pools prior to use.
During the initial configuration, you create storage pools, discover disks, and assign the disks to pools. Disk discovery and pool assignment are done once. Veritas
Access propagates disk information to all the cluster nodes.
You must first create storage pools that can be used to build file systems on.
By default, all of the storage pools in Veritas Access share the same configuration.
Copies of the configuration reside on disks in the storage pools. The first storage
pool you create uses the default configuration. You can create additional storage
pools to be part of that default configuration or to be isolated. An isolated storage
pool protects the pool from losing the associated metadata even if all configuration
disks in the main storage pool fail. If isolated storage pools exist, you cannot remove
the disks in the non-isolated pool or destroy the last non-isolated pool.
Configuring storage pools
A storage pool is a group of disks that Veritas Access uses for allocation. Before
creating a file system, you must create a storage pool.
To create the storage pool used to create a file system
1
List all of the available disks, and identify which ones you want to assign to
which pools.
Storage> disk list
2
To create a storage pool, enter the following:
Storage> pool create pool_name disk1[,disk2,...] [isolated=yes|no]
pool_name
Specifies what the created storage pool will be named. The
storage pool name should be a string.
disk1, disk2,...
Specifies the disks to include in the storage pool. If the
specified disk does not exist, an error message is displayed.
Use the Storage> disk list command to view the
available disks.
Each disk can only belong to one storage pool. If you try to
add a disk that is already in use, an error message is
displayed.
To specify additional disks to be part of the storage pool, use
a comma with no space in between.
isolated=yes|no
Optional. Specifies whether or not the storage pool is isolated
from other storage pools. Isolating the storage pool means
that the configuration information is not shared. By default,
storage pools are not isolated.
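For example, assuming two available disks named disk_0 and disk_1 as reported by the Storage> disk list command (the disk and pool names are placeholders for illustration only), you might enter:
Storage> pool create pool1 disk_0,disk_1
Storage> pool list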
To list your pools
◆ To list your pools, enter the following:
Storage> pool list
If a node is down, the Storage> pool list command shows local disks of
that node.
To rename a pool
◆ To rename a pool, enter the following:
Storage> pool rename old_name new_name
old_name
Specifies the name for the existing pool that will be changed. If
the old name is not the name of an existing pool, an error message
is displayed.
new_name
Specifies the new name for the pool. If the specified new name
for the pool is already being used by another pool, an error
message is displayed.
To destroy a storage pool
1
Because you cannot destroy an Unallocated storage pool, you need to remove
the disk from the storage pool using the Storage> pool rmdisk command
prior to trying to destroy the storage pool.
See “Configuring disks” on page 74.
If you want to move the disk from the unallocated pool to another existing pool,
you can use the Storage> pool mvdisk command.
See “Configuring disks” on page 74.
2
To destroy a storage pool, enter the following:
Storage> pool destroy pool_name
Where pool_name specifies the storage pool to delete. If the specified
pool_name is not an existing storage pool, an error message is displayed.
If a node is down temporarily, it is not a good practice to destroy a storage pool
that contains local disks of that node.
Note: You cannot destroy the last non-isolated pool if isolated pools exist.
To list free space for pools
◆ To list free space for your pool, enter the following:
Storage> pool free [pool_name]
Where pool_name specifies the pool for which you want to display free space
information.
If a specified pool does not exist, an error message is displayed.
If pool_name is omitted, the free space for every pool is displayed, but
information for specific disks is not displayed.
About quotas for usage
Disk quotas limit the usage for users or user groups. You can configure disk quotas
for file systems or for CIFS home directories.
Note: Quotas work over NFS, but quota reporting and quota details are not visible
over NFS.
Users and groups visible through different sources of name service lookup
(nsswitch), local users, LDAP, NIS, and Windows users can be configured for file
systems or CIFS home directory quotas.
There are two types of disk quotas:
■ Usage quota (numspace) - limits the amount of disk space that can be used on a file system.
The numspace quota value must be an integer with a unit. The minimum unit is KB. VxFS calculates numspace quotas based on the number of KBs. The range for numspace is from 1K to 9007199254740991 (2^53 - 1) K.
■ Inode quota (numinodes) - limits the number of inodes that can be created on a file system.
An inode is a data structure in a UNIX or UNIX-like file system that describes the location of some or all of the disk blocks allocated to the file.
The numinodes quota value must be an integer without a unit, and the range is from 1 to 9999999999999999999 (19 digits).
0 is valid for numspace and numinodes, which means the quota is infinite.
Veritas Access supports disk quota limits greater than 2 TB.
In addition to setting a limit on disk quotas, you can also define a warning level, or
soft quota, whereby the Veritas Access administrator is informed that they are
nearing their limit, which is less than the effective limit, or hard quota. Hard quota
limits can be set so that a user is strictly not allowed to cross quota limits. A soft
quota limit must be less than a hard quota limit for any type of quota.
Note: The alert for when a hard limit quota or a soft limit quota is reached in Veritas
Access is not sent out immediately. The hard limit quota or soft limit quota alert is
generated by a cron job scheduled to run daily at midnight.
About quotas for CIFS home directories
You use Storage> quota cifshomedir commands to configure quotas for CIFS
home directories. Users and groups visible through different sources of name service
lookup (nsswitch), local users, LDAP, NIS, and Windows users can be configured
for CIFS home directory quotas.
Default values are entered in a configuration file only. The actual application of the
quota is done with the set and setall commands using the default values provided.
When a CIFS home directory file system is changed, quota information for a user's
home directory is migrated from the existing home directory file system to the new
home directory file system.
Quota migration results are based on the following logic:
■ Case 1:
In the case where the existing home directory file system is NULL, you can set the new home directory file system to be multiple file systems (for example, fs1, fs2). If the multiple file systems previously had different quota values, the quota status and values from the first file system are migrated to the other file systems in the new home directory. The first file system is the template. Only the user/group quota values that existed on the first file system are migrated. Other user/group quota values remain the same on the other file system.
For example, assume the following:
■ The new home directory file systems are fs1 and fs2.
■ user1, user2, and user3 have quota values on fs1.
■ user2, user3, and user4 have quota values on fs2.
For the migration, user/group quota values for user1, user2, and user3 are migrated from fs1 to fs2. Quota values for user4 are kept the same on fs2, and user4 has no quota values on fs1.
■ Case 2:
When the existing home directory file systems are already set, and you change the file systems for the home directory, the quota status and values need to be migrated from the existing home directory file systems to the new file systems.
For this migration, the first file system in the existing home directory acts as the template for migrating quota status and values.
For example, if the existing home directory file systems are fs1 and fs2, and the file systems are changed to fs2, fs3, and fs4, then the user/group quota values on fs1 are migrated to fs3 and fs4. Other user/group values on fs3 and fs4 remain the same.
Workflow for configuring and managing storage
using the Veritas Access CLI
Figure 5-1 describes configuring and managing storage using the Veritas Access
CLI.
See the Veritas Access manual pages for the detailed syntax for completing the
operations.
Figure 5-1 Workflow for configuring and managing Veritas Access storage using the CLI (the figure shows the following steps and commands):
■ Create a storage pool: Storage> pool create
■ Create a file system: Storage> fs create
■ Configure disks: Storage> pool adddisk
■ Create a CIFS, NFS, or S3 share: <Protocol name> share add
■ Add parameters to your share: <Protocol name> share add
Displaying WWN information
The Storage> hba (Host Bus Adapter) command displays World Wide Name
(WWN) information for all of the nodes in the cluster. If you want to find the WWN
information for a particular node, specify the node name (host name).
To display WWN information
◆ To display the WWN information, enter the following:
Storage> hba [host_name]
where you can use the host_name variable if you want to find WWN information for a particular node.
To display WWN information for all the running nodes in the cluster, enter the following:
Storage> hba
To display WWN information for a particular node, enter the following:
Storage> hba [host_name]
The command output includes the following columns:
HBA_Node_Name: Displays the node name for the Host Bus Adapter (HBA).
WWN: Displays World Wide Name (WWN) information.
State: Available values include online and offline.
Speed: Displays the speed per second.
Support_Classes: Displays the class value from /sys/class/fc_host/${host}/supported_classes.
Transmitted_FC_Frames: Displays a value equal to the number of total transmitted serial attached SCSI frames across all protocols.
Received_FC_frames: Displays a value equal to the number of total received serial attached SCSI frames across all protocols.
Link_Failure_Count: Displays a value equal to the value of the LINK FAILURE COUNT field of the Link Error Status.
Initiating host discovery of LUNs
The Storage> scanbus command scans all of the SCSI devices connected to all
of the nodes in the cluster. When you add new storage to your devices, you must
scan for new SCSI devices. You only need to issue the command once and all of
the nodes discover the newly added disks. The scanbus command updates the
device configurations without interrupting the existing I/O activity. The scan does
not inform you if there is a change in the storage configuration. You can see the
latest storage configuration using the Storage> disk list command.
You do not need to reboot after scanbus has completed.
To scan SCSI devices
◆ To scan the SCSI devices connected to all of the nodes in the cluster, enter the following:
Storage> scanbus
Importing new LUNs forcefully for new or existing
pools
The Storage> scanbus force command tries to import Logical Unit Numbers (LUNs) forcefully. This may help when the Storage> scanbus command alone does not work.
To import LUNs forcefully
◆ To import LUNs forcefully, enter the following:
Storage> scanbus [force]
Increasing the storage capacity of a LUN
The Storage> disk grow command lets you increase the storage capacity of a
previously created LUN on a storage array disk.
Warning: When increasing the storage capacity of a disk, make sure that the storage array does not reformat it. Reformatting destroys the data on the disk. For help, contact your Storage Administrator.
To increase the storage capacity of a LUN
1 Increase the storage capacity of the disk on your storage array. Contact your Storage Administrator for assistance.
2 Run the Veritas Access Storage> scanbus command to make sure that the disk is connected to the Veritas Access cluster.
See “Initiating host discovery of LUNs” on page 72.
3 To increase the storage capacity of the LUN, enter the following:
Storage> disk grow disk_name
where disk_name is the name of the disk.
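For example, to grow a disk that is named disk_5 (a hypothetical name; use the disk name shown in the Storage> disk list output), enter:
Storage> disk grow disk_5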
About configuring disks
Disks and pools can be specified in the same command provided the disks are part
of an existing storage pool.
The pool and disk that are specified first are allocated space before other pools
and disks.
If the specified disk is larger than the space allocated, the remainder of the space
is still utilized when another file system is created spanning the same disk.
Configuring disks
To add a disk
◆ To add a new disk to an existing pool, enter the following:
Storage> pool adddisk pool_name disk1[,disk2,...]
pool_name: Specifies the pool into which the disk should be added.
disk1,disk2,...: Specifies the disks to be added to the pool. To add additional disks, use a comma with no spaces in between. A disk can only be added to one pool; if the entered disk is already in a pool, an error message is displayed.
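For example, the following command adds two disks to an existing pool. The pool and disk names are hypothetical placeholders; use the names that are configured in your cluster:
Storage> pool adddisk pool1 disk_3,disk_4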
To move disks from one pool to another
◆ To move a disk from one pool to another, or from an unallocated pool to an existing pool, enter the following:
Storage> pool mvdisk src_pool dest_pool disk1[,disk2,...]
src_pool: Specifies the source pool to move the disks from. If the specified source pool does not exist, an error message is displayed.
dest_pool: Specifies the destination pool to move the disks to. If the specified destination pool does not exist, a new pool is created with the specified name, and the disks are moved to that pool.
disk1,disk2,...: Specifies the disks to be moved. To specify multiple disks, use a comma with no space in between. If a specified disk is not part of the source pool or does not exist, an error message is displayed. If one of the specified disks does not exist, none of the specified disks are moved.
If all of the disks for the pool are moved, the pool is removed (deleted from the system), since there are no disks associated with the pool.
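For example, the following command moves a single disk from one pool to another. The pool and disk names are hypothetical placeholders:
Storage> pool mvdisk pool1 pool2 disk_3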
To remove a disk
1 To remove a disk from a pool, enter the following:
Storage> pool rmdisk disk1[,disk2,...]
where disk1,disk2,... specifies the disk(s) to be removed from the pool.
An unallocated pool is a reserved pool for holding disks that are removed from other pools.
2 To remove additional disks, use a comma with no spaces in between. For example:
Storage> pool rmdisk disk1,disk2
Formatting or reinitializing a disk
You can format or reinitialize a disk. If the disk does not belong to any group, the Storage> disk format command erases the first 100 MB of space on the disk(s). You can format multiple disks at once.
If a DAS disk is formatted, it is exported to all the nodes in the cluster. DAS disks
cannot be added to storage pools if they are not formatted.
To reformat or reinitialize a disk
◆ To reformat or reinitialize a disk, enter the following:
Storage> disk format disk1
where disk1 is the disk that you want to format or reinitialize.
Removing a disk
The Storage> disk remove command allows you to remove disks from a cluster.
This command is helpful in situations when the disk attributes are incorrectly listed
in Veritas Access.
Note: Only the disks that are not a part of a pool can be removed.
The Storage> disk remove command does not destroy the data on the disk; it only removes the disk from the system's configuration. Rebooting the cluster or running scanbus brings the disk back into the system's configuration. To remove the disk permanently from the system's configuration, you should remove the disk's mapping from the array.
To remove a disk from a cluster
◆ To remove a disk from a cluster, enter the following:
Storage> disk remove disk1[,disk2,...]
disk1: Indicates the first disk name that you want to remove from the cluster.
disk2: Indicates the second disk name that you want to remove from the cluster.
Disk names are comma-separated without any spaces between the disk names.
Chapter 6
Configuring data integrity with I/O fencing
This chapter includes the following topics:
■ About I/O fencing
■ Configuring disk-based I/O fencing
■ Using majority-based fencing
About I/O fencing
In the Veritas Access cluster, one method of communication between the nodes is
conducted through heartbeats over private links. If the two nodes cannot
communicate, the two nodes cannot verify each other's state. Neither node can
distinguish if the failed communication is because of a failed link or a failed partner
node. The network breaks into two networks that cannot communicate with each
other but do communicate with the central storage. This condition is referred to as
the "split-brain" condition.
I/O fencing protects data integrity if the split-brain condition occurs. I/O fencing
determines which nodes retain access to the shared storage and which nodes are
removed from the cluster, to prevent possible data corruption.
In Veritas Access, I/O fencing has the following modes:
■ Disk-based I/O fencing uses coordinator disks for arbitration in the event of a network partition. Coordinator disks are standard disks or LUNs that are set aside for use by the I/O fencing driver. The coordinator disks act as a global lock device during a cluster reconfiguration. This lock mechanism determines which node is allowed to fence off data drives from other nodes. A system must eject a peer from the coordinator disks before it can fence the peer from the data drives. Racing for control of coordinator disks is how fencing helps prevent split-brain. Coordinator disks cannot be used for any other purpose. You cannot store data on them.
To use the disk-based I/O fencing feature, you enable fencing on each node in the cluster. Disk-based I/O fencing always requires an odd number of disks, starting with three disks. You must also specify the three disks to use as coordinator disks. The minimum configuration must be a two-node cluster with Veritas Access software installed and more than three disks. Three of the disks are used as coordinator disks and the rest of the disks are used for storing data.
See “Configuring disk-based I/O fencing” on page 78.
■ Majority-based I/O fencing provides support for high availability when there are no additional servers or shared SCSI-3 disks that can act as coordination points. The cluster must have an odd number of nodes. In case a split-brain condition occurs, the sub-cluster with more than half of the nodes remains online. If a sub-cluster has less than half of the nodes, then it panics itself.
For Veritas Access, majority-based fencing is used for Flexible Storage Sharing. Majority-based I/O fencing is administered only with the CLISH.
See “Using majority-based fencing” on page 80.
Configuring disk-based I/O fencing
To use the disk-based I/O fencing feature, the minimum configuration must be a
two-node cluster with Veritas Access software installed and more than three disks.
Three disks are used as coordinator disks and the rest of the disks are used for
storing data.
Enabling I/O fencing configures disk-based fencing if shared disks are present.
Note: Enabling I/O fencing causes a disruption of Veritas Access services. It is
suggested to bring down the Veritas Access services, enable I/O fencing, and then
resume Veritas Access services.
To configure I/O fencing
1 To check the status of I/O fencing, enter the following:
Storage> fencing status
In the following example, I/O fencing is configured on the three disks Disk_0, Disk_1, and Disk_2, and the column header Coord Flag On indicates that these disks are in good condition. If you check the Storage> disk list output, it will be in the OK state.
2 If there are not three coordinator disks, you must add coordinator disks. You can add disks and enable fencing at the same time with the following command:
Storage> fencing on disk disk1,disk2,disk3
You may still provide three disks for fencing if three coordinator disks already exist. This will, however, remove the three coordinator disks previously used for fencing, and configure I/O fencing on the new disks.
Replacing an existing coordinator disk
You can replace a coordinator disk with another disk. The replacement disk must
not be in a failed state, and must not be in use by an existing pool.
Note: If the disk being replaced is in a failed state, then you must delete the disk
from the array. If the failed disk comes up and works properly, it can lead to an
even number of fencing disks, and this affects the functionality.
To replace an existing coordinator disk
◆ To replace the existing coordinator disk, enter the following:
Storage> fencing replace src_disk dest_disk
where src_disk is the source disk and dest_disk is the destination disk.
Disabling I/O fencing
You can disable I/O fencing on all of the nodes. This operation does not free up
the coordinator disks.
Note: Disabling I/O fencing causes a disruption of Veritas Access services. It is
suggested to bring down the Veritas Access services, disable I/O fencing, and then
resume Veritas Access services.
To disable I/O fencing
◆ To disable I/O fencing, enter the following:
Storage> fencing off
Destroying the coordinator pool
The Storage> fencing destroy command destroys the coordinator pool if I/O fencing is disabled.
Note: This operation is not supported for a single-node cluster.
To destroy the coordinator pool
◆ To destroy the coordinator pool, enter the following:
Storage> fencing destroy
Using majority-based fencing
For Flexible Storage Sharing (FSS), you are prompted for the type of fencing
(majority-based or disk-based) that you want to set. Majority-based fencing does
not require configuring a coordinator pool or coordinator disks. Enabling I/O fencing
configures majority-based fencing if no shared disks are present.
Note: Enabling or disabling I/O fencing causes a disruption of Veritas Access
services. Veritas suggests that you bring down the Veritas Access services, enable
or disable I/O fencing, and then resume Veritas Access services.
To check the status of I/O fencing
◆ Check the status of I/O fencing while I/O fencing is not enabled.
Storage> fencing status
To enable majority-based fencing
1 Enable majority-based I/O fencing.
Storage> fencing on majority
2 Check the status of I/O fencing after enabling I/O fencing.
Storage> fencing status
To disable majority-based I/O fencing
◆ Disable majority-based I/O fencing.
Storage> fencing off
Chapter 7
Configuring ISCSI
This chapter includes the following topics:
■ About iSCSI
■ Configuring the iSCSI initiator
■ Configuring the iSCSI initiator name
■ Configuring the iSCSI devices
■ Configuring discovery on iSCSI
■ Configuring the iSCSI targets
■ Modifying tunables for iSCSI
About iSCSI
The Internet Small Computer System Interface (iSCSI) is an Internet protocol-based
storage networking standard that links data storage facilities. By carrying SCSI
commands over IP networks, iSCSI facilitates data transfers over Intranets and
manages storage over long distances.
The iSCSI feature allows Veritas Access servers to use iSCSI disks as shared
storage.
Configuring the iSCSI initiator
To display the iSCSI initiator service
◆ To display the status of the iSCSI initiator service, enter the following:
Storage> iscsi status
For example:
iSCSI Initiator Status on ACCESS_01 : ONLINE
iSCSI Initiator Status on ACCESS_02 : ONLINE
To start the iSCSI initiator service
◆ To start the iSCSI initiator service, enter the following:
Storage> iscsi start
For example:
Storage> iscsi start
Storage> iscsi status
iSCSI Initiator Status on ACCESS_01 : ONLINE
iSCSI Initiator Status on ACCESS_02 : ONLINE
To stop the iSCSI initiator service
◆ To stop the iSCSI initiator service, enter the following:
Storage> iscsi stop
For example:
Storage> iscsi stop
Storage> iscsi status
iSCSI Initiator Status on ACCESS_01 : OFFLINE
iSCSI Initiator Status on ACCESS_02 : OFFLINE
Configuring the iSCSI initiator name
Veritas Access generates iSCSI initiator names for each node.
You can set the prefix that Veritas Access uses to generate initiator names. Veritas
Access names each initiator with this prefix followed by the node number of the
node.
To display the iSCSI initiator names
◆ To display the iSCSI initiator names, enter the following:
Storage> iscsi initiator name list
For example:
Storage> iscsi initiator name list
Node        Initiator Name
----        --------------
ACCESS_01   iqn.2009-05.com.test:test.1
ACCESS_02   iqn.2009-05.com.test:test.2
To configure the iSCSI initiator name
◆ To configure the iSCSI initiator name prefix, enter the following:
Storage> iscsi initiator name setprefix initiatorname-prefix
where initiatorname-prefix is a name that conforms to the naming rules for initiator and target names as specified in RFC3721. Initiator names for nodes in the cluster are generated by appending the node number to this prefix.
For example:
Storage> iscsi initiator name setprefix iqn.2009-05.com.test:test
Configuring the iSCSI devices
The iSCSI initiator contains a list of network devices (network interfaces) from which
connections are made to targets.
You can add or delete devices from this list.
When you add a device for use with the iSCSI initiator, iSCSI initiator connections
use this device to connect to the target. If there are any existing targets, then the
iSCSI initiator initiates a connection to all targets by using the newly set devices.
When you delete a device from the iSCSI configuration, any existing connections to targets by way of that device are terminated. If there are existing targets, you cannot delete the last device in the iSCSI initiator configuration.
To display the list of devices
◆ To display the list of devices, enter the following:
Storage> iscsi device list
For example:
Storage> iscsi device list
Device
------
pubeth0
pubeth1
To add an iSCSI device
◆ To add an iSCSI device, enter the following:
Storage> iscsi device add device
where device is the device where the operation takes place.
For example:
Storage> iscsi device add pubeth1
Storage> iscsi device list
Device
------
pubeth0
pubeth1
To delete an iSCSI device
◆ To delete an iSCSI device, enter the following:
Storage> iscsi device delete device
where device is the device where the operation takes place.
For example:
Storage> iscsi device delete pubeth1
Storage> iscsi device list
Device
------
pubeth0
Configuring discovery on iSCSI
The iSCSI initiator contains a list of iSCSI target discovery addresses.
To display the iSCSI discovery addresses
◆ To display the iSCSI discovery addresses, enter the following:
Storage> iscsi discovery list
For example:
Storage> iscsi discovery list
Discovery Address
-----------------
192.168.2.14:3260
192.168.2.15:3260
To add a discovery address to the iSCSI initiator
1 To add a discovery address to the iSCSI initiator, enter the following:
Storage> iscsi discovery add discovery-address
where:
discovery-address: The target address at which an initiator can request a list of targets using a SendTargets text request as specified in the iSCSI protocol of RFC3720. You can specify either an IPv4 address or an IPv6 address. Optionally, you can specify a port with the IP address. For example:
192.168.0.4
192.168.0.4:3260
2001:c90::211:9ff:feb8:a9e9
[2001:c90::211:9ff:feb8:a9e9]:3260
If no port is specified, the default port 3260 is used. Verify that your firewall allows you to access the target location through the port. For example:
# telnet discovery-address 3260
For example:
Storage> iscsi discovery add 192.168.2.15:3260
Discovery CHAP credentials for ACCESS_1:
Outgoing CHAP Username : root
Outgoing CHAP Password : ********
Incoming CHAP Username :
Authentication succeeded.
Discovered Targets
------------------
iqn.2001-04.com.example:storage.disk2.sys3.xyz
iqn.2001-04.com.example:storage.disk3.sys3.xyz
iqn.2001-04.com.example:storage.disk4.sys3.xyz
iqn.2001-04.com.example:storage.disk5.sys3.xyz
Logging into target iqn.2001-04.com.example:storage.disk2.sys3.xyz
Logging into target iqn.2001-04.com.example:storage.disk3.sys3.xyz
Logging into target iqn.2001-04.com.example:storage.disk4.sys3.xyz
Logging into target iqn.2001-04.com.example:storage.disk5.sys3.xyz
2 To verify the addition of the discovery address, display the discovery addresses.
Storage> iscsi discovery list
For example:
Storage> iscsi discovery list
Discovery Address
-----------------
192.168.2.14:3260
192.168.2.15:3260
To delete an iSCSI discovery address
1 To delete the targets discovered using this discovery address, enter the following:
Storage> iscsi discovery del discovery-address
where:
discovery-address: The target address at which an initiator can request a list of targets using a SendTargets text request as specified in the iSCSI protocol of RFC3720. You can specify either an IPv4 address or an IPv6 address. Optionally, you can specify a port with the IP address. For example:
192.168.0.4
192.168.0.4:3260
2001:c90::211:9ff:feb8:a9e9
[2001:c90::211:9ff:feb8:a9e9]:3260
If no port is specified, the default port 3260 is used. Verify that your firewall allows you to access the target location through the port. For example:
# telnet discovery-address 3260
For example:
Storage> iscsi discovery del 192.168.2.15:3260
2 To verify the deletion of the discovery address, display the discovery addresses.
Storage> iscsi discovery list
Discovery Address
-----------------
192.168.2.14:3260
To rediscover an iSCSI discovery address
◆ To rediscover an iSCSI discovery address, enter the following:
Storage> iscsi discovery rediscover discovery-address
where:
discovery-address: The target address at which an initiator can request a list of targets using a SendTargets text request as specified in the iSCSI protocol of RFC3720. You can specify either an IPv4 address or an IPv6 address. Optionally, you can specify a port with the IP address. For example:
192.168.0.4
192.168.0.4:3260
2001:c90::211:9ff:feb8:a9e9
[2001:c90::211:9ff:feb8:a9e9]:3260
If no port is specified, the default port 3260 is used. Verify that your firewall allows you to access the target location through the port. For example:
# telnet discovery-address 3260
For example:
Storage> iscsi discovery rediscover 192.168.2.15:3260
Deleted targets
---------------
iqn.2001-04.com.example:storage.disk5.sys3.xyz
New targets
-----------
iqn.2001-04.com.example:storage.disk6.sys3.new.xyz
Logging into target iqn.2001-04.com.example:storage.disk6.sys3.new.xyz
To rediscover changes in targets or LUNs at a discovery address
◆ To rediscover changes in targets or LUNs at a discovery address, enter the following:
Storage> iscsi discovery rediscover_new discovery-address
where:
discovery-address: The target address at which an initiator can request a list of targets using a SendTargets text request as specified in the iSCSI protocol of RFC3720. You can specify either an IPv4 address or an IPv6 address. Optionally, you can specify a port with the IP address. For example:
192.168.0.4
192.168.0.4:3260
2001:c90::211:9ff:feb8:a9e9
[2001:c90::211:9ff:feb8:a9e9]:3260
If no port is specified, the default port 3260 is used. Verify that your firewall allows you to access the target location through the port. For example:
# telnet discovery-address 3260
New LUNs or targets discovered at discovery-address will be automatically added and logged into. This command does not discover any targets that have been deleted at discovery-address.
For example:
Storage> iscsi discovery rediscover_new 192.168.2.15:3260
14% [|] Checking for new targets
New targets
-----------
iqn.2001-04.com.example:storage.disk7.sys3.new.xyz
100% [#] Updating disk list
Configuring the iSCSI targets
To display the iSCSI targets
◆ To display the iSCSI targets, enter the following:
Storage> iscsi target list
For example:
Storage> iscsi target list
Target                                            Discovery Address   State   Disk
------                                            -----------------   -----   ----
iqn.2001-04.com.example:storage.disk2.sys3.xyz    192.168.2.14:3260   ONLINE  disk_0
iqn.2001-04.com.example:storage.disk4.sys3.xyz    192.168.2.14:3260   ONLINE  disk_2
iqn.2001-04.com.example:storage.disk5.sys3.xyz    192.168.2.14:3260   ONLINE  disk_3
iqn.2001-04.com.example:storage.disk3.sys3.xyz    192.168.2.14:3260   ONLINE  disk_1
iqn.2001-04.com.example2:storage.disk2.sys3.xyz   192.168.2.15:3260   ONLINE  disk_4
iqn.2001-04.com.example2:storage.disk3.sys3.xyz   192.168.2.15:3260   ONLINE  disk_5
iqn.2001-04.com.example2:storage.disk4.sys3.xyz   192.168.2.15:3260   ONLINE  disk_6
iqn.2001-04.com.example2:storage.disk5.sys3.xyz   192.168.2.15:3260   ONLINE  disk_7
To display the iSCSI target details
◆ To display the iSCSI target details, enter the following:
Storage> iscsi target listdetail target
where target is the name of the iSCSI target for which you want to display the details.
This list also shows targets discovered at discovery-address, not only manually added targets.
For example:
Storage> iscsi target listdetail iqn.2001-04.com.example:storage.disk2.sys3.xyz
Discovery Address : 192.168.2.14:3260
Connections
===========
Portal Address         ACCESS_01   ACCESS_02
--------------         ---------   ---------
192.168.2.14:3260,1    2           2
To add an iSCSI target
◆ To add an iSCSI target, enter the following:
Storage> iscsi target add target-name portal-address
target-name: Name of the iSCSI target at which SCSI LUNs are available. target-name should conform to the naming rules defined in RFC3721.
portal-address: The location where the target is accessible. You can specify either an IPv4 address or an IPv6 address. For example:
192.168.0.4
192.168.0.4,1
192.168.0.4:3260
192.168.0.4:3260,1
2001:c90::211:9ff:feb8:a9e9
2001:c90::211:9ff:feb8:a9e9,1
[2001:c90::211:9ff:feb8:a9e9]:3260
[2001:c90::211:9ff:feb8:a9e9]:3260,10
For example:
Storage> iscsi target add iqn.2001-04.com.example:storage.disk2.sys1.xyz 192.168.2.14:3260
Logging into target iqn.2001-04.com.example:storage.disk2.sys1.xyz
Storage> iscsi target listdetail iqn.2001-04.com.example:storage.disk2.sys1.xyz
Connections
===========
Portal Address         ACCESS55_01   ACCESS55_02
--------------         -----------   -----------
192.168.2.14:3260,1    1             1
To delete an iSCSI target
◆ To delete an iSCSI target, enter the following:
Storage> iscsi target del target-name {discovery-address|portal-address}
target-name: Name of the iSCSI target at which SCSI LUNs are available. target-name should conform to the naming rules defined in RFC3721.
discovery-address: Target address at which an initiator can request a list of targets using a SendTargets text request as specified in the iSCSI protocol of RFC3720. If no port is specified with the discovery address, the default port 3260 is used.
portal-address: The location where the target is accessible.
For example:
Storage> iscsi target del iqn.2001-04.com.example:storage.disk2.sys3.xyz
To log in to an iSCSI target
◆ To log in to an iSCSI target, enter the following:
Storage> iscsi target login target-name {discovery-address | portal-address}
target-name: Name of the iSCSI target at which SCSI LUNs are available. target-name should conform to the naming rules defined in RFC3721.
discovery-address: Target address at which an initiator can request a list of targets using a SendTargets text request as specified in the iSCSI protocol of RFC3720. If no port is specified with the discovery address, the default port 3260 is used.
portal-address: The location where the target is accessible.
For example:
Storage> iscsi target login iqn.2001-04.com.example:storage.disk2.sys3.xyz
To log out from an iSCSI target
◆ To log out from an iSCSI target, enter the following:
Storage> iscsi target logout target-name {discovery-address | portal-address}
target-name: Name of the iSCSI target at which SCSI LUNs are available. target-name should conform to the naming rules defined in RFC3721.
discovery-address: Target address at which an initiator can request a list of targets using a SendTargets text request as specified in the iSCSI protocol of RFC3720. If no port is specified with the discovery address, the default port 3260 is used.
portal-address: The location where the target is accessible.
For example:
Storage> iscsi target logout iqn.2001-04.com.example:storage.disk2.sys3.xyz
To rescan targets for new LUNs
◆ To rescan a target for a new LUN, enter the following:
Storage> iscsi target rescan target-name
where target-name is the name of the iSCSI target that you want to rescan.
You can use the Storage> iscsi target rescan command for both static targets and discovered targets.
For example:
Storage> iscsi target rescan iqn.2001-04.com.example:storage.disk2.sys3.xyz
100% [#] Updating disk list
Storage> iscsi target list
Target                                            Discovery Address   State   Disk
------                                            -----------------   -----   ----
iqn.2001-04.com.example:storage.disk2.sys3.xyz    192.168.2.14:3260   ONLINE  disk_0 disk_8 disk_9
iqn.2001-04.com.example:storage.disk4.sys3.xyz    192.168.2.14:3260   ONLINE  disk_2
iqn.2001-04.com.example:storage.disk5.sys3.xyz    192.168.2.14:3260   ONLINE  disk_3
iqn.2001-04.com.example:storage.disk3.sys3.xyz    192.168.2.14:3260   ONLINE  disk_1
iqn.2001-04.com.example2:storage.disk2.sys3.xyz   192.168.2.15:3260   ONLINE  disk_4
iqn.2001-04.com.example2:storage.disk3.sys3.xyz   192.168.2.15:3260   ONLINE  disk_5
iqn.2001-04.com.example2:storage.disk4.sys3.xyz   192.168.2.15:3260   ONLINE  disk_6
iqn.2001-04.com.example2:storage.disk5.sys3.xyz   192.168.2.15:3260   ONLINE  disk_7
Modifying tunables for iSCSI
You can set the values of the attributes on the targets. You can set or show the
default values, the values for all targets, or the values for a specific target.
Table 7-1 shows the target attributes that you can modify.
Table 7-1 Attributes for iSCSI targets
cmds_max: The maximum number of SCSI commands that the session will queue. A session is defined as a connection between the initiator and target portal for accessing a given target. cmds_max defines the commands per target, which could be multiple LUNs. Valid values range from 2 to 2048 and should be a power of 2.
fast_abort: Defines whether the initiator should respond to R2Ts (Request to Transfer) after sending a task management function like an ABORT_TASK or LOGICAL UNIT RESET. A value of Yes causes the initiator to stop responding to R2Ts after an ABORT_TASK request is received. For Equallogic arrays, the recommended value is No. Valid values are Yes or No.
initial_login_retry_max: The maximum number of times that the iSCSI initiator should try a login to the target during first login. This only affects the initial login. Valid values range from 1 to 16. During each login attempt, wait for login_timeout seconds for the login to succeed.
login_timeout: The amount of time that the iSCSI initiator service should wait for login to complete. The value of this attribute is in seconds. Valid values range from 10 to 600.
logout_timeout: The amount of time that the iSCSI initiator service should wait for logout to complete. The value of this attribute is in seconds. Valid values range from 10 to 600.
noop_interval: The time to wait between subsequent sending of Nop-out requests. The value of this attribute is in seconds. Valid values range from 5 to 600.
noop_timeout: The amount of time that the iSCSI initiator service should wait for a response to a Nop-out request sent to the target before failing the connection. Failing the connection causes the I/O to be failed and retried on any other available path. The value of this attribute is in seconds. Valid values range from 5 to 600.
queue_depth: The maximum number of SCSI commands queued per LUN, belonging to a target. The value for queue_depth cannot be greater than cmds_max. Valid values range from 1 to 128.
Table 7-1 Attributes for iSCSI targets (continued)
replacement_timeout: The amount of time to wait for session re-establishment before failing SCSI commands. The value of this attribute is in seconds. Valid values range from 10 to 86400.
To display the default value for target attributes
◆ To display the default value for target attributes, enter the following:
Storage> iscsi target attr showdefault
For example:
Storage> iscsi target attr showdefault
Attribute                  Value
---------                  -----
replacement_timeout        122
noop_timeout               5
noop_interval              13
login_timeout              10
logout_timeout             15
cmds_max                   128
queue_depth                32
initial_login_retry_max    10
fast_abort                 No
To display values for target attributes of all known targets
◆ To display values for target attributes of all known targets, enter the following:
Storage> iscsi target attr showall
For example:
Storage> iscsi target attr showall
Attribute                  Value   Target
---------                  -----   ------
replacement_timeout        123     iqn.1992-08.com.iscsi:sn.84268871
noop_timeout               5       iqn.1992-08.com.iscsi:sn.84268871
noop_interval              121     iqn.1992-08.com.iscsi:sn.84268871
login_timeout              10      iqn.1992-08.com.iscsi:sn.84268871
logout_timeout             15      iqn.1992-08.com.iscsi:sn.84268871
cmds_max                   128     iqn.1992-08.com.iscsi:sn.84268871
queue_depth                32      iqn.1992-08.com.iscsi:sn.84268871
initial_login_retry_max    5       iqn.1992-08.com.iscsi:sn.84268871
fast_abort                 No      iqn.1992-08.com.iscsi:sn.84268871
replacement_timeout        124     iqn.2009-01.com.example:storage.disk0.lun0
noop_timeout               5       iqn.2009-01.com.example:storage.disk0.lun0
noop_interval              121     iqn.2009-01.com.example:storage.disk0.lun0
login_timeout              10      iqn.2009-01.com.example:storage.disk0.lun0
logout_timeout             15      iqn.2009-01.com.example:storage.disk0.lun0
cmds_max                   128     iqn.2009-01.com.example:storage.disk0.lun0
queue_depth                32      iqn.2009-01.com.example:storage.disk0.lun0
initial_login_retry_max    10      iqn.2009-01.com.example:storage.disk0.lun0
fast_abort                 No      iqn.2009-01.com.example:storage.disk0.lun0
To display the attribute values for a specific target
◆ To display the attribute values for a specific target, enter the following:
Storage> iscsi target attr show target-name
where target-name is the name of the iSCSI target to be displayed.
For example:
Storage> iscsi target attr show iqn.1992-08.com.iscsi:sn.84268871
Attribute                  Value
---------                  -----
replacement_timeout        123
noop_timeout               5
noop_interval              121
login_timeout              10
logout_timeout             15
cmds_max                   128
queue_depth                32
initial_login_retry_max    5
fast_abort                 No
To set the default value for a target attribute
◆ To set the default value for a target attribute, enter the following:
Storage> iscsi target attr setdefault attribute value
attribute: The attribute for which to set the value.
value: The default value to be set for the attribute.
The default value is inherited by any new targets that get added.
For example:
Storage> iscsi target attr setdefault login_timeout 10
Success.
To set an attribute value for all known targets
◆ To set an attribute value for all known targets, enter the following:
Storage> iscsi target attr setall attribute value
attribute: The attribute for which to set the value.
value: The value to be set for the attribute.
This command does not change the default value as shown in the Storage> iscsi target attr showdefault command. Changes to values are effective after re-login.
For example:
Storage> iscsi target attr setall logout_timeout 20
Changes would be applicable after next login into the target.
Success.
To set the attribute value for a specific target
◆ To set the attribute value for a specific target, enter the following:
Storage> iscsi target attr set target-name attribute value
target-name: The name of the specific iSCSI target.
attribute: The attribute of the specific target.
value: The value to be set for the target attribute.
For example:
Storage> iscsi target attr set iqn.1992-08.com.iscsi:sn.84268871 noop_interval 30
Changes would be applicable after next login into the target.
Success.
Section 4
Managing Veritas Access file access services
■ Chapter 8. Configuring your NFS server
■ Chapter 9. Using Veritas Access as a CIFS server
■ Chapter 10. Configuring Veritas Access to work with Oracle Direct NFS
■ Chapter 11. Configuring an FTP server
Chapter 8
Configuring your NFS server
This chapter includes the following topics:
■ About using NFS server with Veritas Access
■ Using the kernel-based NFS server
■ Using the NFS-Ganesha server
■ Switching between NFS servers
■ Recommended tuning for NFS-Ganesha version 3 and version 4
■ Accessing the NFS server
■ Displaying and resetting NFS statistics
■ Configuring Veritas Access for ID mapping for NFS version 4
■ Configuring the NFS client for ID mapping for NFS version 4
■ About authenticating NFS clients
■ Setting up Kerberos authentication for NFS clients
About using NFS server with Veritas Access
Veritas Access provides file access services to UNIX and Linux client computers using the Network File System (NFS) protocol. Veritas Access supports NFSv3 and NFSv4. Veritas Access provides the following NFS server support:
■ Kernel-based NFS server
See “Using the kernel-based NFS server” on page 105.
■ NFS-Ganesha server
See “Using the NFS-Ganesha server” on page 105.
At any time, either NFS-Ganesha or kernel NFS is active. The kernel NFS server is enabled by default. If required, you can switch the NFS server that you use. Use the NFS-Ganesha server if you are using a scale-out file system.
See “Switching between NFS servers” on page 105.
Using the kernel-based NFS server
The kernel-based NFS server supports NFS version 3 and version 4. The kernel
NFS server is enabled by default. Kernel NFS supports Active-Active mode serving
NFS version 3 and 4. Veritas recommends that you use the default kernel-based
NFS server.
Using the NFS-Ganesha server
If you plan to use a scale-out file system with the 'largefs' layout, you must use Veritas Access with an NFS-Ganesha server.
NFS-Ganesha provides support for both NFS version 3 and NFS version 4. NFS-Ganesha is a user-space implementation of the NFS server. The use of an NFS-Ganesha server is optional. NFS-Ganesha is not enabled by default.
For scale-out file systems with the largefs layout, an NFS-Ganesha share is always exported from only one node in the cluster. This node can be any one of the nodes in the cluster. At the time of share export, the virtual IP address that is used for accessing the share is displayed. Different shares can be exported from different nodes. The shares are highly available in case of a node failure.
Certain limitations apply for NFS-Ganesha.
See the Veritas Access Release Notes for the limitations.
Since the kernel-based NFS server is the default, switch the NFS server to NFS-Ganesha.
Switching between NFS servers
If NFS v3 or NFS v4 is your primary use case, then we recommend that you use
the kernel NFS server. Both NFS v3 and NFS v4 support Kerberos authentication.
The NFS-Ganesha server supports both NFS v3 and NFS v4.
A CLISH command is provided to switch from kernel NFS server to NFS-Ganesha,
or vice versa. Before you switch between the NFS servers, the NFS server must
be offline.
All of the available NFS shares are moved from the previous NFS server to the new
NFS server; therefore, the operation may be time consuming.
When the NFS server is switched from kernel-based NFS to NFS-Ganesha or vice
versa, the existing NFS mounts on the client are no longer active. The client is
required to remount the exports to access the shares.
To switch between NFS servers
1 Make sure that the NFS server is offline. You can view the status of the NFS server with the following command:
NFS> server status
2 Use the following command to switch the NFS server:
NFS> server switch
Recommended tuning for NFS-Ganesha version 3 and version 4
Veritas Access supports both the NFS kernel-based server and the NFS-Ganesha
server in a mutually exclusive way. The NFS kernel-based server supports NFS
version 3 and version 4. The NFS-Ganesha server also supports both NFS version
3 and NFS version 4.
See “Using the NFS-Ganesha server” on page 105.
The NFS-Ganesha server does not run in the kernel, instead NFS-Ganesha runs
in user space on the NFS server. This means that the NFS-Ganesha server
processes can be affected by system resource limitations as any other user space
process can be affected. There are some NFS-server operating system tuning
values that you should modify to ensure that the NFS-Ganesha server performance
is not unduly affected. You use the NFS client mount option version to determine
whether NFS version 3 or NFS version 4 is used. On the NFS client, you can select
either the version=3 or the version=4 mount option. The NFS client is unaware
of whether the NFS server is using kernel-based NFS or NFS-Ganesha. Only if
NFS-Ganesha is enabled in Veritas Access, a client can perform an NFS mount
using the mount option of version=4.
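The following is a minimal sketch of selecting the NFS protocol version from a Linux client. The server address and export path are hypothetical, and on many Linux clients the option is spelled vers= rather than version=:
# Mount using NFS version 3
mount -t nfs -o vers=3 192.168.10.10:/vx/fs1 /mnt/fs1
# Mount using NFS version 4 (NFS-Ganesha must be enabled in Veritas Access)
mount -t nfs -o vers=4 192.168.10.10:/vx/fs1 /mnt/fs1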
When you start a system, kswapd_init() calls a kernel thread that is called kswapd,
which continuously executes the function kswapd() in mm/vmscan.c that usually
sleeps. The kswapd daemon is responsible for reclaiming pages when memory is
running low. kswapd performs most of the tasks that are needed to maintain the
page cache correctly, shrink slab caches, and swap out processes if necessary.
kswapd keeps freeing pages until the pages_high watermark is reached. Under
extreme memory pressure, processes do the work of kswapd synchronously by
calling balance_classzone(), which calls the try_to_free_pages_zone().
When there is memory pressure, pages are claimed using two different methods.
■ pgscank/s - The kswapd kernel daemon periodically wakes up and claims (frees) memory in the background when free memory is low. pgscank/s records this activity.
■ pgscand/s - When kswapd fails to free up enough memory, then the memory is also claimed directly in the process context (thus blocking the user program execution). pgscand/s records this activity.
■ The total pages being claimed (also known as page stealing) is therefore a combination of both pgscank/s and pgscand/s. pgsteal/s records the total activity, so (pgsteal/s = pgscank/s + pgscand/s).
The NFS-Ganesha user process can be affected when kswapd fails to free up enough memory. To reduce the possibility of the NFS-Ganesha process doing the work of kswapd, Veritas recommends increasing the value of the Linux virtual machine tunable min_free_kbytes.
Example of a default auto-tuned value:
sysctl -a | grep vm.min_free
vm.min_free_kbytes = 90112
You use min_free_kbytes to force the Linux VM (virtual memory management) to
keep a minimum number of kilobytes free. The VM uses this number to compute a
watermark value for each lowmem zone in the system.
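As an illustration, the tunable can be raised with sysctl and persisted in /etc/sysctl.conf. The value shown is the recommendation for server nodes with 96 GB of RAM or more (see Table 8-1); adjust it for your configuration:
# Set the value at runtime
sysctl -w vm.min_free_kbytes=1048576
# Persist the setting across reboots
echo "vm.min_free_kbytes = 1048576" >> /etc/sysctl.conf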
Table 8-1 Recommended tuning parameters for NFS version 3 and version 4
NFS mount options: File system mount options for the NFS client:
■ version=3/4
■ nordirplus
■ sharecache
Table 8-1 Recommended tuning parameters for NFS version 3 and version 4 (continued)
NFS server export options:
■ rw
■ sync
■ no_root_squash
Jumbo frames: A jumbo frame is an Ethernet frame with a payload greater than the standard maximum transmission unit (MTU) of 1,500 bytes. Enabling jumbo frames improves network performance in I/O intensive workloads. If jumbo frames are supported by your network, and if you wish to use jumbo frames, Veritas recommends using a jumbo frame size of 5000.
min_free_kbytes: On server nodes with 96 GB RAM or more, the recommended value of min_free_kbytes is 1048576 (= 1 GB). On server nodes using the minimum of 32 GB RAM, the minimum recommended value of min_free_kbytes is 524288 (= 512 MB).
Accessing the NFS server
To check on the NFS server status
◆ Prior to starting the NFS server, check on the status of the server by entering:
NFS> server status
The output shows the status. The output also indicates whether the NFS server used is the kernel NFS server or the NFS-Ganesha server.
The states (ONLINE, OFFLINE, and FAULTED) correspond to each Veritas Access node identified by the node name. The states of the node may vary depending on the situation for that particular node.
The possible states of the NFS> server status command are:
ONLINE: Indicates that the node can serve NFS protocols to the client.
OFFLINE: Indicates the NFS services on that node are down.
FAULTED: Indicates something is wrong with the NFS service on the node.
You can run the NFS> server start command to restart the NFS services, and only the nodes where NFS services have problems are restarted.
To start the NFS server
◆ To start the NFS server, enter the following:
NFS> server start
You can use the NFS> server start command to clear an OFFLINE state from the NFS> server status output by restarting only the services that are offline. You can run the NFS> server start command multiple times without it affecting the already-started NFS server.
Run the NFS> server status command again to confirm the change.
To stop the NFS server
◆ To stop the NFS server, enter the following:
NFS> server stop
Displaying and resetting NFS statistics
The NFS statistics that are shown differ depending on whether the NFS server is the default kernel NFS server or the NFS-Ganesha server.
Veritas Access does not support resetting the NFS statistics for the NFS-Ganesha server.
To display statistics for a specific node or for all the nodes in the cluster
To display NFS statistics, enter the following:
NFS> stat show [nodename]
where nodename specifies the node name for which you are trying to obtain the statistical information. If the nodename is not specified, statistics for all the nodes in the cluster are displayed.
To display the NFS statistics for all the nodes in the cluster for the kernel NFS server, enter the following:
NFS> stat show all
To display the NFS statistics for all the nodes in the cluster for the NFS-Ganesha server, enter the following:
NFS> stat show all
To reset NFS statistics for a specific node or for all the nodes in the cluster to zero
◆ To reset NFS statistics for the kernel NFS server, enter the following:
NFS> stat reset [nodename]
where nodename specifies the node name for which you want to reset the NFS statistics to zero. If nodename is not specified, NFS statistics for all the nodes in the cluster are reset to zero. Statistics are automatically reset to zero after a reboot of a node, or in the case of NFS-Ganesha, after you reboot the node or the NFS server restarts.
Configuring Veritas Access for ID mapping for NFS version 4
If you plan to use NFS version 4, you must configure Veritas Access to map the user IDs to the required format. In NFS version 3, each user is identified by a number, the user ID (uid). A UNIX file also identifies the owner of the file by a uid number. NFS version 4 has a different way of identifying users than that used by NFS version 3. In NFS version 4, each user is identified by a string, such as user@example.com.
Veritas Access requires a mechanism to map the user strings from NFS version 4
to uids on the server and the client. This process, called ID mapping, uses a file
/etc/idmapd.conf.
NFS version 4 uses the /etc/idmapd.conf file to map the IDs. The Domain field
needs to be set to the DNS domain of the Veritas Access server. If the DNS domain
is not set, the ID mapping maps all of the users on the client to the user 'nobody'.
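For illustration, a minimal /etc/idmapd.conf entry might look like the following, where example.com is a placeholder for your own DNS domain:
[General]
Domain = example.com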
To configure Veritas Access for ID mapping
◆ Configure the DNS domain of Veritas Access using the following command:
Network> dns set domainname domainname
When the NFS server is started, the /etc/idmapd.conf file is updated with the domain information of the Veritas Access server.
You must also configure the NFS client.
Configuring the NFS client for ID mapping for NFS version 4
For NFS version 4, you must configure the NFS client so that the NFS version 4 user strings can be mapped to the uids. You must also configure the NFS server.
To configure the NFS client for ID mapping
1 For proper ID mapping, set the Domain field in the /etc/idmapd.conf file as the DNS domain name of the NFS client. Make sure that the DNS domain is the same for the NFS client and the Veritas Access server.
This setting in the /etc/idmapd.conf file should be updated on the NFS client.
2 Clear the ID mapping cache on the NFS client using the command nfsidmap -c and restart the ID mapping service.
About authenticating NFS clients
See “About managing NFS shares using netgroups” on page 253.
Both the NFS-Ganesha server and kernel NFS server support Kerberos
authentication.
Setting up Kerberos authentication for NFS clients
Kerberos provides a secure way of authenticating NFS clients. In this configuration,
the Veritas Access server behaves as a Kerberos client. The Kerberos KDC (Key
Distribution Center) server must already be set up and running outside of Veritas
Access. For NFS version 3, when a Veritas Access share is exported with the krb5
security option, the NFS clients have to mount the Veritas Access share with the
krb5 mount option. Otherwise the mount fails with an authentication error. For NFS
version 4, the NFS clients automatically find the security type and mount the Veritas
Access share with the same mount option.
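As a sketch, an NFS version 3 client could mount a share that is exported with the krb5 security option as follows. The cluster name access_ga and the export path are placeholders:
# mount -t nfs -o vers=3,sec=krb5 access_ga:/vx/fs1 /mnt/fs1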
Note: When CIFS security is configured with ads, Kerberos for NFS cannot be
configured. When NFS is configured for Kerberos authentication, CIFS security
cannot be configured with ads.
To configure Veritas Access for authenticating NFS clients using Kerberos, perform
the tasks in the order that is listed in Table 8-2.
Table 8-2 Tasks for configuring Veritas Access for authenticating NFS clients using Kerberos
■ Add and configure Veritas Access to the Kerberos realm
See “Adding and configuring Veritas Access to the Kerberos realm” on page 112.
■ Configure the NFS server for ID mapping
See “Configuring Veritas Access for ID mapping for NFS version 4” on page 110.
■ Configure the NFS client for ID mapping
See “Configuring the NFS client for ID mapping for NFS version 4” on page 111.
■ Export an NFS share for Kerberos authentication
See “Exporting an NFS share for Kerberos authentication” on page 255.
■ Mount the NFS share from the NFS client
See “Mounting an NFS share with Kerberos security from the NFS client” on page 256.
Adding and configuring Veritas Access to the Kerberos realm
Kerberos authentication support on Veritas Access is available only if the Key
Distribution Center (KDC) server is running on a standalone computer (in a non-AD
(Active Directory) environment), and there is a single KDC server. Before Veritas
Access can be used as a Kerberos client, the NFS service principal of Veritas
Access has to be added to the KDC server. Use the Veritas Access cluster name (either the short name or the fully qualified domain name) in lowercase letters as the host name when you create the NFS service principal.
For example, if access_ga_01 and access_ga_02 are two nodes in the Veritas
Access cluster, then access_ga (or the fully qualified domain name
access_ga.example.com) should be used for adding the NFS service principal.
The Domain Name System (DNS) or /etc/hosts is then set up to resolve access_ga
to all the virtual IPs of the Veritas Access cluster.
To configure the KDC server
1 Create the NFS service principal on the KDC server using the kadmin.local command.
addprinc -randkey nfs/access_ga
2 Create a keytab file for the NFS service principal on KDC.
ktadd -k /etc/access.keytab nfs/access_ga
3 Copy the created keytab file (/etc/access.keytab) to the Veritas Access console node.
4 Use the Network> krb standalone set command to set the Kerberos configuration on Veritas Access.
The Network> krb standalone set command takes the KDC server name, Kerberos realm, and the location of the keytab that is located on the Veritas Access console node. This command sets up the Kerberos configuration file /etc/krb5.conf with the KDC server name and realm on all the nodes of the Veritas Access cluster. The command then copies the keytab file to /etc/krb5.keytab on all the nodes of the Veritas Access cluster.
Network> krb standalone set kdc_server TESTKDC.COM /home/support/krb5.keytab
The Network> krb standalone set command checks for the correct domain in the /etc/idmapd.conf file. If the domain is not set, the command gives a warning message saying that the DNS domain name needs to be set.
See “Configuring Veritas Access for ID mapping for NFS version 4” on page 110.
5 Use the Network> krb standalone show command to show the Kerberos configuration.
6 Use the following commands to stop and restart the NFS-Ganesha service:
NFS> server stop
NFS> server start
7 Use the Network> krb standalone unset command to reset the Kerberos configuration.
After the KDC server is configured, you can export the NFS shares with Kerberos authentication options.
Chapter 9
Using Veritas Access as a CIFS server
This chapter includes the following topics:
■ About configuring Veritas Access for CIFS
■ About configuring CIFS for standalone mode
■ Configuring CIFS server status for standalone mode
■ Changing security settings
■ Changing security settings after the CIFS server is stopped
■ About Active Directory (AD)
■ About configuring CIFS for Active Directory (AD) domain mode
■ Setting NTLM
■ About setting trusted domains
■ About storing account information
■ Storing user and group accounts
■ Reconfiguring the CIFS service
■ About mapping user names for CIFS/NFS sharing
■ About the mapuser commands
■ Adding, removing, or displaying the mapping between CIFS and NFS users
■ Automatically mapping of UNIX users from LDAP to Windows users
■ About managing home directories
■ About CIFS clustering modes
■ About migrating CIFS shares and home directories
■ Setting the CIFS aio_fork option
■ About managing local users and groups
■ Enabling CIFS data migration
About configuring Veritas Access for CIFS
The Common Internet File System (CIFS), also known as the Server Message Block (SMB), is a network file sharing protocol that is widely used on Microsoft and other operating systems. Veritas Access supports the SMB3 protocol.
You can specify either an IPv4 address or an IPv6 address.
Veritas Access supports the following clustering modes:
■ Normal
■ Clustered Trivial Database (CTDB) - a cluster implementation of the TDB (Trivial database) based on the Berkeley database API
You can configure Active Directory by navigating to Settings > Services Management > Active Directory.
Veritas Access supports the following CIFS security modes:
■ User
■ ADS
Each clustering mode supports both of the CIFS security modes. The ctdb clustering mode is a different clustered implementation of Veritas Access CIFS, which supports almost all of the features that are supported by normal clustering mode as well as some additional features.
Additional features supported in ctdb clustering mode:
■ Directory-level share support (also supported in normal clustering mode)
■ Multi-instance share export of a file system or directory
■ Simultaneous access of a share from multiple nodes, and therefore better load balancing
See “About CIFS clustering modes” on page 155.
Veritas Access can be integrated into a network that consists of machines running
Microsoft Windows. You can control and manage the network resources by using
Active Directory (AD) domain controllers.
Before you use Veritas Access with CIFS, you must have administrator-level
knowledge of the Microsoft operating systems, Microsoft services, and Microsoft
protocols (including AD and NT services and protocols).
You can find more information about them at: www.microsoft.com.
When serving the CIFS clients, Veritas Access can be configured to operate in one
of the operating mode environments described in Table 9-1.
Table 9-1 CIFS operating mode environments

Mode: Standalone
Definition: Information about the user and group accounts is stored locally on Veritas Access. Veritas Access also authenticates users locally using the Linux password and group files. This mode of operation is provided for Veritas Access testing and may be appropriate in other cases, for example, when Veritas Access is used in a small network and is not a member of a Windows security domain. In this mode of operation, you must create the local users and groups; they can access the shared resources subject to authorization control.

Mode: Active Directory (AD)
Definition: Veritas Access becomes a member of an AD security domain and is configured to use the services of the AD domain controller, such as DNS, LDAP, and NTP. Kerberos, NTLMv2, or NTLM authenticate users.
When Veritas Access operates in the AD domain mode, it acts as a domain member
server and not as the domain controller.
About configuring CIFS for standalone mode
If you do not have an AD server, you can use Veritas Access as a standalone
server. Veritas Access is used in standalone mode when testing Veritas Access
functionality and when it is not a member of a domain.
Before you configure the CIFS service for the standalone mode, do the following:
■ Make sure that the CIFS server is not running.
■ Set security to user.
■ Start the CIFS server.
To make sure that the configuration has changed, do the following:
■ Check the server status.
■ Display the server settings.
Configuring CIFS server status for standalone
mode
To check the CIFS server status
1
To check the status of the CIFS server, enter the following:
CIFS> server status
By default, security is set to user, the required setting for standalone mode.
The following example shows that security was previously set to ads.
2
If the server is running, enter the following:
CIFS> server stop
To check the security setting
1
To check the current settings before setting security, enter the following:
CIFS> show
2
To set security to user, enter the following:
CIFS> set security user
To start the CIFS service in standalone mode
1
To start the service in standalone mode, enter the following:
CIFS> server start
2
To display the new settings, enter the following:
CIFS> show
3
To make sure that the server is running in standalone mode, enter the following:
CIFS> server status
The CIFS service is now running in standalone mode.
See “About managing local users and groups” on page 159.
See “About managing CIFS shares” on page 260.
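For reference, the standalone-mode steps above can be run back to back. A typical session (assuming the server was previously running in another mode) looks like the following:
CIFS> server stop
CIFS> set security user
CIFS> server start
CIFS> server status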
Changing security settings
To change security settings
◆ To set the security to user, enter the following:
CIFS> set security user
To stop the CIFS server:
CIFS> server stop
Changing security settings after the CIFS server
is stopped
To change security settings for a CIFS server that has been stopped
◆ To set security to a value other than domain, enter the following:
CIFS> set security user
If the server is stopped, then changing the security mode will disable the
membership of the existing domain.
About Active Directory (AD)
In order to provide CIFS services, Veritas Access must be able to authenticate
within the Windows environment.
Active Directory (AD) is a technology created by Microsoft that provides a variety
of network services including LDAP directory services, Kerberos-based
authentication, Domain Name System (DNS) naming, secure access to resources,
and more.
Veritas Access will not join the AD domain if its clock is excessively out-of-sync
with the clock on the AD domain controller. Ensure that Network Time Protocol
(NTP) is configured on Veritas Access, preferably on the same NTP server as the
AD domain controller.
Configuring entries for Veritas Access DNS for authenticating to
Active Directory (AD)
Name resolution must be configured correctly on Veritas Access. Domain Name
System (DNS) is usually used for name resolution.
To configure entries for Veritas Access DNS for authenticating to Active
Directory
1
Create an entry for the Veritas Access cluster name.
The cluster name is chosen at the time of installation, and it cannot be reset afterwards. It is also the NetBIOS name of the cluster, hence it must resolve to an IP address.
2
Configure the Veritas Access cluster name in DNS so that queries to it return
the Virtual IP Addresses (VIPs) associated with the Veritas Access cluster in
a round-robin fashion.
This is done by creating separate A records that map the cluster name to each
VIP. So, if there are four VIPs associated with the Veritas Access cluster (not
including special VIPs for backup, replication for Veritas Access, and so on),
then there must be four A records mapping the cluster name to the four VIPs.
3
Verify that the DNS server has correct entries for Veritas Access by querying
from a client:
myclient:~ # nslookup myaccess
After configuring the DNS server correctly, Veritas Access must be configured
as a DNS client.
This is done during installation, but may be modified by using the following
commands:
Network> dns set domainname accesstest-ad2.local
Network> dns set nameservers <IP address>
Network> dns enable
4
Verify that DNS client parameters are set correctly by entering the following
command:
Network> dns show
5
Ensure host resolution is querying DNS by checking nsswitch:
Network> nsswitch show
In the above scenario, host resolution first looks at files, and then DNS.
Configuring name resolution correctly is critical in order to successfully join
Veritas Access to Active Directory.
Joining Veritas Access to Active Directory (AD)
To join Veritas Access to Active Directory (AD)
1
To stop the CIFS server, enter the following command:
CIFS> server stop
2
To set the domain, enter the following command:
CIFS> set domain accesstest-ad2.local
In this example, it is the same as the DNS domain name.
This is the domain name of Active Directory.
3
To set the domain controller, enter the following command:
CIFS> set domaincontroller <IP address>
The IP address can be the IP address of the Active Directory Domain Controller.
However, this is not a requirement. The DNS server and Active Directory can
run on different servers, and hence this IP address may be different from the
IP address of the DNS server.
4
To set the domain user, enter the following command:
CIFS> set domainuser newuser
This is a user whose credentials are used to join the Active Directory domain.
The domainuser must have Domain Join privilege into the Active Directory
domain. The domainuser need not be Administrator.
5
To set the CIFS security mode, enter the following command:
CIFS> set security ads
The other CIFS security mode is user for local users. For authenticating to
Active Directory, use the ads CIFS security mode.
6
To start the CIFS server, enter the following command:
CIFS> server start
Veritas Access displays the time on the cluster as well as the time on the Active
Directory Domain Controller.
If NTP has been configured correctly, then there will be no time skew.
Otherwise, you will need to reconfigure NTP correctly.
You will be prompted to enter the password of domainuser.
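For reference, the join sequence described in this procedure, using the sample values shown above, is as follows (replace <IP address> with the address of your domain controller):
CIFS> server stop
CIFS> set domain accesstest-ad2.local
CIFS> set domaincontroller <IP address>
CIFS> set domainuser newuser
CIFS> set security ads
CIFS> server start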
Verifying that Veritas Access has joined Active Directory (AD)
successfully
To verify that Veritas Access has joined Active Directory (AD) successfully
◆ To verify that Veritas Access has joined Active Directory successfully, enter the following command:
CIFS> server status
Refer to the Domain membership status line of the output; if the join is successful, it shows that the Veritas Access cluster has joined the domain (displays as Enabled).
If the cluster did not join the domain, an informative error message is provided
indicating why the Veritas Access cluster cannot join the domain.
About configuring CIFS for Active Directory (AD)
domain mode
This section assumes that an Active Directory (AD) domain has already been
configured and that Veritas Access can communicate with the AD domain controller
(DC) over the network. The AD domain controller is also referred to as the AD
server.
Configuring CIFS for the AD domain mode
To set the domain user for AD domain mode
1
To verify that the CIFS server is stopped, enter the following:
CIFS> server status
2
If the server is running, stop the server. Enter the following:
CIFS> server stop
3
To set the domain user, enter the following:
CIFS> set domainuser username
where username is the name of an existing AD domain user who has permission
to perform the join domain operation.
For example:
CIFS> set domainuser administrator
To set the domain for AD domain mode
◆ To set the domain for AD domain mode, enter the following:
CIFS> set domain domainname
where domainname is the name of the domain.
For example:
CIFS> set domain VERITASDOMAIN.COM
To set the domain controller for AD domain mode
◆ To set the domain controller, enter the following:
CIFS> set domaincontroller servername
where servername is the server's IP address or DNS name.
To set security to ads
◆ To set security to ads, enter the following:
CIFS> set security ads|user
Enter ads for security.
CIFS> set security ads
To set the workgroup
◆ To set the workgroup name if the WORKGROUP or NetBIOS domain name is different from the domain name, enter the following:
CIFS> set workgroup workgroup
where workgroup sets the WORKGROUP name. If the name of the
WORKGROUP or NetBIOS domain name is different from the domain name,
use this command to set the WORKGROUP name.
For example, if SIMPLE is the name of the WORKGROUP you want to set,
you would enter the following:
CIFS> set workgroup SIMPLE
Though the symbols $, ( ), ', and & are valid characters for naming a WORKGROUP, the Veritas Access CIFS implementation does not allow you to use these symbols.
To start the CIFS server
1
To start the CIFS server, enter the following:
CIFS> server start
After you enter the correct password for the user administrator belonging to
AD domain, you get a message saying that the CIFS server has started
successfully.
2
To make sure that the service is running, enter the following:
CIFS> server status
The CIFS server is now running in the AD domain mode. You can export the
shares, and the domain users can access the shares subject to the AD
authentication and authorization control.
Using multi-domain controller support in CIFS
Veritas Access allows you to set a comma-separated list of primary and backup
domain controllers for the given domain.
Note: You need to set a DNS name server for the other domain controller (that is, the backup domain controller) by using the Network> dns set nameservers command.
You need to stop and start the CIFS server.
See “Reconfiguring the CIFS service” on page 145.
To display the list of domain controllers
◆ To display the list of domain controllers, enter the following:
CIFS> show
If the primary domain controller goes down, the CIFS server tries the next
domain controller in the list until it receives a response. You should always
point Veritas Access to the trusted domain controllers to avoid any security
issues. Veritas Access does not perform list reduction or reordering; instead, it uses the list as it is. So, avoid entering redundant names for the same domain controller.
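For example, to configure a primary and a backup domain controller, you might enter a comma-separated list such as the following (the IP addresses are placeholders; also add the backup domain controller as a DNS name server, as noted above):
CIFS> server stop
CIFS> set domaincontroller 192.168.10.20,192.168.10.21
CIFS> server start
CIFS> show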
About leaving an AD domain
There is no Veritas Access command that lets you leave an AD domain explicitly. The leave operation happens automatically as part of a change in the security or domain settings, followed by a stop or start of the CIFS server. Thus, Veritas Access performs the domain leave operation depending on the existing security and domain settings and the new administrative commands. However, the leave operation requires the credentials of the old domain’s user. All of the cases for a domain leave operation are documented in Table 9-2.
Table 9-2 Change AD domain mode settings commands

Command: set domain
Definition: Sets the domain. When you change any of the domain settings and you restart the CIFS server, the CIFS server leaves the old domain. Thus, when a change is made to one or more of the domain, domain controller, or domain user settings, the next time the CIFS server is started, the CIFS server first attempts to leave the existing join and then joins the AD domain with the new settings.
See “About leaving an AD domain” on page 124.

Command: set security user
Definition: Sets the security to user. When you change the security setting from ads to user and you stop or restart the CIFS server, the CIFS server leaves the existing AD domain. For example, the CIFS server leaves the existing AD domain if the existing security is ads, the new security is changed to user, and the CIFS server is either stopped or started again.
See “About leaving an AD domain” on page 124.
If the CIFS server is already stopped, changing the security to a value other than ads causes Veritas Access to leave the domain. Both of the methods mentioned earlier require either stopping or starting the CIFS server. This method of leaving the domain is provided so that if a CIFS server is already stopped and may not be restarted in the near future, you still have a way of leaving an existing join to the AD domain.
See “About leaving an AD domain” on page 124.
Changing domain settings for AD domain mode
Each case assumes that the Veritas Access cluster is part of an AD domain.
To verify the cluster is part of an AD domain
◆ To verify that the cluster is part of an AD domain, enter the following:
CIFS> server status
To change domain settings for AD domain mode
1
To stop the CIFS server, enter the following:
CIFS> server stop
2
To change the domain, enter the following:
CIFS> set domain newdomain.com
When you start the CIFS server, it tries to leave the existing domain. This
requires the old domainuser to enter its password. After the password is
supplied, and the domain leave operation succeeds, the CIFS server joins an
AD domain with the new settings.
3
To start the CIFS server, enter the following:
CIFS> server start
To change the security settings for the AD domain mode
◆ To set the security to user, enter the following:
CIFS> set security user
To stop the CIFS server:
CIFS> server stop
Changing security settings with a stopped server in the AD domain mode
◆ To set security to a value other than ads, enter the following:
CIFS> set security user
Removing the AD interface
You can remove the Veritas Access cluster from the AD domain by using the Active
Directory interface.
To remove the Veritas Access cluster
1
Open the interface Active Directory Users and Computers.
2
In the domain hierarchy tree, click on Computers.
3
In the details pane, right-click the computer entry corresponding to Veritas
Access (this can be identified by the Veritas Access cluster name) and click
Delete.
Setting NTLM
When you use Veritas Access in AD domain mode, there is an optional configuration step: you can disable the use of the Microsoft NTLM (NT LAN Manager) protocol for authenticating users.
When the Veritas Access CIFS service is running in the standalone mode (with security set to user), some versions of Windows clients require NTLM authentication to be enabled. You can enable it by setting CIFS> set ntlm_auth to yes.
When NTLM is disabled and you use Veritas Access in AD domain mode, the available authentication protocols are Kerberos and NTLMv2. The one used depends on the capabilities of both the Veritas Access clients and the domain controller. If no special action is taken, Veritas Access allows the NTLM protocol to be used.
For any specific CIFS connection, all the participants, that is, the client machine, Veritas Access, and the domain controller, select the protocol that they all support and that provides the highest security. In the AD domain mode, Kerberos provides the highest security.
To disable NTLM
1
If the server is running, enter the following:
CIFS> server stop
2
To disable NTLM, enter the following:
CIFS> set ntlm_auth no
3
To start the CIFS service, enter the following:
CIFS> server start
To enable NTLM
1
If the server is running, enter the following:
CIFS> server stop
2
To enable the NTLM protocol, enter the following:
CIFS> set ntlm_auth yes
3
To start the CIFS service, enter the following:
CIFS> server start
About setting trusted domains
The Microsoft Active Directory supports the concept of trusted domains. When you
authenticate users, you can configure domain controllers in one domain to trust the
domain controllers in another domain. This establishes the trust relation between
the two domains. When Veritas Access is a member in an AD domain, both Veritas
Access and the domain controller are involved in authenticating the clients. You
can configure Veritas Access to support or not support trusted domains.
Table 9-3 Set trusted domains commands

Command: set allow_trusted_domains yes
Definition: Enables the use of trusted domains in the AD domain mode.
Note: If the security mode is user, it is not possible to enable AD trusted domains. All the IDMAP backend methods (rid, ldap, and hash) are able to support trusted domains.
See “Setting Active Directory trusted domains” on page 140.

Command: set allow_trusted_domains no
Definition: Disables the use of trusted domains in the AD domain mode.
See “Setting Active Directory trusted domains” on page 140.
Specifying trusted domains that are allowed access to the CIFS
server
You can specify the trusted domains that are allowed access to a CIFS server when
the CIFS> set allow_trusted_domains option is set to yes, and idmap_backend
is set to rid or ad.
See “Allowing trusted domains access to CIFS when setting an IDMAP backend
to rid” on page 129.
By default, all the trusted domains of the joined active directory domain are included
in the CIFS settings and configuration if allow_trusted_domains is set to yes.
By default, CIFS> set allow_trusted_domains is set to no.
To specify the trusted domains that are allowed access to the CIFS server
◆ To specify the trusted domains that are allowed access to the CIFS server, enter the following:
CIFS> set allow_trusted_domains yes | no [trusted_domains]
where trusted_domains are the trusted domains that you want to allow access
to the CIFS server.
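For example, to allow access for a single trusted domain named TRUSTDOM (a placeholder NetBIOS domain name), you might enter:
CIFS> set allow_trusted_domains yes TRUSTDOM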
Allowing trusted domains access to CIFS when setting an IDMAP
backend to rid
To allow trusted domains access to CIFS when setting IDMAP backend to rid
1
If the CIFS server is running, enter the following:
CIFS> server stop
2
To set the idmap_backend to rid, enter the following:
CIFS> set idmap_backend rid [uid_range]
where uid_range represents the range of identifiers that are used by Veritas
Access when mapping domain users and groups to local users and groups.
You can obtain unique user IDs (UIDs) or group IDs (GIDs) from domains by
reading ID mappings from an Active Directory server that uses RFC2307/SFU
schema extensions. This is a read-only idmap backend. Trusted domains are
allowed if set allow_trusted_domains is set to yes. A valid user from a
domain or trusted domain should have a UID as well as a GID for the user's
primary group.
By default, the uid_range is set to 10000-1000000. Change it in cases where
there are more than 1000000 users existing on a local Veritas Access cluster
where there are joined Active Directory domains or trusted domains.
Note: The uid_range is adjusted automatically according to the search results
of the defined UNIX IDs from the domain after a CIFS server restart.
CIFS> set idmap_backend rid
3
To set allow_trusted_domains to yes, enter the following:
CIFS> set allow_trusted_domains yes
4
To start the CIFS server again, enter the following:
CIFS> server start
5
To verify the CIFS server status when there are trusted domains, enter the
following:
CIFS> server status
Domain names containing square brackets indicate that the domain used to
be a trusted domain, but the domain is currently obsolete.
Allowing trusted domains access to CIFS when setting an IDMAP
backend to ldap
To allow trusted domains access to CIFS when setting an IDMAP backend to
ldap
1
To configure AD as an IDMAP backend, follow the steps provided at:
See “About configuring Windows Active Directory as an IDMAP backend for
CIFS” on page 134.
2
To set idmap_backend to ldap, enter the following:
CIFS> set idmap_backend ldap [idmap_ou] [uid_range]
idmap_ou
Specifies the CIFS idmap Organizational Unit Name (OU)
configured on the LDAP server, which is used by Veritas Access
when mapping users and groups to local users and groups. The
default value is cifsidmap.
uid_range
Specifies the range of identifiers that are used by Veritas Access
when mapping domain users and groups to local users and groups.
You can obtain unique user IDs (UIDs) or group IDs (GIDs) from
domains by reading ID mappings from an Active Directory server
that uses RFC2307/SFU schema extensions. This is a read-only
idmap backend. Trusted domains are allowed if set
allow_trusted_domains is set to yes. A valid user from a
domain or trusted domain should have a UID as well as a GID for
the user's primary group.
By default, the uid_range is set to 10000-1000000. Change it in
cases where there are more than 1000000 users existing on a
local Veritas Access cluster where there are joined Active Directory
domains or trusted domains.
Note: The uid_range is adjusted automatically according to the
search results of the defined UNIX IDs from the domain after a
CIFS server restart.
CIFS> set idmap_backend ldap
3
To set allow_trusted_domains to yes, enter the following:
CIFS> set allow_trusted_domains yes
4
To start the CIFS server again, enter the following:
CIFS> server start
5
To verify the CIFS server status when there are trusted domains, enter the
following:
CIFS> server status
Allowing trusted domains access to CIFS when setting an IDMAP
backend to hash
To allow trusted domains access to CIFS when setting an IDMAP backend to
hash
1
If the CIFS server is running, enter the following:
CIFS> server stop
2
To set idmap_backend to hash, enter the following:
CIFS> set idmap_backend hash
You can obtain unique user IDs (UIDs) or group IDs (GIDs) from domains by
reading ID mappings from an Active Directory server that uses RFC2307/SFU
schema extensions. This is a read-only idmap backend. Trusted domains are
allowed if set allow_trusted_domains is set to yes. A valid user from a
domain or trusted domain should have a UID as well as a GID for the user's
primary group.
By default, the uid_range is set to 10000-1000000. Change it in cases where
there are more than 1000000 users existing on a local Veritas Access cluster
where there are joined Active Directory domains or trusted domains.
Note: The uid_range is adjusted automatically according to the search results
of the defined UNIX IDs from the domain after a CIFS server restart.
3
To set allow_trusted_domains to yes, enter the following:
CIFS> set allow_trusted_domains yes
4
To verify the CIFS server status when there are trusted domains, enter the
following:
CIFS> server status
Allowing trusted domains access to CIFS when setting an IDMAP
backend to ad
To allow trusted domains access to CIFS when setting IDMAP backend to ad
1
If the CIFS server is running, enter the following:
CIFS> server stop
2
To set the idmap_backend to ad, enter the following:
CIFS> set idmap_backend ad [uid_range]
where uid_range represents the range of identifiers that are used by Veritas
Access when mapping domain users and groups to local users and groups.
You can obtain unique user IDs (UIDs) or group IDs (GIDs) from domains by
reading ID mappings from an Active Directory server that uses RFC2307/SFU
schema extensions. This is a read-only idmap backend. Trusted domains are
allowed if set allow_trusted_domains is set to yes. A valid user from a
domain or trusted domain should have a UID as well as a GID for the user's
primary group.
By default, the uid_range is set to 10000-1000000. Change it in cases where
there are more than 1000000 users existing on a local &product_name_isa;
cluster where there are joined Active Directory domains or trusted domains.
Note: The uid_range is adjusted automatically according to the search results
of the defined UNIX IDs from the domain after a CIFS server restart.
3
To set allow_trusted_domains to yes, enter the following:
CIFS> set allow_trusted_domains yes
4
To start the CIFS server again, enter the following:
CIFS> server start
5
To verify the CIFS server status when there are trusted domains, enter the
following:
CIFS> server status
Domain names containing square brackets indicate that the domain used to
be a trusted domain, but the domain is currently obsolete.
About configuring Windows Active Directory as an IDMAP backend
for CIFS
The CIFS server requires equivalent UNIX identities for Windows accounts to service
requests from Windows clients. In the case of trusted domains, Veritas Access has
to store the mapped UNIX identities (IDMAP) in a centralized database that is
accessible from each of the cluster nodes.
Active Directory (AD), as with any LDAP v3 compliant directory service, can function as the backend for CIFS IDMAP storage. When the CIFS server joins a Windows Active Directory domain as a member server, and you want to use LDAP as an IDMAP backend, it is necessary to create an Active Directory application partition for the IDMAP database. To support the creation of an Active Directory application partition, Windows 2003 R2 or later is required.
An Active Directory application partition provides the ability to control the scope of replication and to place replicas in a manner more suitable for dynamic data. As a result, the application directory partition can host dynamic data in the Active Directory server, thus allowing ADSI/LDAP access to it.
By extending the AD schema with the necessary CIFS-schema extensions, and
creating an AD application partition, it is possible to store CIFS IDMAP data entries
in AD, using one or more domain controllers as IDMAP LDAP backend servers.
Also, it is possible to replicate this information in a simple and controlled manner
to a subset of AD domain controllers located either in the same domain or in different
domains in the AD forest.
Note: A single domain user account is used, for example, cifsuser, for setting the application partition Access Control List (ACL) settings. Make sure that the selected user naming context contains no spaces (for example, CN=cifsuser1,CN=Users,DC=example,DC=com). A sample AD server is used, for example, adserver.example.com. Use the relevant values when configuring your AD server.
Configuring the Active Directory schema with CIFS-schema
extensions
To extend the Active Directory schema with the necessary CIFS-schema
extensions
1
Log in with Schema Admins privileges on the Active Directory Forest Schema Master domain controller.
2
Download ADCIFSSchema.zip from the Veritas Access server
(/opt/SYMCsnas/install/ADCIFSSchema.zip) with software such as
WinSCP.exe.
3
Unzip the file and open each .ldf file to perform a search and replace of the
string dc=example,dc=com, replacing the string with the top-level domain
component (that is, dc=yourdomain,dc=com) values for the AD forest.
4
Install the schema extensions by executing the schemaupdate.bat file from
the command prompt.
To validate the schema extensions
1
Execute regsvr32 schmmgmt.dll in a command prompt window to install the
Active Directory Schema Snap-In on the AD server.
2
Enter mmc in Run.
3
On the File menu, click Add/Remove Snapin.
4
In Available snap-ins, click Active Directory Schema, and then click Add.
5
Click OK.
6
Click Attributes in the left frame, and try to find uidNumber and gidNumber
in the right frame.
Validate that the uidNumber and gidNumber attributes have no minimum or
maximum value setting by viewing the properties of the attribute objects.
To create an application partition
1
Open a command prompt window on the domain controller that will hold the
first replica of the application partition.
2
Enter ntdsutil in the command prompt window.
3
At the ntdsutil command prompt, enter the following:
domain management
If you are using Windows 2008, change this command to the following:
partition management
4
At the domain management command prompt, enter the following:
connection
5
At the connection command prompt, enter the following:
connect to server adserver.example.com
6
At the connection command prompt, enter the following:
quit
7
At the domain management command prompt, enter a command such as the following:
create nc dc=idmap,dc=example,dc=com null
Example settings:
C:\>ntdsutil
ntdsutil: domain management
domain management: connection
server connections: connect to server adserver.example.com
Binding to adserver.example.com ...
Connected to adserver.example.com using credentials of locally logged on user.
server connections: quit
domain management: create nc dc=idmap,dc=example,dc=com NULL
adding object dc=idmap,dc=example,dc=com
domain management: quit
ntdsutil: quit
Disconnecting from adserver.example.com...
8
Once the application partition has been created, open ADSIedit.msc from Run, then right-click on ADSI Edit in the left frame, and click connect to ... to connect to the application partition using the settings as indicated:
Name: Enter Domain.
Connection Point: Select or enter a Distinguished Name or Naming Context, as in: dc=idmap,dc=example,dc=com
Computer: Select or enter a domain or server, as in: adserver.example.com
9
Once connected, select the top-level application partition (for example,
dc=idmap,dc=example,dc=com) node in the left panel, and right-click to
select New then Object from the list, and then select SambaUnixIdPool.
When prompted, enter the following values:
OU attribute: cifsidmap
uidNumber: 10000
gidNumber: 10000
10 Click Finish to complete the configuration.
11 Once the ou=cifsidmap,dc=idmap,dc=example,dc=com container has been
created, right-click the object, and select properties.
12 On the Security tab, click Add, and proceed to add the cifsuser user account,
and grant the account Read, Write, Create All Child Objects, and Delete All
Child Objects permissions.
Configuring the LDAP client for authentication using the CLI
To configure the LDAP client for authentication using the CLI
1
Log into the cluster CLI using the master account.
2
Configure Network> ldap settings.
Example settings:
Network> ldap set basedn dc=idmap,dc=example,dc=com
Network> ldap set binddn cn=cifsuser,dc=example,dc=com
Network> ldap set rootbinddn cn=cifsuser,cn=users,dc=example,dc=com
Network> ldap set server adserver.example.com
Network> ldap enable
Configuring the CIFS server with the LDAP backend
To configure the CIFS server with the LDAP backend
1
Log in to the Veritas Access cluster CLI using the master account.
2
Set the domain, domaincontroller, and domainuser.
3
Set security to ads.
4
Set idmap_backend to ldap, and specify idmap OU as cifsidmap.
Example settings:
CIFS> set domain example.com
CIFS> set domainuser administrator
CIFS> set domaincontroller adserver.example.com
CIFS> set security ads
CIFS> set idmap_backend ldap cifsidmap
CIFS> server start
5
Start the CIFS server.
The CIFS server will take some time to import all the users from the joined
domain and trusted domain(s) to the application partition. Wait for at least ten
minutes before trying to access the shares from Windows clients after starting
the CIFS server.
To validate that IDMAP entries are being entered correctly in the Active
Directory application partition, connect to the Active Directory application
partition using an LDAP administration tool, for example, LDP or ADSIEdit.
Expand the IDMAP container (ou=cifsidmap). There should be numerous
entries.
Setting Active Directory trusted domains
To enable Active Directory (AD) trusted domains
1
If the server is running, enter the following:
CIFS> server stop
2
To enable trusted domains, enter the following:
CIFS> set allow_trusted_domains yes
3
To start the CIFS server, enter the following:
CIFS> server start
To disable trusted domains
1
If the server is running, enter the following:
CIFS> server stop
2
To disable trusted domains, enter the following:
CIFS> set allow_trusted_domains no
3
To start the CIFS server, enter the following:
CIFS> server start
About storing account information
Veritas Access maps between the domain users and groups (their identifiers) and
local representation of these users and groups. Information about these mappings
can be stored locally on Veritas Access or remotely using the DC directory service.
Veritas Access uses the idmap_backend configuration option to decide where this
information is stored.
This option can be set to one of the following:
rid - Maps SIDs for domain users and groups by deriving UID and GID from RID on the Veritas Access CIFS server.
ldap - Stores the user and group information in the LDAP directory service.
hash - Maps SIDs for domain users and groups to 31-bit UID and GID by the implemented hashing algorithm on the Veritas Access CIFS server.
ad - Obtains unique user IDs (UIDs) or group IDs (GIDs) from domains by reading ID mappings from an Active Directory server that uses RFC2307/SFU schema extensions.
Note: SID/RID are Microsoft Windows concepts that can be found at:
http://msdn.microsoft.com/en-us/library/aa379602(VS.85).aspx.
The rid and hash values can be used in any of the following modes of operation:
■ Standalone
■ AD domain
rid is the default value for idmap_backend in all of these operational modes. The
ldap value can be used if the AD domain mode is used.
When security is set to user, the idmap_backend setting is irrelevant.
Table 9-4 Store account information commands

Command: set idmap_backend rid
Definition: Configures Veritas Access to store information about users and groups locally.
Trusted domains are allowed if allow_trusted_domains is set to yes. The uid_range is set to 10000-1000000 by default.
Change the default range in cases where it is not appropriate to accommodate local Veritas Access cluster users, Active Directory, or trusted domain users.
Do not attempt to modify LOW_RANGE_ID (10000) if user data has already been created or copied on the CIFS server. This may lead to data access denied issues since the UID changes.
See “Storing user and group accounts” on page 143.

Command: set idmap_backend hash
Definition: Allows you to obtain the unique SID to UID/GID mappings by the implemented hashing algorithm. Trusted domains are allowed if allow_trusted_domains is set to yes.
See “Storing user and group accounts” on page 143.

Command: set idmap_backend ad
Definition: Allows you to obtain unique user IDs (UIDs) or group IDs (GIDs) from domains by reading ID mappings from an Active Directory server that uses RFC2307/SFU schema extensions.
See “Storing user and group accounts” on page 143.

Command: set idmap_backend ldap
Definition: Configures Veritas Access to store information about users and groups in a remote LDAP service. You can only use this command when Veritas Access is operating in the AD domain mode. The LDAP service can run on the domain controller or it can be external to the domain controller.
Note: For Veritas Access to use the LDAP service, the LDAP service must include both RFC 2307 and proper schema extensions.
See “Configuring the LDAP client for authentication using the CLI” on page 139.
This option tells the CIFS server to obtain SID to UID/GID mappings from a common LDAP backend. This option is compatible with multiple domain environments. So allow_trusted_domains can be set to yes.
If idmap_backend is set to ldap, you must first configure the Veritas Access LDAP options using the Network> ldap commands.
See “About configuring LDAP settings” on page 52.
See “Storing user and group accounts” on page 143.
Storing user and group accounts
To set idmap_backend to rid
1
If the server is running, enter the following:
CIFS> server stop
2
To store information about user and group accounts locally, enter the following:
CIFS> set idmap_backend rid [uid_range]
where uid_range represents the range of identifiers that are used by Veritas
Access when mapping domain users and groups to local users and groups.
The default range is 10000-1000000.
3
To start the CIFS server, enter the following:
CIFS> server start
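For example, to store the mappings locally but widen the identifier range beyond the default (the range shown is illustrative), you might enter:
CIFS> server stop
CIFS> set idmap_backend rid 10000-2000000
CIFS> server start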
To set idmap_backend to LDAP
1
To make sure that you have first configured LDAP, enter the following:
Network> ldap show
2
If the CIFS server is running, enter the following:
CIFS> server stop
3
To use the remote LDAP store for information about the user and group
accounts, enter the following:
CIFS> set idmap_backend ldap [idmap_ou]
where idmap_ou represents the CIFS idmap Organizational Unit Name (OU)
configured on the LDAP server, which is used by Veritas Access when mapping
users and groups to local users and groups. The default value is cifsidmap.
4
To start the CIFS server, enter the following:
CIFS> server start
To set idmap_backend to a hash algorithm
1
If the CIFS server is running, enter the following:
CIFS> server stop
2
To store information about user and group accounts locally, enter the following:
CIFS> set idmap_backend hash
3
To start the CIFS server, enter the following:
CIFS> server start
To set idmap_backend to ad
1
If the CIFS server is running, enter the following:
CIFS> server stop
2
To obtain the unique UID/GID from domains by reading ID mappings from an
Active Directory (AD) server, enter the following:
CIFS> set idmap_backend ad [uid_range]
where uid_range represents the range of identifiers that are used by Veritas
Access when mapping domain users and groups to local users and groups.
The default range is 10000-1000000. Change it in cases where there are more
than 1000000 users existing on a local Veritas Access cluster where there are
joined Active Directory domains or trusted domains.
Note: The uid_range is adjusted automatically according to the search results
of the defined UNIX IDs from the domain after a CIFS server restart.
3
To start the CIFS server, enter the following:
CIFS> server start
Reconfiguring the CIFS service
Sometime after you have configured the CIFS service and used it for a while, you may need to change some of the settings. For example, you may want to allow the use of trusted domains, or you may need to move Veritas Access from one security domain to another. To carry out these changes, set the new settings and then start the CIFS server. As a general rule, you should stop the CIFS service before making the changes.
An example where Veritas Access is moved to a new security domain (while the mode of operation stays unchanged as AD domain) is referenced below.
This example deals with reconfiguring CIFS. So, if any of the other AD services such as DNS or NTP are used by Veritas Access, make sure that Veritas Access has already been configured to use these services from the AD server belonging to the new domain.
Make sure that the DNS service, the NTP service, and, if it is used as an ID mapping store, the LDAP service are configured as required for the new domain.
To reconfigure the CIFS service, do the following:
■ Make sure that the server is not running.
■ Set the domain user, domain, and domain controller.
■ Start the CIFS server.
To set the user name for the AD
1
To verify that the CIFS server is stopped, enter the following:
CIFS> server status
2
If the server is running, stop the server, and enter the following:
CIFS> server stop
3
To set the user name for the AD, enter the following:
CIFS> set domainuser username
where username is the name of an existing AD domain user who has permission
to perform the join domain operation.
To set the AD domain
◆ To set the AD domain, enter the following:
CIFS> set domain domainname
where domainname is the name of the domain. This command also sets the system workgroup. For example:
CIFS> set domain VERITASDOMAIN.COM
To set the AD server
◆ To set the AD server, enter the following:
CIFS> set domaincontroller servername
where servername is the AD server IP address or DNS name.
If you use the AD server name, you must configure Veritas Access to use a
DNS server that can resolve this name.
To start the CIFS server
1
To start the CIFS server, enter the following:
CIFS> server start
2
To make sure that the service is running, enter the following:
CIFS> server status
3
To find the current settings, enter the following:
CIFS> show
About mapping user names for CIFS/NFS sharing
The CIFS server uses user name mapping to translate login names sent by a
Windows client to local or remote UNIX user names. The CIFS server uses file
lookup for mapping, and this mapping is unidirectional. You can map a CIFS user
to an NFS user, but the reverse operation is not possible.
This functionality can be used for the following purposes:
■ CIFS and NFS sharing by mapping CIFS users to NFS users
■ File sharing among CIFS users by mapping multiple CIFS users to a single UNIX user
■ Mapping between two UNIX users by using the CIFS> mapuser add <CIFSusername> LOCAL <NFSusername> command, where both the CIFS user and the NFS user are UNIX users
User name mapping is stored in a configuration file.
When user name mapping takes place depends on the current security configuration. If security is set to user, mapping is done prior to authentication, and a password must be provided for the mapped user name. For example, suppose there is a mapping between the users CIFSuser1 and NFSuser1. If CIFSuser1 wants to connect to the Veritas Access server, then CIFSuser1 needs to provide the password for NFSuser1. In this case, NFSuser1 must be the CIFS local user.
If security is set to either ads or domain, user name mapping is done after authentication with the domain controller. This means that the actual password must be supplied for the login user CIFSuser1 in the example cited above. In this case, NFSuser1 may not be the CIFS local user.
The domain you specify for CIFS user name mapping must be the netbios domain
name (instead of the Active Directory DNS domain name) for the user. For example,
a netbios domain name might be listed as VERITASDOMAIN instead of
VERITASDOMAIN.COM (without the .com extension).
To determine the netbios domain name, login to your Active Directory Server and
type the following in a command window:
set | findstr DOMAIN
The results will include:
USERDOMAIN netbios_domain_name
USERDNSDOMAIN Active_Directory_DNS_domain_name
Use the value of USERDOMAIN (the netbios domain name) when you map user names.
Note: When setting quotas on home directories and using user name mapping,
make sure to set the quota on the home directory using the user name to which
the original name is mapped.
Note: For mapped Active Directory users to access their home directory CIFS
shares, use the following convention: \\access\realADuser instead of
\\access\homes.
Note: For UNIX (LDAP/NIS/local) users, make sure to set up these users properly so that they are recognized by Samba. User mapping can work properly only after these users are recognized by Samba.
About the mapuser commands
The CIFS> mapuser commands are used to add, remove, or display the mapping
between CIFS and NFS users.
Typically, a CIFSusername is a user coming from an AD server (with a specified
domainname), or a locally created CIFS user on this system (local). An
NFSusername is a user coming from a locally-created CIFS user on this system,
or from a NIS/LDAP server configured in the network section.
Note: To make sure user mappings work correctly with a NIS/LDAP server,
Network> nsswitch settings may need to be adjusted in the Network> nsswitch
section. You may need to move the position of ldap or nis in the Network>
nsswitch section, depending on which name service is being used first.
Adding, removing, or displaying the mapping
between CIFS and NFS users
To add a mapping between a CIFS and an NFS user
◆ To add a mapping between a CIFS and an NFS user, enter the following:
CIFS> mapuser add CIFSusername domainname NFSusername
To remove a mapping between a CIFS and an NFS user
◆ To remove a mapping between a CIFS and an NFS user, enter the following:
CIFS> mapuser remove CIFSusername [domainname]
To display a mapping between a CIFS and an NFS user
◆ To display a mapping between a CIFS and an NFS user, enter the following:
CIFS> mapuser show [CIFSusername] [domainname]
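For example, using the sample names from this chapter (CIFSuser1, NFSuser1, and the NetBIOS domain VERITASDOMAIN), the mapping commands might look like the following:
CIFS> mapuser add CIFSuser1 VERITASDOMAIN NFSuser1
CIFS> mapuser show CIFSuser1 VERITASDOMAIN
CIFS> mapuser remove CIFSuser1 VERITASDOMAIN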
Automatically mapping of UNIX users from LDAP
to Windows users
To automatically map UNIX users from LDAP to Windows users
1
Ensure that Veritas Access joins the LDAP domain using network ldap.
From the LDAP server, users and groups should be visible by using the getent passwd or getent group commands.
2
Ensure that Veritas Access joins the Windows AD domain using cifs.
3
Use the wildcard mapping rule CIFS> mapuser add * AD Domain Name *.
The effect is that whenever a Windows domain user, say DOM\foobar, wants to access CIFS shares, the CIFS server determines if there is a local (non-Windows) user also named foobar, and establishes the mapping between the Windows user and the non-Windows user.
The user name must match between the LDAP and AD domains.
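For example, if the NetBIOS domain name is DOM (as in the DOM\foobar example above), the wildcard rule is entered as:
CIFS> mapuser add * DOM *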
About managing home directories
You can use Veritas Access to store the home directories of CIFS users.
The home directory share name is identical to the Veritas Access user name. When
Veritas Access receives a new CIFS connection request, it checks if the requested
share is one of the ordinary exported shares. If it is not, Veritas Access checks if
the requested share name is the name of an existing Veritas Access user (either
local user or domain user, depending on the current mode of operation). If a match
is found, it means that the received connection request is for a home directory
share.
You can access your home directory share the same way you access the file system
ordinary shares. A user can connect only to his or her own home directory.
Note: The internal directory structure of home directory file systems is maintained by Veritas Access. It is recommended not to use a file system as a homedirfs that has been used by a normal share in the past, or vice versa.
Setting the home directory file systems
Home directory shares are stored in one or more file systems. A single home
directory can exist only in one of these file systems, but a number of home directories
can exist in a single home directory file system. File systems that are to be used
for home directories are specified using the CIFS> set homedirfs command.
When a file system is exported as a homedirfs, its mode is set to a 0755 value. This
takes place when you start the CIFS server after setting the homedirfs list.
Note: Snapshots cannot be shared as home directory file systems.
To specify one or more file systems as the home directories
1
To reserve one or more file systems for home directories, enter the following:
CIFS> set homedirfs [filesystemlist]
where filesystemlist is a comma-separated list of names of the file systems
which are used for the home directories.
2
If you want to remove the file systems you previously set up, enter the command
again, without any file systems:
CIFS> set homedirfs
3
To find which file systems (if any) are currently used for home directories, enter
the following:
CIFS> show
After you select one or more of the file systems to be used in this way, you
cannot export the same file systems as ordinary CIFS shares.
If you want to change the current selection, for example, to add an additional
file system to the list of home directory file systems or to specify that no file
system should be used for home directories, you have to use the same CIFS>
set homedirfs command. In each case you must enter the entire new list of
home directory file systems, which may be an empty list when no home directory
file systems are required.
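For example, to reserve two file systems named fs1 and fs2 (placeholder names) for home directories, and later to clear the list again, you might enter:
CIFS> set homedirfs fs1,fs2
CIFS> show
CIFS> set homedirfs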
Veritas Access treats home directories differently from ordinary shares. The
differences are as follows:
■ An ordinary share is used to export a file system, while a number of home directories can be stored in a single file system.
■ The file systems used for home directories cannot be exported as ordinary shares.
■ Exporting a home directory share is done differently than exporting an ordinary share. Also, removing these two kinds of shares is done differently.
■ The configuration options you specify for an ordinary share (such as read-only or use of opportunistic locks) are different from the ones you specify for a home directory share.
Setting up home directories
You can set the home directory for the specified user with the CIFS> homedir set
command. If the home directory does not exist for the specified user, the CIFS>
homedir set command creates that user's home directory.
Use the Storage> quota cifshomedir set command to set the quota value for
the specified user. Otherwise, the value set from the Storage> quota cifshomedir
setdefault command is used to configure the quota limit. If either the user or
default quota is not set, 0 is used as the default value for the unlimited quota.
Once the global quota value is specified, the value applies to the automatically
created homedir. For example, if you set the global quota value to Storage> quota
cifshomedir setdefault 100M, and you then create a new homedir in Windows,
then the 100M quota value is assigned to that homedir.
To set the home directory for the specified user
1
To set the home directory for the specified user, enter the following:
CIFS> homedir set username [domainname] [fsname]
username - The name of the CIFS user. If a CIFS user name includes a space, enter the user name with double quotes.
domainname - The domain for the new home directory.
fsname - The home directory file system where the user's home directory is created. If no file system is specified, the user's home directory is created on the home directory file system that has the fewest home directories.
2
To find the current settings for a home directory, enter the following:
CIFS> homedir show [username] [domainname]
username - The name of the CIFS user. If a CIFS user name includes a space, enter the user name with double quotes.
domainname - The Active Directory/Windows NT domain name, or specify local for the Veritas Access local user.
3
To find the current settings for all home directories, enter the following:
CIFS> homedir show
Because the CIFS> homedir show command takes a long time when there are more than 1000 CIFS home directories to display, you are prompted whether you want to continue displaying CIFS home directories or not.
When you connect to your home directory for the first time, and if the home directory has not already been created, Veritas Access selects one of the available home directory file systems and creates the home directory there. The file system is selected in a way that tries to keep the number of home directories balanced across all available home directory file systems. The automatic creation of a home directory does not require any commands, and is transparent to both the users and the Veritas Access administrators.
The quota limits the amount of disk space you can allocate for the files in a home directory.
You can set the same quota value for all home directories using the Storage> quota cifshomedir setall command.
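As an illustration (user1 and fs1 are placeholder names), the default quota and an individual home directory might be set up as follows:
Storage> quota cifshomedir setdefault 100M
CIFS> homedir set user1 local fs1
CIFS> homedir show user1 local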
Displaying home directory usage information
You can display information about home directories using the CIFS> homedir show
command.
Note: Information about home directory quotas is up-to-date only when you enable
the use of quotas for the home directory file systems.
To display information about home directories
1
To display information about a specific user's home directory, enter the following:
CIFS> homedir show [username] [domainname]
username - The name of the CIFS user. If a CIFS user name includes a space, enter the user name with double quotes.
domainname - The domain where the home directory is located.
2
To display information about all home directories, enter the following:
CIFS> homedir show
Deleting home directories and disabling creation of home directories
You can delete a home directory share. This also deletes the files and sub-directories
in the share.
After a home directory is deleted, if you try to access the same home directory
again, a new home directory will automatically be created.
If you have an open file when the home directory is deleted, and you try to save the file, a warning appears in the Save dialog:
Warning: Make sure the path or filename is correct.
Click on the Save button, which saves the file to a new home directory.
To delete a home directory share
◆ To delete the home directory of a specific user, enter the following:
CIFS> homedir delete username [domainname]
username - The name of the CIFS user. If a CIFS user name includes a space, enter the user name with double quotes.
domainname - The domain it is located in.
Respond with y(es) or n(o) to confirm the deletion.
You can delete all of the home directory shares with the CIFS> homedir deleteall
command. This also deletes all files and subdirectories in these shares.
After you delete the existing home directories, you can again create the home
directories manually or automatically.
To delete the home directories
◆ To delete all home directories, enter the following:
CIFS> homedir deleteall
Respond with y(es) or n(o) to confirm the deletion.
After you delete the home directories, you can stop Veritas Access serving
home directories by using the CIFS> set homedirfs command.
To disable creation of home directories
◆ To specify that there are no home directory file systems, enter the following:
CIFS> set homedirfs
After these steps, Veritas Access does not serve home directories.
About CIFS clustering modes
The following clustering modes are supported by Veritas Access:
■ Normal
■ Clustered Trivial Database (CTDB) - a cluster implementation of the TDB (Trivial database) based on the Berkeley database API
The following operating modes are supported by Veritas Access:
■ User
■ Domain
■ ADS
Each clustering mode supports all three operating modes. The ctdb clustering
mode is a different clustered implementation of Veritas Access CIFS, which supports
almost all of the features that are supported by normal clustering mode, as well as
some additional features.
Additional features supported in ctdb clustering mode:
■ Directory-level share support
■ Multi-instance share export of a file system/directory
■ Simultaneous access of a share from multiple nodes, and therefore better load
balancing
About switching the clustering mode
You can switch from normal to ctdb clustering mode or from ctdb to normal clustering
mode. You must stop the CIFS server prior to switching to any cluster mode.
See “About CIFS clustering modes” on page 155.
About migrating CIFS shares and home directories
You can migrate CIFS shares and home directories from normal to ctdb clustering
mode and from ctdb to normal clustering mode.
Veritas Access automatically migrates all CIFS shares and home directories while
switching from one clustering mode to another. However, it is not possible to migrate
directory-level shares in the normal clustering mode, because directory-level sharing
is not supported in normal clustering mode.
Automatic migration of the content of users (that is, users' home directories) from
one file system to another file system while switching home directories is not
supported. So, if a Veritas Access administrator changes home directories from fs1
to fs2, then users' home directories are not migrated from fs1 to fs2 automatically.
While migrating from normal to ctdb clustering mode, a simple share is created for
each split share, because splitting shares is not supported in ctdb clustering mode.
Migrating CIFS shares and home directories from normal to ctdb
clustering mode
To migrate CIFS shares and home directories from normal to ctdb clustering
mode
1
To check the CIFS server status to confirm that the current cluster mode is set
to normal, enter the following:
CIFS> server status
2
To list the CIFS shares and home directories, enter the following:
CIFS> share show
3
To stop the CIFS server before changing the clustering mode to ctdb, enter
the following:
CIFS> server stop
CIFS> set clustering_mode ctdb
4
To start the CIFS server in ctdb clustering mode and check the CIFS server
status, enter the following:
CIFS> server start
CIFS> server status
5
To verify that all the CIFS shares and home directories are properly migrated
to the ctdb clustering mode, enter the following:
CIFS> share show
CIFS> homedir show
Migrating CIFS shares and home directories from ctdb to normal
clustering mode
If a file system is exported as multiple CIFS shares in ctdb clustering mode, then
while migrating to normal clustering mode, Veritas Access creates only one CIFS
share, whichever comes first in the list.
To migrate a CIFS share and home directory from ctdb to normal clustering
mode
1
To check the status of the CIFS server, enter the following:
CIFS> server status
2
To list the CIFS shares and home directories, enter the following:
CIFS> share show
CIFS> homedir show
3
To stop the CIFS server to switch the clustering mode to normal, enter the
following:
CIFS> server stop
CIFS> set clustering_mode normal
4
To start the CIFS server in normal clustering mode, enter the following:
CIFS> server start
If you get a warning message, it indicates that Veritas Access was unable to
migrate a directory-level share to normal clustering mode. The rest of the
CIFS shares and home directories are migrated.
5
To list the CIFS shares and home directories after migrating to normal clustering
mode, enter the following:
CIFS> share show
CIFS> homedir show
Setting the CIFS aio_fork option
The CIFS> set aio_size option allows you to set an Asynchronous I/O (AIO)
read/write size with an unsigned integer.
To set the aio_fork option
◆
To set the aio_fork option, enter the following:
CIFS> set aio_size size
where size is the AIO read/write size.
If size is set to a value other than 0, the aio_fork option is enabled and size
is used as the AIO read/write size. If size is set to 0, the aio_fork option is
disabled and the AIO read/write size is set to 0.
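For example, assuming a sample size of 1024 (an arbitrary value, not a
recommendation), you might enter the following to enable aio_fork, and later
set the size back to 0 to disable it:
CIFS> set aio_size 1024
CIFS> set aio_size 0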
About managing local users and groups
When Veritas Access is operating in the standalone mode, only the local users and
groups of users can establish CIFS connections and access the home directories
and ordinary shares. The Veritas Access local files store the information about
these user and group accounts. Local procedures authenticate and authorize these
users and groups based on the use of names and passwords. You can manage
the local users and groups as described in the rest of this topic.
Accounts for local users can be created, deleted, and information about them can
be displayed using the CIFS> local user commands.
Creating a local CIFS user
To create the new local CIFS user
◆
To create a local CIFS user, enter the following:
CIFS> local user add username [grouplist]
where username is the name of the user. The grouplist is a comma-separated
list of group names.
To set the local user password
◆
To set the local password, enter the following:
CIFS> local password username
where username is the name of the user whose password you are changing.
To display the local CIFS user(s)
1
To display local CIFS users, enter the following:
CIFS> local user show [username]
where username is the name of the user.
2
To display one local user, enter the following:
CIFS> local user show usr1
To delete the local CIFS user
◆
To delete a local CIFS user, enter the following:
CIFS> local user delete username
where username is the name of the local user you want to delete.
To change a user's group membership
◆
To change a user's group membership, enter the following:
CIFS> local user members username grouplist
where username is the local user name being added to the grouplist. Group
names in the grouplist must be separated by commas.
Configuring a local group
A local user can be a member of one or more local groups. This group membership
is used in the standalone mode to determine if the given user can perform some
file operations on an exported share. You can create, delete, and display information
about local groups using the CIFS> local group command.
To create a local group
◆
To create a local group, enter the following:
CIFS> local group add groupname
where groupname is the name of the local group.
To list all local groups
◆
To list all existing local groups, enter the following:
CIFS> local group show [groupname]
where groupname, if specified, lists all of the users that belong to that
specific group.
To delete the local CIFS groups
◆
To delete the local CIFS group, enter the following:
CIFS> local group delete groupname
where groupname is the name of the local CIFS group.
Enabling CIFS data migration
Veritas Access provides the following command for enabling CIFS data migration:
CIFS> set data_migration yes|no
To enable data migration for the CIFS server
1
To enable data migration for the CIFS server, enter the following:
CIFS> set data_migration yes
2
Restart the CIFS server by entering the following command:
CIFS> server start
3
As the Domain Administrator, map the CIFS share on the Windows domain using
isa_Cluster_Name\root.
4
Copy the data with ROBOCOPY by entering the following command in a
Windows command prompt:
C:\> ROBOCOPY /E /ZB /COPY:DATSO [windows_source_dir] [CIFS_target_dir]
Make sure you have the Windows Resource Kit Tools installed.
5
Disable the CIFS data migration option after migration completes for CIFS
server security by entering the following command:
CIFS> set data_migration no
6
Restart the CIFS server by entering the following command:
CIFS> server start
Chapter 10
Configuring Veritas Access to work with Oracle Direct NFS
This chapter includes the following topics:
■ About using Veritas Access with Oracle Direct NFS
■ About the Oracle Direct NFS architecture
■ About Oracle Direct NFS node or storage connection failures
■ Configuring an Oracle Direct NFS storage pool
■ Configuring an Oracle Direct NFS file system
■ Configuring an Oracle Direct NFS share
■ Best practices for improving Oracle database performance
About using Veritas Access with Oracle Direct
NFS
Veritas Access lets you create and manage storage for Oracle database clients.
Oracle hosts access the storage using Oracle Direct NFS (DNFS).
Oracle Direct NFS is an optimized NFS (Network File System) client that provides
faster access to NFS storage that is located on NAS storage devices. The Oracle
Database Direct NFS client integrates the NFS client functionality directly in the
Oracle software. Through this integration, the I/O path between Oracle and the NFS
server is optimized, providing significantly better performance. In addition, the Oracle
Direct NFS client simplifies and, in many cases, automates the performance
optimization of the NFS client configuration for database workloads.
The Oracle Direct NFS client outperforms traditional NFS clients, and is easy to
configure. The Oracle Direct NFS client provides a standard NFS client
implementation across all hardware and operating system platforms.
Veritas Access creates different storage pools for different Oracle object types.
Veritas Access has the following storage pools as described in Table 10-1.
Table 10-1    Veritas Access storage pools for Oracle object types

Pool name: ora_data_pool
Database object type: Oracle TABLE data files
Function: Stores TABLE data of data files. This database object type uses
striped volumes over four LUNs. The stripe size is 256K.

Pool name: ora_index_pool
Database object type: Oracle INDEX files
Function: Stores INDEX data.

Pool name: ora_temp_pool
Database object type: Temporary files
Function: Stores temporary files. The storage administrator should make sure
the LUNs are from the fastest tier. Temporary files are used for sort, merge,
or join queries.

Pool name: ora_archive_pool
Database object type: Archive logs
Function: Stores archive logs. This database object type is a concatenated
volume. Tier 2 LUNs can be used for this storage pool.

Pool name: ora_txnlog_pool
Database object type: REDO txnlog files
Function: Stores REDO transaction logs. It is recommended to assign the
fastest storage LUNs to this storage pool.
About the Oracle Direct NFS architecture
Oracle Direct NFS can issue thousands of concurrent operations due to its parallel
architecture. Every Oracle process has its own TCP connection.
See the Veritas Access Installation Guide for the supported Oracle operating
systems.
Figure 10-1 describes the data flow from the Oracle host to Veritas Access.
Figure 10-1    Oracle Direct NFS architecture
[Figure: the Oracle host processes (Recovery Manager, Transaction Log Writer
process, Database Writer process, and query slaves) communicate through the
Oracle Direct NFS layer with Veritas Access using NFS over TCP, with jumbo
frames and a large TCP window size.]
Setting a larger frame size on an interface is commonly referred to as using jumbo
frames. Jumbo frames help reduce fragmentation as data is sent over the network
and in some cases, can also provide better throughput and reduced CPU usage.
You can configure jumbo frames for Oracle Direct NFS by setting the Maximum
Transmission Unit (MTU) value.
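For example, on a generic Linux Oracle host you might verify and raise the
interface MTU with standard operating system tools (the interface name eth0 and
the value 9000 are placeholders; the MTU must also be raised on the Veritas
Access network interfaces and on every switch port in the path):
# ip link show eth0 | grep mtu
# ip link set dev eth0 mtu 9000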
About Oracle Direct NFS node or storage
connection failures
When a node or a storage connection fails, Veritas Access fails over the virtual IP
(VIP) to the healthy node. If Oracle has active transactions, some I/O can fail
during this VIP failover time window. Oracle generally issues several asynchronous
I/O requests in parallel using Oracle Direct NFS. If Oracle detects an I/O failure
while waiting for outstanding I/Os to complete, the result depends on the database
file type. If the write fails for the REDO transaction log or the database control
files, then the database instance fails. If I/O to a data file fails, then that
particular data file is taken offline. The SYSTEM and the UNDO data files are
considered critical data files; the Oracle database fails if I/O to these critical
data files fails. When the database instance fails, the database administrator
should wait until the VIP fails over to the healthy Veritas Access node. When the
VIP is online, the database administrator can start up the database and then
recover the database.
Configuring an Oracle Direct NFS storage pool
To create an Oracle Direct NFS storage pool
◆
Use the Database> pool create command to create a pool to store database
objects.
Database> pool create obj-type disk1[,disk2,...]
obj-type
Specifies the Oracle object type.
It is recommended to group the storage according to the database objects
that are stored in the file system. Oracle database objects are broadly
divided into REDO transaction logs, archived logs, table data, index, or
temporary files.
Available values include:
■ txnlog
■ data
■ index
■ temp
■ archivelog
See “About using Veritas Access with Oracle Direct NFS” on page 162.
disk1, disk2
Specifies the disks to include in the Oracle Direct NFS storage pool.
An error message displays if a disk is not present, or if the disk is
already used.
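For example, assuming placeholder disk names (substitute disks that are
available in your cluster), you might enter:
Database> pool create data disk1,disk2,disk3,disk4
Database> pool create txnlog disk5,disk6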
To destroy an Oracle Direct NFS storage pool
◆
To destroy a specified Oracle Direct NFS storage pool, enter the following:
Database> pool destroy obj-type
To list all your Oracle Direct NFS storage pools
◆
Use the Database> pool list command to list all your Oracle Direct NFS
storage pools that are configured for the database.
Database> pool list
Configuring an Oracle Direct NFS file system
To create an Oracle Direct NFS file system
◆
Use the Database> fs create command to create an Oracle Direct NFS file
system for storing objects.
Database> fs create obj-type db_name fs_name size
obj-type
Specifies the Oracle object type.
Available values include:
■ txnlog
■ data
■ index
■ temp
■ archivelog
See “About using Veritas Access with Oracle Direct NFS” on page 162.
db_name
Specifies the Oracle database name.
fs_name
Specifies the name of the file system that you want to create.
For a given database, Veritas recommends having at least three file systems
provisioned from the respective storage pools:
■ One file system for txnlog
■ One file system for data
■ One file system for archivelog
size
Specifies the size of the file system that you want to create.
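For example, assuming a placeholder database name oradb, file system name
fs_oradata, and size 100G, you might enter:
Database> fs create data oradb fs_oradata 100G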
To destroy an Oracle Direct NFS file system
◆
Use the Database> fs destroy command to destroy an Oracle Direct NFS
file system.
Database> fs destroy db_name fs_name
To list the Oracle Direct NFS file systems
◆
Use the Database> fs list command to list the Oracle Direct NFS file systems
that are created for storing database files.
Database> fs list
Configuring an Oracle Direct NFS share
To configure an Oracle Direct NFS share
1
Use the Database> share add command to share and export a file system.
After issuing the Database> share add command, database clients are able
to NFS mount the specified file system on their Oracle host.
Database> share add obj-type export_dir [client]
obj-type
Specifies the Oracle object type.
Available values include:
■ txnlog
■ data
■ index
■ temp
■ archivelog
See “About using Veritas Access with Oracle Direct NFS” on page 162.
export_dir
Specifies the directory location of the exported file system.
client
Specifies the database client.
2
Use the Database> share show command to display all the shared database
file systems.
Database> share show
3
Use the Database> share delete command to delete or unshare the exported
file system.
Database> share delete export_dir [client]
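For example, assuming a placeholder export directory /vx/fs_oradata and client
address 10.10.10.10, you might enter:
Database> share add data /vx/fs_oradata 10.10.10.10
Database> share show
Database> share delete /vx/fs_oradata 10.10.10.10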
Best practices for improving Oracle database
performance
Oracle database performance depends on I/O bandwidth and latency.
The Oracle database has the following object types:
■ DATA
■ INDEX
■ TXNLOG
■ ARCHLOG
■ TEMPFILES
See “About using Veritas Access with Oracle Direct NFS” on page 162.
The Oracle database transaction rate depends heavily on TXNLOG write latency.
Table 10-2    Best practices for improving Oracle database performance

DATA and INDEX files: Separate DATA and INDEX files into separate disk pools
with a minimum of four LUNs.

REDOLOG files: Configure REDOLOG files in a separate file system (NFS share).
The underlying storage LUNs should not be shared with other volumes or file
systems.

TEMPFILES: For data warehouse applications, keep TEMPFILES in separate pools
with a minimum of four LUNs.

TXNLOG files: Place TXNLOG files in a separate disk pool with dedicated fast
LUNs.

NFS daemon threads: To get better performance, increase NFS daemon threads to
128 or more. If the network permits, you may want to enable jumbo frames.
See “About the Oracle Direct NFS architecture” on page 163.

Veritas Access database-specific file systems: Access Veritas Access
database-specific file systems from the Oracle host using dedicated virtual IPs.
Do not use the same virtual IP for other applications.

Veritas Access file systems: Always use Oracle recommended mount options to
mount Veritas Access file systems on the Oracle host. This mount option depends
on the Oracle database version.
See the Veritas Access Installation Guide for the supported Oracle database
versions.
See the Oracle documentation for the recommended mount options.
Chapter 11
Configuring an FTP server
This chapter includes the following topics:
■ About FTP
■ Creating the FTP home directory
■ Using the FTP server commands
■ About FTP server options
■ Customizing the FTP server options
■ Administering the FTP sessions
■ Uploading the FTP logs
■ Administering the FTP local user accounts
■ About the settings for the FTP local user accounts
■ Configuring settings for the FTP local user accounts
About FTP
The file transfer protocol (FTP) server feature allows clients to access files on the
Veritas Access servers using the FTP protocol. The FTP service provides
secure/non-secure access by FTP to files in the Veritas Access servers. The FTP
service runs on all of the nodes in the cluster and provides simultaneous read and
write access to the files. The FTP service also provides configurable
anonymous access to Veritas Access.
By default, the FTP server is not running. You can start the FTP server using the
FTP> server start command. The FTP server starts on the standard FTP port
21.
The Veritas Access FTP service does not support transparent failover. During
failover due to either a shutdown or a restart of the server, the FTP client loses its
connection to the server. As a consequence, any upload or download to the FTP
service during the failover fails. Restart any upload or download to the FTP service
from the beginning after the connection to the FTP service has been re-established.
Creating the FTP home directory
Veritas Access can act as an FTP server for LDAP, NIS, or AD users, or local users.
When a user logs into the FTP server for the first time, Veritas Access retrieves the
user's home directory information from the authentication server. The authentication
server can be an LDAP, NIS, or AD server.
If the create_homedirs option is set to yes, Veritas Access creates a user's home
directory on the FTP server with the same name that was retrieved from the
authentication server. This directory is used internally. If the create_homedirs
option is set to no, the Veritas Access administrator must manually create a directory
that matches the home directory on the authentication server.
Regardless of the setting of the create_homedirs option, the Veritas Access
administrator must manually create the user's directory where the user logs in. This
directory is in the location specified by the homedir_path option. The directory must
have execute permissions set.
Using the FTP server commands
The FTP> server commands start, stop, and display the status of the FTP server.
To display the FTP server status
◆
To display the FTP server status, enter
FTP> server status
To start the FTP server
1
If the attribute user_logon is set to yes (the default value), set a value for
homedir_path.
The homedir_path must be set before the FTP server can start.
FTP> set homedir_path pathname
Where:
pathname
Specifies the location of the login directory for users. Valid values
include any path that starts with /vx/.
2
To start the FTP server, enter the following:
FTP> server start
To check the server status, enter the following:
FTP> server status
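For example, assuming a placeholder path /vx/fs1/ftp_home (any path that starts
with /vx/ is valid), you might enter:
FTP> set homedir_path /vx/fs1/ftp_home
FTP> server start
FTP> server status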
To stop the FTP server
◆
To stop the FTP server, enter the following:
FTP> server stop
To check the server status, enter the following:
FTP> server status
About FTP server options
Veritas Access lets you set various configurable options for the FTP server.
For the changes to take effect, restart the FTP server.
Table 11-1    FTP options
allow_delete
Specifies whether or not to allow users to delete files on the
FTP server. This option only applies to users. It does not
apply to anonymous logins. Anonymous logins are never
allowed to delete files.
Enter yes (default) to allow users to delete files on the FTP
server. Enter no to prevent users from deleting files on the
FTP server.
allow_non_ssl
Specifies whether or not to allow non-secure (plain-text) logins
into the FTP server. Enter yes (default) to allow non-secure
(plain-text) logins to succeed. Enter no to allow non-secure
(plain-text) logins to fail.
anonymous_login_dir
Specifies the login directory for anonymous users. Valid
values of this parameter start with /vx/. Make sure that the
anonymous user (UID:40 GID:49 UNAME:ftp) has the
appropriate permissions to read files in login_directory.
anonymous_logon
Tells the FTP server whether or not to allow anonymous
logons. Enter yes to allow anonymous users to log on to the
FTP server. Enter no (default) to not allow anonymous logons.
anonymous_write
Specifies whether or not anonymous users have write access
in their login_directory. Enter yes to allow anonymous
users to modify the contents of their login_directory. Enter no
(default) to not allow anonymous users to modify the contents
of their login_directory. Make sure that the anonymous user
(UID:40 GID:49 UNAME:ftp) has the appropriate permissions
to modify files in their login_directory.
chroot_users
Specifies whether users should be restricted to their home
directories. A value of yes limits users to their home directory.
A value of no allows users to view files in parent directories.
Users are restricted by their homedir_path. If security is local,
then chroot_users should be set to yes.
create_homedirs
Specifies if home directories should be created when a user
logs in, if the home directory does not exist. A value of yes
allows FTP to create a user's home directory, if it does not
already exist. If the value is no, then a home directory should
exist for this user, and the user should have permissions to
read and execute in this directory. Otherwise, the login fails.
homedir_path
Specifies the location of the login directory for users. Valid
values include any path that starts with /vx/. This option is
required if user_logon is set to yes.
idle_timeout
Specifies the amount of time in minutes after which an idle
connection is disconnected. Valid values for time_in_minutes
range from 1 to 600 (default value is 15 minutes).
listen_ipv6
Specifies whether the FTP service should listen on IPv6 for
connections. Valid values for this parameter are yes or no.
The default value is no.
listen_port
Specifies the port number on which the FTP service listens
for connections. Valid values for this parameter range from
10-1023. The default value is 21.
max_connections
Specifies the maximum number of simultaneous FTP clients
allowed. Valid values for this parameter range from 1-9999.
The default value is 2000.
max_conn_per_client
Specifies the maximum number of simultaneous FTP
connections that are allowed from a single client IP address.
Valid values for this parameter range from 1-9999. The default
value is 2000.
passive_port_range
Specifies the range of port numbers to listen on for passive
FTP transfers. The port_range defines a range that is
specified as startingport:endingport. A port_range of
30000:40000 specifies that port numbers starting from 30000
to 40000 can be used for passive FTP. Valid values for port
numbers range from 30000 to 50000. The default value of
this option is 30000:40000.
security
Specifies the type of users that are allowed to log in to the
FTP server. Enter nis_ldap (default) to allow users with
accounts configured on NIS or LDAP servers to log in to the
FTP server. Users that are created with the FTP > local
user add command cannot log in.
Enter local to allow users with accounts created with the FTP>
local user add command to log in to the FTP server. NIS
and LDAP users cannot log in.
The ads option allows access to users configured on Windows
Active Directory as specified in the CIFS> show command.
NIS, LDAP, and local users are not allowed to log in.
umask
Specifies the mask for permissions with which files or
directories are created using FTP.
If the umask is set to 177, then new files and directories
are created with permissions 600, which corresponds to rw-------.
The owner of the file or directory has read and write
permissions to the file or directory. Members in the users
group do not have read or write permissions.
user_logon
Specifies whether to allow FTP access for users. A value of
yes allows normal users (non-anonymous users) to log in.
If user_logon is set to yes, then the homedir_path also
must be set or the FTP server cannot start.
Customizing the FTP server options
The FTP> set commands let you set various configurable options for the FTP
server.
For the changes to take effect, the FTP server must be restarted.
To change the FTP server options
1
To view the current settings or view the pending command changes, enter the
following:
FTP> show
2
To change the required server options, use the set command.
For example, to enable anonymous logons, enter the following:
FTP> set anonymous_logon yes
3
To implement the changes, you must stop and restart the FTP server.
Enter the following:
FTP> server stop
FTP> server start
4
To view the new settings, enter the following:
FTP> show
Administering the FTP sessions
To display the current FTP sessions
◆
To display the current FTP sessions, enter the following:
FTP> session show
To display the FTP session details
◆
To display the details in the FTP sessions, enter the following:
FTP> session showdetail [filter_options]
where filter_options display the details of the sessions under specific headings.
Filter options can be combined by using ','. If multiple filter options are used,
sessions matching all of the filter options are displayed.
To display all of the session details, enter the following:
FTP> session showdetail
To terminate an FTP session
◆
To terminate one of the FTP sessions that are displayed in the FTP> session
showdetail command, enter the following:
FTP> session terminate session_id
where session_id is the unique identifier for each FTP session that is displayed
in the FTP> session showdetail output.
Uploading the FTP logs
The FTP> logupload command lets you upload the FTP server logs to a specified
URL.
To upload the FTP server logs
◆
To upload the FTP server logs to a specified URL, enter the following:
FTP> logupload url [nodename]
url
The URL where the FTP logs are uploaded. The URL supports
both FTP and SCP (secure copy protocol). If a node name is
specified, only the logs from that node are uploaded.
The default name for the uploaded file is ftp_log.tar.gz.
Passwords that are added directly to the URL are not supported.
nodename
The node on which the operation occurs. Enter the value all for
the operation to occur on all of the nodes in the cluster.
password
Use the password you already set up on the node to which you
upload the logs.
Administering the FTP local user accounts
The FTP> local user commands let you create and manage local user accounts
on the FTP server.
When you add a local user account, the user's home directory is created
automatically on the FTP server. User home directories on the FTP server are
specified by path/username where path is the home directory path configured by
the FTP > set homedir_path command.
All users are limited to their home directories and are not allowed to access files
on the FTP server beyond their home directories.
To add a local user account
1
To add a local user account, enter the following:
FTP> local user add username
where username is the name of the user whose account you want to add.
2
When the password prompt appears, enter a password for the local user.
3
Type the password again for verification.
To change a password for a local user
1
To change a password for a local user, enter the following:
FTP> local user passwd username
where username is the name of the user whose password you want to change.
2
When the password prompt appears, enter a new password, then type the
password again for verification.
To delete a local user account
◆
To delete a local user account, enter the following:
FTP> local user delete username
where username is the name of the user whose account you want to delete.
When you delete a local user account, the local user's home directory is not
deleted.
To show local user accounts
◆
To show local user accounts (and account settings) configured on the FTP
server, enter the following:
FTP> local user show
About the settings for the FTP local user accounts
By default, local user accounts on the FTP server have no limits for the following:
■ Bandwidth
■ Number of simultaneous connections
To configure limits for these options, use the FTP> local user set commands.
You can also use the FTP> local user set command to specify home directories
for local user accounts.
Local user changes are effective immediately for new connections. You do not need
to restart the FTP server.
Table 11-2    FTP local user options
bandwidth
Specifies the maximum bandwidth (in MB/second) for a local
user account on the FTP server. By default, there is no limit
on the bandwidth for local users.
max_connections
Specifies the maximum number of simultaneous connections
a local user can have to each node in the cluster. By default
there is no limit to the number of connections a local user
can have to the FTP server.
homedir
Specifies the home directory for a local user account.
The home directory you configure for a local user account is
created relative to the home directory path that is configured
by the FTP > set homedir_path command.
The default home directory value for local user accounts is
username where username is the login name for the local
user account.
For example, if the home directory path is set to
/vx/fs1/ftp_home and the user name is user1, the
default home directory for user1 is
/vx/fs1/ftp_home/user1
Changes to this value are applicable for any new connections.
Configuring a new home directory location does not migrate
any existing data in a local user's current home directory to
the new home directory.
Configuring settings for the FTP local user
accounts
To show local user settings
◆
To show the current settings for local user accounts, enter the following:
FTP> local user show
To set bandwidth
◆
To set the maximum bandwidth for a local user account, enter the following:
FTP> local user set bandwidth username max_value
username
Specifies the name of a user account.
max_value
Specifies the maximum upload bandwidth value (measured in
MB/second) for the user's account.
To set maximum connections
◆
To set the maximum number of simultaneous connections a local user can
have to the FTP server, enter the following:
FTP> local user set max_connections username number
username
Specifies the name of a user account.
number
Specifies the maximum number of simultaneous connections a user
can have to the FTP server.
To set the home directory
◆
To set the home directory for a local user account, enter the following:
FTP> local user set homedir username dir_name
username
Specifies the name of a user account.
dir_name
Specifies the name of the home directory for the local user account.
The home directory you configure for a local user account is relative to the
home directory path that is configured by the FTP> set homedir_path
command.
Changes to this value are applicable for any new connections. Configuring a
new home directory location does not migrate any existing data in a local user's
current home directory to the new home directory.
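For example, assuming placeholder names user1 and projects, and a homedir_path
of /vx/fs1/ftp_home, the following maps user1's login directory to
/vx/fs1/ftp_home/projects:
FTP> local user set homedir user1 projects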
Section 5
Managing the Veritas Access Object Store server
■ Chapter 12. Using Veritas Access as an Object Store server
Chapter 12
Using Veritas Access as an Object Store server
This chapter includes the following topics:
■ About the Object Store server
■ Use cases for configuring the Object Store server
■ Configuring the Object Store server
■ About buckets and objects
■ File systems used for objectstore buckets
■ Multi-protocol support for NFS with S3
About the Object Store server
The Veritas Access Object Store server lets you store and retrieve the data that is
stored in Veritas Access using the Amazon Simple Storage Service (S3) compatible
protocol. The protocol implements the RESTful API interface over standard HTTP
protocol.
See the ObjectAccess service (S3) APIs section in the Veritas Access RESTful API
Guide for more information on the APIs supported by the Veritas Access Object
Store server.
Features of the Object Store server:
■ High availability
■ Customization of storage layouts as per requirement
■ Concurrent access from multiple nodes
■ Scalability with multiple nodes and buckets
■ Sharing of file systems and customization of file system layout using groups
Use cases for configuring the Object Store server
You can configure the Object Store server depending on different use cases.
Use Case 1: A large number of objects per bucket is required.
■ The admin can configure a default pool without using the fs_sharing option.
■ The file system is not shared across buckets. A bucket can have a large number
of objects. The choice of file system sharing limits the number of buckets created.
Use Case 2: The admin needs a large number of buckets but does not expect a large
number of objects per bucket.
■ The admin can create a group in its authentication server and configure this
group in Object Store using the objectaccess> group set command.
■ The grouping provides options like choosing the disk pool to use, file system
type, file system sharing, file system size, and other file system options.
■ The admin can use the fs_sharing option to configure the Object Store server
to share a file system across all buckets that are created by a user of that
particular group.
■ The file system sharing allows the Object Store server to create a large number
of buckets but limits the total number of objects present across the buckets.
Use Case 3: The admin wants to control the file system used for a bucket.
■ The admin has to pre-create the required file system using the storage> fs
commands.
■ The admin can use the objectaccess> map command to map a directory of
the existing file system as a bucket for a particular user.
Configuring the Object Store server
To configure the Object Store server
1
Log on to Veritas Access using the CLISH.
2
Create a default storage pool (at least one) on the cluster.
CLISH> storage pool create pool1 disk1,disk2,disk3,disk4
3
Use the storage pool that was created in Step 2 as the default object access
pool.
You need to set the default pool, as it is required for enabling the Object Store
server.
CLISH> objectaccess set pools pool1
Note: Based on your storage requirements, you can configure different types
of storage pools by using the Object Store group commands.
4
Verify the configured storage pool.
CLISH> objectaccess show
5
Enable and start the Object Store server.
CLISH> objectaccess server enable
CLISH> objectaccess server status
6
Configure the cluster using any authentication server (AD, LDAP, or NIS).
See the following manual pages for more information on configuring AD, LDAP,
or NIS:
■ CLISH> network man ldap
■ CLISH> man cifs
■ CLISH> man nis
7
Create the access and secret keys for the authorized user, or any user in the
authentication server.
You have two options for creating the access and the secret keys, either using
the Veritas Access RESTful APIs or by using the Veritas Access helper script.
Create the access and secret keys using the Veritas Access RESTful APIs:
■ Before using the Veritas Access RESTful APIs, set the host name resolution
for the host as shown in the objectaccess> show output against ADMIN_URL.
■ See the Veritas Access RESTful API Guide on the SORT site for accessing
the Object Store server (S3) user management APIs.
■ After creating your access and secret key, you can create a bucket using
the S3 API.
Create the access and the secret keys using the Veritas Access helper script:
■ Add the ADMIN_URL name in your /etc/hosts file,
where the ADMIN_URL is admin.<cluster_name> and the port is 8144. This
URL should point to the Veritas Access management console IP address.
■ Location of the helper script:
/opt/VRTSnas/scripts/utils/objectaccess/objectaccess_client.py
■ The Veritas Access helper script can be used from any client system that
has Python installed.
■ To run the script, your S3 client needs to have the argparse and requests
Python modules. If these modules are missing, install both these modules
using pip or easy_install.
■ Create the access and the secret key using the Veritas Access helper script
by providing the user name, password, and ADMIN_URL (check the online
Help of the Veritas Access helper script for all of the provided operations
like list key and delete key).
Create a secret key:
clus_01:~ # ./objectaccess_client.py --create_key
--server admin.clus:8144 --username localuser1 --password root123
--insecure
UserName        : localuser1
AccessKeyId     : Y2FkODU2NTU2MjVhYzV
Status          : Active
SecretAccessKey : ODk0YzQxMDhkMmRjM2M5OTUzNjI5OWIzMDgyNzY
The <localuser1> is the local user created on both the Veritas Access cluster
nodes with same unique ID.
List a secret key for the specified user:
clus_01:~ # ./objectaccess_client.py --list_key --server
admin.clus:8144 --username localuser2 --password root123 --insecure
Delete a secret key for the specified user:
clus_01:~ # ./objectaccess_client.py --delete_key
ZTkyNDdjZTViM2EyMWZ --server admin.clus:8144 --username localuser2
--password root123 --insecure
■ If the Object Store server is enabled without the SSL option, you need to
add the --insecure option.
clus_01:~ # ./objectaccess_client.py --server
admin.clus:8144 --username <uname> --create_key --insecure
8
Use the following objectaccess command to see all the existing access and
secret keys in the Veritas Access cluster:
CLISH> objectaccess account user show
Changing the Object Store server options
It is possible to change an already set parameter or set new parameters by
specifying different options. For example, you can change the other Object Store
server defaults, such as fs_type, fs_size, and other options.
After setting the defaults, you can verify whether the proper value is assigned or
not.
vmdellr> objectaccess set fs_type
ecoded largefs mirrored mirrored-stripe simple striped striped-mirror
vmdellr> objectaccess set fs_type simple
ACCESS ObjectAccess INFO V-288-0 Set fs_type successful.
vmdellr> objectaccess set fs_size 2G
ACCESS ObjectAccess INFO V-288-0 Set operation successful.
vmdellr> objectaccess show
Name              Value
=============     =========================
Server Status     Enabled
Admin_URL         http://admin.vmdellr:8144
S3_URL            http://s3.vmdellr:8143
admin_port        8144
s3_port           8143
ssl               no
poollist          ['pool1']
fs_size           2G
fs_blksize        8192
fs_pdirenable     no
fs_encrypt        off
fs_type           simple
Using the group option for bucket creation
If you have multiple users, and you want to set different default values for different
sets of users, you can use the group option.
You can also use the group option to use the existing file systems for bucket creation
instead of creating a new file system for every bucket. If you set the group
fs_sharing option to yes, and if any request for bucket creation comes from a user
who is part of that group, then the S3 server searches for any existing file system
created by the specific group user. If an existing file system is found, it uses the
existing file system. Otherwise, it creates a new file system for the bucket.
To use the group option
1
Create a group in the authentication server (AD/LDAP/NIS) and add the required
users to that group.
2
Set the group specific configuration for the group created in the authentication
server.
3
Set or unset the defaults per your requirements.
vmdellr> objectaccess group set fs_type simple VRTS-grp
ACCESS ObjectAccess INFO V-288-0 Group set fs-type successful.
vmdellr> objectaccess group set pool VRTS-grp pool1
ACCESS ObjectAccess INFO V-288-0 Success.
vmdellr> objectaccess group show
Group Name    Fs Sharing    Fs Size    Fs Type    Pool(s)
==========    ==========    =======    =======    =======
VRTS-grp      -             -          simple     pool1

vmdellr> objectaccess group show
Group Name    Fs Sharing    Fs Size    Fs Type    Pool(s)
==========    ==========    =======    =======    =======
VRTS-grp      -             -          -          pool1
About buckets and objects
The Object Store server consists of a collection of objects. The container of an
object is known as a bucket. In Veritas Access Object Store, the buckets are stored
on file systems as directories and objects are stored as files.
Buckets and objects are resources which can be managed using the APIs.
Once the Object Store server is configured, you can create buckets and objects and
perform the required operations.
Veritas Access supports the following methods for accessing the buckets and the
objects:
■ Path-style method
■ Virtual-hosted-style method
When using the virtual hosted-style method, the bucket_name.s3.cluster_name
should be DNS resolvable.
See the objectaccess_bucket(1) manual page for more information.
See the objectaccess manual pages for all of the Veritas Access Object Store
server operations that can be performed.
Buckets are created by S3 clients by calling the standard S3 APIs to the Veritas
Access S3 server. For creating a bucket, you need the endpoint of the Veritas
Access server, access key, and the secret key. The endpoint of the Veritas Access
Object Store server is s3.cluster_name:8143.
The Veritas Access Object Store server can also be accessed using the fully qualified
domain name:
s3.cluster_name.fqdn:8143
Make sure that you associate one (or more) of the VIPs of the Veritas Access cluster
to s3.cluster_name.fqdn in the client's DNS server.
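For example, with a generic S3 client such as the AWS CLI, and placeholder values
for the bucket name, keys, and cluster name (use http when SSL is disabled on the
Object Store server), you might enter:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
aws s3api create-bucket --bucket bucket1 --endpoint-url http://s3.clustername.fqdn:8143
aws s3api list-buckets --endpoint-url http://s3.clustername.fqdn:8143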
Table 12-1 describes the restrictions enforced by the Veritas Access Object Storage
Server. Configure your S3 clients within these limitations to ensure that Veritas
Access works correctly.
Table 12-1    Object and bucket restrictions

Maximum recommended parallel threads: 10
Maximum number of buckets per file system with fs_sharing enabled: 10000
Maximum number of objects per file system: 1 billion
Maximum supported size of an object that can be uploaded using a single PUT: 100 MB
Maximum number of parts supported for multipart upload: 10000
Maximum supported size of an object that can be downloaded using a single GET: 100 MB
Maximum number of grantees supported for setting ACL on buckets/objects: 128
File systems used for objectstore buckets
Veritas Access supports the following file systems for creating buckets:
■ Ecoded
■ Largefs
■ Mirrored
■ Mirrored-stripe
■ Simple
■ Striped
■ Striped-mirror
Multi-protocol support for NFS with S3
Veritas Access supports multi-protocol support for NFS with S3. If an NFS share
is present (and objects may be present in the exported path), the storage admin
can map that path as an S3 bucket (S3 over NFS). In addition, a normal file system
path can also be mapped as an S3 bucket. The buckets created by S3 APIs cannot
be exported as an NFS share (NFS over S3).
Obtaining the path to map as S3 bucket
The path has the following characteristics:
■ The path is the absolute path inside a file system.
■ The name of the bucket is the name of the directory of the path, which should
be S3 compliant.
■ The path can be either an NFS exported path or any directory in a normal file
system. You cannot use the ObjectAccess file systems (file systems having
S3 buckets created by S3 APIs).
■ No other bucket should exist with the same name.
■ No other bucket should be present either inside or outside the given path. You
can verify this using the following command:
objectaccess> bucket show
■ An NFS share should not be present before or after that directory. You can verify
this using the following command:
NFS> share show
Creating an S3 user
You can configure the cluster with any authentication server like AD/LDAP/NIS.
Then, all the users present in the authentication server can be used as S3 users.
The S3 user should be authorized to access the S3 bucket (access key and secret
key should be present for that user). You can verify using the following command:
objectaccess> account user show
See “Configuring the Object Store server” on page 183.
Mapping the path to the S3 bucket for the user
You can map the path to the S3 bucket for the user using the following command:
objectaccess> map <path> <user>
The storage admin can verify the bucket creation using the following command:
objectaccess> bucket show
Using the multi-protocol feature
The storage admin can use the NFS share at the same time when the S3 user uses
the bucket. Existing objects inside the bucket retain the permissions set by the
owner of those objects.
Unmapping the S3 bucket
In the multi-protocol case, an S3 user can delete a bucket without deleting all of
the objects. Deleting the bucket is equivalent to unmapping or unreferencing the bucket.
Limitations
The following limitations apply for multi-protocol support:
■ An S3 user cannot access a bucket if the bucket ownership or permissions are
changed from the NFS client.
■ Permissions that are set or modified from protocols like NFS are not honored
by S3, and vice versa.
■ The object ETag is inaccurate whenever an object is created or modified from the
NFS client. The incorrect ETag is corrected when a GET or HEAD request is
performed on the object.
■ Accessing the same object from different protocols in exclusive mode is not
supported.
Section 6
Monitoring and troubleshooting
■ Chapter 13. Monitoring events and audit logs
Chapter 13
Monitoring events and audit logs
This chapter includes the following topics:
■ About event notifications
■ About severity levels and filters
■ Configuring an email group
■ Configuring a syslog server
■ Displaying events on the console
■ Exporting events in syslog format to a given URL
■ About SNMP notifications
■ Configuring an SNMP management server
■ Configuring events for event reporting
About event notifications
Veritas Access monitors the status and health of various network and storage
components, and generates events to notify the administrator. Veritas Access
provides a mechanism to send these events to external event monitoring applications
like syslog server, SNMP trap logger, and mail servers. This section explains how
to configure Veritas Access so that external event monitoring applications are
notified of events on the Veritas Access cluster.
About severity levels and filters
Veritas Access monitors events of different severity levels. Set the severity to a
particular level to specify the severity level to include in notifications. Notifications
are sent for events having the same or higher severity.
Table 13-1 describes the valid Veritas Access severity levels in descending order
of severity.
Table 13-1    Severity levels

emerg - Indicates that the system is unusable
alert - Indicates that immediate action is required
crit - Indicates a critical condition
err - Indicates an error condition
warning - Indicates a warning condition
notice - Indicates a normal but significant condition
info - Indicates an informational message
debug - Indicates a debugging message
Veritas Access also classifies event notifications by type. Set the event filter to
specify which type of events to include in notifications. Notifications are sent only
for events matching the given filter.
The filter is set to one of the following options:
■ Network - for networking events
■ Storage - for storage-related events. For example, events related to file systems,
snapshots, disks, and pools.
■ Replication - for replication-related events.
■ All - resets the filter to show all events.
For example, if the filter is set to network, a network event triggers a notification.
A storage-related event would not trigger a notification.
Configuring an email group
Veritas Access can be configured to send email messages to users or groups of
users through an external SMTP server.
To display attributes of an email group
◆
To display attributes of an email group, enter the following:
Report> email show [group]
where group is optional, and it specifies the group for which to display the
attributes. If the specified group does not exist, an error message is displayed.
To add a new email group
◆
To add a new email group, enter the following:
Report> email add group group
where group specifies the name of the new email group and can only contain
the following characters:
■ Alpha characters
■ Numbers
■ Hyphens
■ Underscores
If the entered group already exists, then no error message is displayed. For
example:
Report> email add group alert-grp
OK Completed
Multiple email groups can be defined, each with their own email addresses,
event severity, and filter.
To add an email address to an existing group
◆
To add an email address to an existing group, enter the following:
Report> email add email-address group email-address
group
Specifies the group to which the email address is added.
The email group must already exist.
email-address
Specifies the email address to be added to the group.
To add a severity level to an existing email group
◆
To add a severity level to an existing email group, enter the following:
Report> email add severity group severity
group
Specifies the email group for which to add the severity.
The email group must already exist.
severity
Indicates the severity level to add to the email group.
See “About severity levels and filters” on page 194.
Only one severity level is allowed at one time.
You can have two different groups with the same
severity levels and filters.
Each group can have its own severity definition. You
define the lowest severity level for the group; events of
that severity and all higher severities trigger notifications.
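For example, assuming the alert-grp group shown earlier, the following sends
notifications to that group for events of severity warning and higher:
Report> email add severity alert-grp warning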
To add a filter to an existing group
◆
To add a filter to an existing group, enter the following:
Report> email add filter group filter
group
Specifies the email group for which to apply the filter.
The email group must already exist.
filter
Specifies the filter for which to apply to the group.
See “About severity levels and filters” on page 194.
The default filter is all.
A group can have more than one filter, but there may
not be any duplicate filters for the group.
To delete an email address from an existing group
◆
To delete an email address from an existing group, enter the following:
Report> email del email-address group email-address
group
Specifies the group from which to delete the email
address.
email-address
Specifies the email address from which to delete from
the group.
To delete a filter from an existing group
◆
To delete a filter from an existing group, enter the following:
Report> email del filter group filter
group
Specifies the group to remove the filter from.
filter
Specifies the filter to be removed from the group.
See “About severity levels and filters” on page 194.
The default filter is all.
To delete an existing email group
◆
To delete an existing email group, enter the following:
Report> email del group group
where group specifies the name of the email group to be deleted.
To delete a severity from a specified group
◆
To delete a severity from a specified group, enter the following:
Report> email del severity group severity
group
Specifies the name of the email group from which the
severity is to be deleted.
severity
Specifies the severity to delete from the specified group.
See “About severity levels and filters” on page 194.
To display mail server settings
◆
To display mail server settings, enter the following:
Report> email get
To add a mail server and user account
◆
To add a mail server and user account from which email notifications are sent
out, enter the following:
Report> email set [email-server] [email-user]
email-server
Specifies the external mail server from which email
notifications are sent out.
email-user
Specifies the user account from which email notifications are
sent out.
If email-user is specified, then the password for that user
on the SMTP server is required.
To delete the mail server from sending email messages
◆
To delete the mail server from sending email messages, enter the following
command without any options:
Report> email set
Configuring a syslog server
Veritas Access can be configured to send syslog messages to syslog servers based
on set severities and filters.
In Veritas Access, options include specifying the external system log (syslog) server
for event reporting, and setting the filter and the severity levels for events. Event
notifications matching configured severity levels and filters are logged to those
external syslog servers.
See “About severity levels and filters” on page 194.
To display the list of syslog servers
◆
To display the list of syslog servers, enter the following:
Report> syslog show
To add a syslog server to receive event notifications
◆
To add a syslog server to receive event notifications, enter the following:
Report> syslog add syslog-server-ipaddr
where syslog-server-ipaddr specifies the host name or the IP address of the
external syslog server.
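For example, using an illustrative IP address:
Report> syslog add 10.10.10.10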
To set the severity of syslog messages
◆ To set the severity of syslog messages to be sent, enter the following:
Report> syslog set severity value
where value indicates the severity of syslog messages to be sent.
For example:
Report> syslog set severity warning
See “About severity levels and filters” on page 194.
To set the filter level of syslog messages
◆ To set the filter level of syslog messages to be sent, enter the following:
Report> syslog set filter value
where value indicates the filter level of syslog messages to be sent.
For example:
Report> syslog set filter storage
See “About severity levels and filters” on page 194.
To display the values of the configured filter and severity level settings
◆ To display the values of the configured filter and severity level settings, enter the following:
Report> syslog get filter|severity
For example:
Report> syslog get severity
Severity of the events: err
OK Completed
To delete a syslog server from receiving message notifications
◆ To delete a syslog server from receiving message notifications, enter the following:
Report> syslog delete syslog-server-ipaddr
syslog-server-ipaddr specifies the host name or the IP address of the syslog
server.
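For example, using the same illustrative IP address:
Report> syslog delete 10.10.10.10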
Displaying events on the console
To display events on the console
◆ To display events on the console, enter the following:
Report> showevents [number_of_events]
where number_of_events specifies the number of events that you want to
display. If you leave number_of_events blank, or if you enter 0, Veritas Access
displays all of the events in the system.
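For example, to display ten events, you might enter:
Report> showevents 10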
Exporting events in syslog format to a given URL
You can export events in syslog format to a given URL.
Supported URLs for upload include:
■ FTP
■ SCP
To export events in syslog format
◆ To export events in syslog format to a given URL, enter the following:
Report> exportevents url
url
Exports the events in syslog format to the specified URL. URL
supports FTP and SCP. If the URL specifies the remote directory,
the default file name is access_event.log.
For example:
Report> exportevents
scp://[email protected]:/exportevents/event.1
Password: *****
OK Completed
About SNMP notifications
Simple Network Management Protocol (SNMP) is a network protocol to simplify
the management of remote network-attached devices such as servers and routers.
SNMP is an open standard system management interface. Information from the
Management Information Base (MIB) can also be exported.
SNMP traps enable the reporting of a serious condition to a management station.
The management station is then responsible for initiating further interactions with
the managed node to determine the nature and extent of the problem.
See “About severity levels and filters” on page 194.
Configuring an SNMP management server
To add an SNMP management server to receive SNMP traps
◆ To add an SNMP management server to receive SNMP traps, enter the following:
Report> snmp add snmp-mgmtserver-ipaddr [community_string]
snmp-mgmtserver-ipaddr specifies the host name or the IP address of the
SNMP management server.
[community_string] specifies the community name for the SNMP management
server. The default community_string is public.
You can specify either an IPv4 address or an IPv6 address.
When you use the Report> snmp show command, community_string displays
as follows:
[email protected], [email protected]
For example, if using the IP address, enter the following:
Report> snmp add 10.10.10.10
OK Completed
Report> snmp add 2001:21::11
Command completed successfully
For example, if using the host name, enter the following:
Report> snmp add mgmtserv1.veritas.com
OK Completed
SNMP traps can be sent to multiple SNMP management servers.
To display the current list of SNMP management servers
◆ To display the current list of SNMP management servers, enter the following:
Report> snmp show
To delete an already configured SNMP management server from receiving
SNMP traps
◆ To delete an already configured SNMP management server from receiving SNMP traps, enter the following:
Report> snmp delete snmp-mgmtserver-ipaddr
snmp-mgmtserver-ipaddr specifies the host name or the IP address of the
SNMP management server.
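For example, to delete the server that was added in the earlier example:
Report> snmp delete 10.10.10.10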
To set the severity for SNMP traps to be sent
◆ To set the severity for SNMP traps to be sent, enter the following:
Report> snmp set severity value
where value indicates the severity for the SNMP trap to be sent.
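For example:
Report> snmp set severity warning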
See “About severity levels and filters” on page 194.
To set the filter level of SNMP traps
◆ To set the filter level for SNMP traps, enter the following:
Report> snmp set filter value
where value indicates the filter.
For example:
Report> snmp set filter network
OK Completed
See “About severity levels and filters” on page 194.
To display the filter or the severity levels of SNMP traps to be sent
◆ To display the filter or the severity levels of SNMP traps to be sent, enter the following:
Report> snmp get filter|severity
For example:
Report> snmp get severity
Severity of the events: warning
OK Completed
Report> snmp get filter
Filter for the events: network
OK Completed
To export the SNMP MIB file to a given URL
◆ To export the SNMP MIB file to a given URL, enter the following:
Report> snmp exportmib url
where url specifies the location the SNMP MIB file is exported to.
FTP and SCP URLs are supported.
For example:
Report> snmp exportmib
scp://[email protected]:/tmp/access_mib.txt
Password: *****
OK Completed
If the url specifies a remote directory, the default file name is access_mib.txt.
Configuring events for event reporting
To reduce duplicate events
◆ To reduce the number of duplicate events that are sent for notifications, enter the following:
Report> event set dup-frequency number
where number indicates the time (in seconds) in which only one event (of the duplicate events) is sent for notifications.
For example:
Report> event set dup-frequency 120
OK Completed
To set the number of duplicate events to ignore, enter the following:
Report> event set dup-number number
where number indicates the number of duplicate events to ignore.
For example:
Report> event set dup-number 10
OK Completed
To display the time interval or the number of duplicate events sent for
notifications
◆ To display the time interval, enter the following:
Report> event get dup-frequency
To display the number of duplicate events that are sent for notifications, enter the following:
Report> event get dup-number
To set the time interval for scanning event notifications
◆ To set the time interval for scanning event notifications in /var/log/messages and /var/log/messages-*.bz2 files, enter the following:
Report> event set log-scan-frequency frequency
where frequency is the time interval in seconds for scanning the
/var/log/messages directory.
For example, to set the scan frequency to 30 seconds, enter the following:
Report> event set log-scan-frequency 30
Command completed successfully
To display the time interval for scanning event notifications
◆ To display the time interval for scanning event notifications, enter the following:
Report> event get log-scan-frequency
For example:
Report> event get log-scan-frequency
Log scan frequency (in seconds): 120 (default)
Command completed successfully
To set the from email address when sending email notifications to users
◆ To set the from email address when sending email notifications to users, enter the following:
Report> event set from-address from-email-address
where from-email-address is the from email address when sending email
notifications to users.
For example, to set the from email address to [email protected], enter
the following:
Report> event set from-address [email protected]
Command completed successfully
To display the from email address when sending email notifications to users
◆ To display the from email address when sending email notifications to users, enter the following:
Report> event get from-address
Section 7
Provisioning and managing Veritas Access file systems
■ Chapter 14. Creating and maintaining file systems

Chapter 14
Creating and maintaining file systems
This chapter includes the following topics:
■ About creating and maintaining file systems
■ About scale-out file systems
■ Considerations for creating a file system
■ Best practices for creating file systems
■ About striping file systems
■ About FastResync
■ About creating a tuned file system for a specific workload
■ About scale-out fsck
■ About managing application I/O workloads using maximum IOPS settings
■ About setting retention in files
About creating and maintaining file systems
A Veritas Access environment consists of multiple nodes that can access and
update files in the same Veritas file system at the same time. Many file systems
can be supported at the same time. You create file systems on groups of disks
called storage pools.
File systems consist of both metadata and file system data. Metadata contains
information such as the last modification date, creation time, permissions, and so
on. The total amount of the space that is required for the metadata depends on the
number of files in the file system. A file system with many small files requires more
space to store metadata. A file system with fewer larger files requires less space
for handling the metadata.
When you create a file system, you need to set aside some space for handling the
metadata. The space that is required is generally proportional to the size of the file
system. For this reason, after you create the file system, a small portion of the space
appears to be used. The space that is set aside to handle metadata may increase
or decrease as needed. For example, a file system on a 1 GB volume takes
approximately 35 MB (about 3%) initially to store metadata. In contrast, a file system
of 10 MB requires approximately 3.3 MB (30%) initially for storing the metadata.
File systems can be increased or decreased in size. SmartTier functionality is also
provided at the file system level.
See “About Veritas Access SmartTier ” on page 322.
Any file system can be enabled for deduplication.
About scale-out file systems
A scale-out file system consists of a set of on-premises file systems and a set of cloud tiers, all exposed in a single namespace. One on-premises file system stores the
metadata (including the attributes) and all the other file systems store the data.
Data is distributed among the file systems using a consistent hashing algorithm.
This separation of metadata and data allows the scale-out file system to scale
linearly.
Unlike a standard file system, a scale-out file system is Active/Passive, which means
that the file system can be online on only one node of the cluster at a time. A
scale-out file system is always active on the node where its virtual IP address is
online. A virtual IP address is associated with a scale-out file system when the file
system is created.
Veritas Access supports access to scale-out file systems using NFS-Ganesha and
S3. NFS shares that are created on scale-out file systems must be mounted on the
NFS clients using the virtual IP address that is associated with the scale-out file
system, similarly S3 buckets created on a scale-out file system must be accessed
using the same virtual IP address.
You can find the virtual IP address associated with a scale-out file system by using
the NFS> share show command or by using the objectaccess> bucket show
command based on the protocol that you are using.
S3 buckets created on a scale-out file system must be accessed using
virtual-hosted-style URL (rather than the path-style URL) and the S3 client's DNS
must be updated to this virtual IP address for the corresponding virtual-hosted-style
URL. If a bucket "bucket1" is created by the S3 client, then its virtual-hosted-style
URL would be "bucket1.s3.cluster_name:8143," where the cluster_name is the
Veritas Access cluster name and 8143 is the port on which the Veritas Access S3
server is running.
Scale-out file system specifications:
■ Twenty percent of a scale-out file system's size is devoted to the metadata file system.
■ The maximum size of a metadata file system is 10 TB.
■ The minimum size of a scale-out file system is 10 GB.
■ The maximum size of a scale-out file system is 3 PB. You can grow a scale-out file system up to 3 PB.
■ To create or grow a scale-out file system above 522 TB, you need to provide the file system size in multiples of 128 GB.
Note: Growing a scale-out file system beyond 522 TB creates additional data file systems (based on the grow size), and data movement is triggered from the old file systems to the newly added file systems, so that data is distributed evenly among all the data file systems.
■ You can shrink the scale-out file system only if its size is less than 522 TB.
■ Access the data present in a scale-out file system using NFS (both v3 and v4) and S3 (supports both AWS signature version 2 and version 4).
See “Using the NFS-Ganesha server” on page 105.
■ Ability to tier infrequently accessed data to the cloud using the cloud as a tier feature:
There can be only one on-premises tier.
There can be up to eight cloud tiers per scale-out file system.
You can move data between cloud tiers, for example, moving data from Azure to Glacier.
Configure policies to move data from or to on-premises or cloud tiers. Policies can be configured based on the access time, modification time, or pattern.
■ Azure has a limitation of 500 TB per storage account. Azure users can have 200 storage accounts per subscription. A scale-out file system supports adding multiple Azure storage accounts in a single tier. Effectively, you can attach 100 PB of Azure storage to a single tier. When multiple storage accounts are used, Veritas Access selects one of the storage accounts to store data in a round-robin manner.
New data file systems are created when you grow the scale-out file system beyond
522 TB. The pool on which the scale-out file system is created is used to create
these new file systems. There is also data movement to these new file systems so
that data is distributed evenly among all the file systems (on-premises).
The following types of clouds can be added as storage tiers for a scale-out file
system:
■ Amazon S3
■ Amazon Glacier
■ Amazon GovCloud (US)
■ Azure
■ Alibaba
■ Google cloud
■ IBM Cloud Object Storage
■ Veritas Access S3 and any S3-compatible storage provider
The data is always written to the on-premises storage tier and then data can be
moved to the cloud using a tiering mechanism. File metadata including any attributes
set on the file resides on-premises even though the file is moved to the cloud. This
cloud as a tier feature is best used for moving infrequently accessed data to the
cloud.
Amazon Glacier is an offline cloud tier, which means that data moved to Amazon
Glacier cannot be accessed immediately. An EIO error is returned if you try to read,
write, or truncate the files moved to the Amazon Glacier tier. If you want to read or
modify the data, move the data to on-premises using tier move or using policies.
The data is available after some time based on the Amazon Glacier retrieval option
you selected.
When Amazon S3, AWS GovCloud (US), Azure, Google cloud, Alibaba, IBM Cloud Object Storage, Veritas Access S3, or any S3-compatible storage provider is used as the cloud tier, the data present on these clouds can be accessed at any time (unlike in Amazon Glacier). An EIO error is returned if you try to write to or truncate the files
moved to these clouds. If you want to modify the data, move the data to on-premises
using tier move or using policies.
See the Veritas Access Cloud Storage Tiering Solutions Guide for more information.
Note: You cannot use the CIFS protocol with a scale-out file system.
See “Configuring the cloud as a tier feature for scale-out file systems” on page 224.
See “Moving files between tiers in a scale-out file system” on page 226.
Considerations for creating a file system
The following sections describe the considerations and best practices for creating
file systems.
Best practices for creating file systems
The following are the best practices for creating file systems:
■ Ensure all the disks (LUNs) in each storage pool have an identical hardware configuration.
Best performance results from a striped file system that spans similar disks. The more closely you match the disks by speed, capacity, and interface type, the better the performance you can expect. When striping across several disks of varying speeds, performance is no faster than that of the slowest disk.
■ Create striped file systems rather than simple file systems when creating your file systems.
See “About striping file systems” on page 213.
■ In a given storage pool, create all the file systems with the same number of columns.
■ Ensure that the number of disks in each storage pool is an exact multiple of the number of columns used by the file systems created in that storage pool.
■ Consider how many disks you need to add to your storage pool to grow your striped file systems.
A 5-TB file system using five columns cannot be grown in a storage pool containing 8*1-TB disks, despite having 3 TB of disk space available. Instead, create the file system with either four or eight columns, or else add 2*1-TB disks to the pool. See further examples in the table.
Use case: Storage pool with eight disks of the same size (1 TB each).
Action: Create a 5 TB striped file system with five columns.
Result: You cannot grow the file system greater than 5 TB, even though there are three unused disks.

Use case: Storage pool with eight disks of the same size (1 TB each).
Action: Create a 5 TB striped file system with eight columns.
Result: You can grow the file system to 8 TB.

Use case: Storage pool with eight disks of the same size (1 TB each).
Action: Create a 4 TB striped file system with four columns.
Result: You can grow the file system to 8 TB.

Use case: Storage pool with eight disks of the same size (1 TB each).
Action: Create a 3 TB striped file system with three columns.
Result: You cannot grow the file system to 8 TB.

Use case: Storage pool with eight disks of different sizes (3 are 500 GB each, and 5 are 2 TB each).
Action: Create an 8 TB striped file system with eight columns.
Result: You cannot create this 8-TB file system.
■ Consider the I/O bandwidth requirement when determining how many columns you require in your striped file system.
Based on the disks you have chosen, I/O throughput is limited and potentially restricted. Figure 14-1 describes the LUN throughput restrictions.
■ Consider populating each storage pool with the same number of disks from each HBA. Alternatively, consider how much of the total I/O bandwidth the disks in the storage pool can use.
If you have more than one card or bus to which you can connect disks, distribute the disks as evenly as possible among them. That is, each card or bus must have the same number of disks attached to it. You can achieve the best I/O performance when you use more than one card or bus and interleave the stripes across them.
■ Use a stripe unit size larger than 64 KB. Performance tests show 512 KB as the optimal size for sequential I/O, which is the default value for the stripe unit. A greater stripe unit is unlikely to provide any additional benefit.
■ Do not change the operating system default maximum I/O size of 512 KB.
Figure 14-1  LUN throughput - details on the LUN throughput restrictions
■ Reading from a single LUN, the performance team achieved 170 MB/sec.
■ Reading from two LUNs, the performance team achieved 219 MB/sec because of the contention at the modular array level.
■ Reading from two LUNs, where each LUN is located in a different modular storage array, the performance is 342 MB/sec.
■ Each modular array controller cache is shared when I/O is generated to multiple LUNs within the same array.
About striping file systems
You can obtain huge performance benefits by striping (RAID-0) using
software-defined storage (SDS). You achieve performance benefits regardless of
the choice of LUN configuration in your storage hardware. Striping is useful if you
need large amounts of data that is written to or read from physical disks, and
consistent performance is important. SDS striping is a good practice for all Veritas
Access use cases and workloads.
Veritas strongly recommends that you create striped file systems when creating
your file system for the following reasons:
■ Maximize the I/O performance.
■ Proportion the I/O bandwidth available from the storage layer.
■ Balance the I/O load evenly across multi-user applications running on multiple nodes in the cluster.
However, there are also pitfalls to avoid.
The following information is essential before selecting the disks to include in your striped file system:
■ Understanding of your hardware environment
■ Storage capabilities and limitations (bottlenecks)
■ Choice of LUNs (each LUN, or disk, equates to a column in an SDS-striped volume)
Figure 14-2  An example hardware configuration
■ Two nodes where each node has a dual-port HBA card. There are two active paths to each LUN.
■ Two switches with two switch ports per switch (four in total) are used to connect to the dual-port HBA cards.
■ Six switch ports per switch (12 in total) are used to connect to the six modular storage arrays.
■ Each modular storage array has one port connected to each switch (two connections for each modular storage array).
■ Each modular array has four LUNs using 11 disks. Total 24 LUNs available.
An extreme example might be if one column (equal to one LUN) is composed of
only hard disk drives (HDDs) in the storage array. All of the other columns in the
same striped volume are composed of only SSDs in the storage array. The overall
I/O performance bottlenecks on the single slower HDD LUN.
Understanding the LUN configuration and ensuring that all of the LUNs have an
identical configuration is therefore essential for maximizing performance and
achieving balanced I/O across all the LUNs.
Figure 14-3  LUN configuration
■ Each LUN is composed of 11 disks in a RAID-0 configuration.
■ There are 4 LUNs available per modular array controller.
■ There are a total of 24 LUNs available in the environment.
■ Each LUN is approximately 3 TB.
■ Because there are 2 active paths per LUN (dual-port HBA card), there is a total of 48 active paths from each node.
■ Each of the 6 modular array controllers has 4 LUNs using 11 disks. There are 24 identical LUNs available in total.
■ All 24 LUNs have an identical hardware configuration.
Figure 14-4  Volume configuration
■ 24 columns are used to balance I/O across all the hardware.
■ When creating a volume, the stripe unit needs to be specified. The performance team performed experiments with three different stripe unit sizes (64 KB, 512 KB, and 1024 KB) to understand the best choice for performance in the hardware configuration. The optimum stripe unit size in testing was 512 KB. This is also the default stripe unit size in Veritas Access.
■ By performing I/O to all 24 LUNs, the performance team achieved perfectly balanced I/O across all of the LUNs with an I/O size of 512 KB to all the path devices.
The performance team created a volume with 24 columns to use the entire storage
bandwidth in one file system. Veritas does not advise changing the operating system
default maximum I/O size of 512 KB. The optimum stripe-unit size is 512 KB.
About FastResync
The FastResync feature performs quick and efficient resynchronization of stale
mirrors (a mirror that is not synchronized). FastResync optimizes mirror
resynchronization by keeping track of updates to stored data that have been missed
by a mirror.
When FastResync has been enabled, it does not alter how you administer mirrors.
The only visible effect is that repair operations conclude more quickly.
About creating a tuned file system for a specific
workload
Veritas Access provides an easy way to create a well-tuned file system for a given
type of workload.
You can use the newly created file system for the following common client
applications:
■ Virtual machine workloads
■ Media server workloads
Streaming media represents a new wave of rich Internet content. Recent
advancements in video creation, compression, caching, streaming, and other
content delivery technology have brought audio and video together to the Internet
as rich media. You can use Veritas Access to store your rich media, videos,
movies, audio, music, and picture files.
See the Storage> fs man page for more information.
Storage> fs create pretuned media_fs 100g pool2 workload=mediaserver layout=striped 8
The workload=mediaserver option creates a file system called media_fs that is
100g in size in pool2 striped across eight disks.
Note: You can select only one workload for a specified file system. You specify the
workload when you create the file system, and you cannot change the workload
after you have created the file system.
Virtual machine workloads
A virtual machine disk file, also known as a VMDK file, is a file held in the Veritas
Access file system that represents a virtual disk for a virtual machine. A VMDK file
is the same size as the virtual disk, so VMDK file sizes are typically very large. As
writes are performed to the virtual disk within the virtual machine, the VMDK file is
populated with extents in the Veritas Access file system. Because the VMDK files
are large, they can become heavily fragmented, which gradually impedes
performance when reading and writing to the virtual disk. If you create a file system
specific to a virtual machine workload, Veritas Access internally tunes the file system
to allocate a fixed extent size of 1MB for VMDK files. The 1MB block size significantly
reduces both file system and VMDK file fragmentation while improving the virtual
machine workload performance.
Media server workloads and tunable for setting
write_throttle
Media server workloads involve heavy sequential reads and writes. Striping across
multiple disks yields better I/O latency.
See “Best practices for creating file systems” on page 211.
For media server workloads, Veritas Access provides a tunable that can help restrict
the amount of write I/O throughput. The tunable helps prevent the streaming of
information (sequential reads) from being affected by other processes performing
write I/O on the same NAS server. An example use case is as follows. You want to stream a movie; this is reading a file (sequential reads). You do not want the movie experience to pause due to buffering. Another user might be uploading new
content to the same file system (the upload is writing data to a different file). The
uploading (writing) can cause the streaming (reading) to pause due to buffering.
Veritas Access throttles the writing processes so that they do not consume too
much of the system memory.
The write_throttle value is tuned for each file system independently of other file systems. The default value is 0, which implies there is no write throttling. The throttle is per file, so when writing to multiple files at the same time, the write_throttle threshold applies to each file independently.
Setting a non-zero value for a file system prevents the number of dirty memory
pages that are associated with the file from increasing beyond the threshold. If you
set a write_throttle value of 256, then writes to a file pause to flush the file to disk
once 256 dirty memory pages have built up for the file. After the number of dirty
pages for a file reaches the write_throttle threshold, further dirtying of pages is
paused, and the file system starts flushing the file’s pages to disk, even if free
memory is available. Each memory page is 4KB of file data, so 256 pages is 1MB
of file data. Setting a value for write_throttle means a writing thread pauses upon
reaching the threshold (on the NAS server) while the file’s dirty pages are flushed
to disk, before resuming further writes to that file. Once flushed, the dirty pages
become clean pages, which means the memory (the pages) can then be reused
for perhaps pre-fetching data from disk for streaming reads. Setting a value for
write_throttle helps prevent write I/O from consuming too much of the system
memory.
Setting write_throttle requires some experimentation, which is why Veritas Access
does not set a non-zero value by default. The valid range for write_throttle is 0 to
2048 pages. A good starting value for experimentation is 256.
About scale-out fsck
The scale-out fsck operation does the following:
■ Checks the consistency of the metadata container, the data container, and the database, and repairs any inconsistencies.
■ Checks if the metadata container and the data container are marked for full fsck. If yes, scale-out fsck performs a full fsck of the corresponding file systems. Based on the actions taken by fsck on the individual file systems, the scale-out fsck operation repairs the inconsistencies in other parts of the scale-out file system.
See “About scale-out file systems” on page 208.
■ Goes through all the file handles present in the database, and checks if the corresponding metadata container and the data container file handles are consistent with each other.
In some cases, full fsck might delete files from the data container. To maintain consistency, the corresponding files from the metadata container and the data container are removed, and the corresponding key is removed from the database.
Storage> fs fsck fs1
fsck of largefs fs1 is successful
About managing application I/O workloads using
maximum IOPS settings
When multiple applications use a common storage subsystem, it is important to
ensure that a particular application does not monopolize the storage bandwidth
thereby impacting all the other applications using the same storage. It is also
important to balance the application I/O requests in a way that allows all the
applications to co-exist in a shared environment. You can address this need by
setting a maximum threshold on the I/O operations per second (MAXIOPS) for the
file system.
The MAXIOPS limit determines the maximum number of I/Os processed per second
collectively by the storage underlying the file system.
When an I/O request comes in from an application, it is serviced by the storage
underlying the file system until the application I/O reaches the MAXIOPS limit. When
the limit is exceeded for a specified time interval, further I/O requests on the
application are queued. The queued I/Os are taken up on priority in the next time
interval along with new I/O requests from the application.
You should consider the following factors when you set the MAXIOPS threshold:
■ Storage capacity of the shared subsystem
■ Number of active applications
■ I/O requirements of the individual applications
Only application-based I/Os can be managed with MAXIOPS.
MAXIOPS addresses the use case environment of multiple applications using a
common storage subsystem where an application is throttled because of insufficient
storage bandwidth while another less critical application uses more storage
bandwidth.
See the maxiops man pages for detailed examples.
About setting retention in files
The retention feature provides a way to ensure that the files are not deleted or modified for the duration of the retention period that is applied on the files. You can set, clear, and show the retention period on files from the CLISH.
The file system should be created with the worm=yes option to use the retention feature.
See the Storage> fs create man page for more information.
To set retention:
Storage> fs retention set [path] [retention_period]
Where path is the specified file or directory on which retention is set. If the specified
path is a directory, then retention is set on all the files that are currently present in
the directory.
retention_period is the duration for which retention is set. It can be in either
[1-9](d|D|m|M|y|Y) or mm-dd-yyyy format.
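For example, assuming a WORM-enabled file system fs1 that contains a file named file1, you might set a five-year retention period as follows:
Storage> fs retention set /vx/fs1/file1 5y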
To show retention:
Storage> fs retention show [path]
Where path is the specified file on which retention is set.
To clear retention:
Storage> fs retention clear [path]
Where path is the specified file or directory on which retention is cleared. If the
specified path is a directory, then it clears the retention on all the files that are
currently present in the directory.
See the Storage> fs man page for detailed examples.
Section 8
Configuring cloud storage
■ Chapter 15. Configuring the cloud gateway
■ Chapter 16. Configuring cloud as a tier
Chapter 15
Configuring the cloud gateway
This chapter includes the following topics:
■ About the cloud gateway
■ Configuring the cloud gateway
About the cloud gateway
You can configure Veritas Access as a gateway to cloud storage. You can register
cloud subscriptions to your Veritas Access cluster. Multiple cloud subscriptions can
be attached, so you need to assign a service name to each subscription. You can
then use the service name to attach the subscription to a scale-out file system as
a storage tier.
The following clouds can be added as storage tiers for a scale-out file system:
■ Alibaba
■ AWS
■ AWS Gov Cloud (US)
■ Azure
■ Google
■ S3 Compatible
■ IBM Cloud Object Storage
The cloud as a tier feature lets you have hybrid storage that uses both on-premises
storage and public or private cloud storage. After the gateway and tier are configured,
you can use the cloud as a tier feature to move data between the cloud and the
on-premises storage. The files in the cloud, like the files on the on-premises storage,
are accessible using the NFS or S3 protocol. Access of the data present in the
cloud tier is transparent to the application.
See “Configuring the cloud as a tier feature for scale-out file systems” on page 224.
Before you provision cloud storage, you set up cloud subscriptions. To set up the
cloud gateway, you attach cloud subscriptions to your Veritas Access cluster. You
need to have the subscription credentials to add the cloud subscriptions. You need
different subscription credentials based on your cloud provider.
Configuring the cloud gateway
A cloud gateway enables you to register one or more cloud services to the cluster.
You can add the cloud service as a cloud tier for a scale-out file system. Before you add a cloud tier to a scale-out file system, you must add a cloud service for that tier.
To configure the cloud service for scale-out file systems
1
Add the cloud service.
Storage> cloud addservice service_name service_provider=AWS
You are prompted to provide Amazon S3 subscription credentials.
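For example, using a hypothetical service name:
Storage> cloud addservice aws_svc1 service_provider=AWS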
2
Display the added cloud services.
Storage> cloud listservice service_name
3
Remove the cloud service.
Storage> cloud removeservice service_name
If any scale-out file system has a cloud tier associated with the service, the
remove cloud service operation fails. Remove all the tiers from all the scale-out
file systems before removing the cloud service.
Chapter 16
Configuring cloud as a tier
This chapter includes the following topics:
■ Configuring the cloud as a tier feature for scale-out file systems
■ Moving files between tiers in a scale-out file system
■ About policies for scale-out file systems
■ Obtaining statistics on data usage in the cloud tier in scale-out file systems
■ Workflow for moving on-premises storage to cloud storage for NFS shares
Configuring the cloud as a tier feature for
scale-out file systems
You can configure the following clouds as a storage tier with a scale-out file system:
■ Amazon S3
■ Amazon Glacier
■ AWS GovCloud(US)
■ Azure
■ Alibaba
■ Google cloud
■ IBM Cloud Object Storage
■ Veritas Access S3
■ Any S3-compatible storage provider
The data is always written to the on-premises storage tier and then data can be
moved to the cloud tier by setting up policies. The policies can be configured based
on access time, modification time, and pattern of the files. File metadata including
any attributes set on the file resides on-premises even though the file is moved to
the cloud. The cloud as a tier feature is best used for moving infrequently accessed
data to the cloud.
See “About scale-out file systems” on page 208.
Warning: When any cloud service is used as a cloud tier for a file system, Veritas
Access exclusively owns all the buckets and the objects created by Veritas Access.
Any attempt to tamper with these buckets or objects outside of Veritas Access
corrupts the files represented by those modified objects.
See “Moving files between tiers in a scale-out file system” on page 226.
See the storage_cloud(1) man page for detailed examples.
See the storage_tier(1) man page for detailed examples.
Warning: Veritas Access cannot add a cloud tier if the clock on the Veritas Access
system is more than 15 minutes out-of-sync with the actual time. To ensure that
the Veritas Access clock time is accurate, configure an NTP server or use the
System> clock set command.
To configure the cloud as a tier for scale-out file systems
1
Display the available cloud services.
Storage> cloud listservice service_name
If the cloud service is not listed, you may need to add the cloud subscription
to the cluster.
See “About the cloud gateway” on page 222.
2
Add the cloud as a tier to a scale-out file system.
Storage> tier add cloud fs_name tier_name service_name region S3 | Glacier
Amazon AWS has standard regions defined. Based on the region you choose
in Veritas Access, AWS storage is served through that region. To get better
performance, always select the closest geographic region.
3
Verify that the cloud tier is configured on the specified scale-out file system.
Storage> tier list fs_name
To remove the cloud tier
◆ Remove the cloud tier.
Storage> tier remove fs_name tier_name
If there are any files present in the cloud tier, the remove cloud tier operation
fails. Move the files back to the primary tier before removing the cloud tier.
Moving files between tiers in a scale-out file
system
By default, a scale-out file system has a single tier (also known as a primary tier),
which is the on-premises storage for the scale-out file system. You can add a cloud
service as an additional tier. After a cloud tier is configured, you can move data
between the tiers of the scale-out file system as needed. There can be up to eight cloud tiers per scale-out file system. For example, you can configure Azure and
AWS Glacier as two tiers and move data between these clouds.
Use the commands in this section to move data as a one-time operation. For example, if you have just set up a cloud tier, you might want to move some older data to that tier.
If you want to specify repeatable rules for maintaining data on the tiers, you can
set up a policy for the file system.
You can specify the following criteria to indicate which files or directories to move
between tiers:
■ file or directory name pattern to match
■ last accessed time (atime)
■ last modified time (mtime)
Because a scale-out file system can be large, and the size of the files to be moved
can be large as well, the Storage> tier move command lets you perform a dry
run.
See the storage_tier(1) man page.
To move data between storage tiers in a scale-out file system
1
(Optional) Perform a dry run to see which files would be moved and some
statistics about the move.
Storage> tier move dryrun fs_name source_tier destination_tier pattern
[atime condition] [mtime condition]
The dry run starts in the background. The command output shows the job ID.
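For example, using the same illustrative names that appear in the examples below, you might preview a move of all files that have not been accessed within the past 100 days:
Storage> tier move dryrun lfs1 primary cloudtier1 * atime >100d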
2
Move the files that match pattern from source_tier to destination_tier based on
the last accessed time (atime) or the last modified time (mtime).
Storage> tier move start fs_name source_tier destination_tier pattern
[atime condition] [mtime condition]
pattern is required. To include all the files, specify * for pattern.
The condition for atime or mtime includes an operator, a value, and a unit.
Possible operators are the following: <, <=, >, >=. Possible units are m, h,
and d, indicating minutes, hours, and days.
The name of the default tier is primary. The name of the cloud tier is specified
when you add the tier to the file system.
The move job starts in the background. The command output shows the job
ID.
Examples:
Move the files that match pattern and that have not been accessed within the
past 100 days to the cloud tier.
Storage> tier move start lfs1 primary cloudtier1 pattern
atime >100d
Move the files that match pattern and that have been accessed recently in the
past 30 days to the specified tier.
Storage> tier move start lfs1 cloud_tier primary pattern atime <=30d
Move the files that match pattern and that have not been modified within the
past 100 days to the cloud tier.
Storage> tier move start lfs1 primary cloud_tier pattern
mtime >=100d
Move only the files that match pattern and that have not been modified within the last three days from the cloud tier to the primary tier.
Storage> tier move start lfs2 cloud_tier primary pattern mtime >=3d
Move all files to the primary tier.
Storage> tier move start lfs2 cloud_tier primary *
3
View the move jobs that are in progress in the background. This command
lists the job IDs and the status of the job.
Storage> tier move list
Job        Fs name   Source Tier  Destination Tier  Pattern         Atime  Mtime  State
========== ========  ===========  ================  ==============  =====  =====  ===========
1473684478 largefs1  cloudtier    primary           /vx/largefs1/*  >120s  -      not running
1473684602 largefs1  cloudtier    primary           /vx/largefs1/*  -      -      scanning
4
View the detailed status of the data movement for the specified job ID.
Storage> tier move status jobid
For example:
Storage> tier move status 1473682883
Job run type:        normal
Job Status:          running
Total Data (Files):  4.0 G (100)
Moved Data (Files):  100.0 M (10)
Last file visited:   /vx/fstfs/10.txt
5
If required, you can abort a move job.
Storage> tier move abort jobid
About policies for scale-out file systems
When a scale-out file system includes a cloud tier, you can use policies to control
the movement of data between the on-premises storage and the cloud tier. A policy
is a set of rules defined for a file system for deleting data or moving data. If you
want to specify repeatable rules for maintaining data on the tiers, you can set up a
policy for the file system.
Each rule defines the following criteria:
■ what action should be taken (move or delete)
■ when the data should be moved or deleted based on the access time or modified time of the file
■ which data should be moved based on the pattern matching for the files and directories
See “About pattern matching for data movement policies” on page 230.
Each file system can have more than one rule, though you should be careful not
to create rules that conflict or cause looping.
To move or delete the data, you run the policy. When you run a policy, the rules
are applied to the data at the time the policy is run. The policy does not run
automatically. You can attach a schedule to a policy to have it run automatically at
the specified times.
See “Creating and scheduling a policy for a scale-out file system ” on page 232.
About pattern matching for data movement policies
Within a policy, you can use a pattern to specify that the rule applies to file names
or directory names that match the pattern. By using a pattern, you do not need to
know the exact file name in advance; the files that match the pattern are selected
dynamically.
A pattern uses special characters, which are case sensitive. There are the following
types of patterns:
■ Directory patterns
A pattern that ends with a slash (/) is matched only against directories.
■ File patterns
A pattern that does not end with a slash (/) is matched only against files.
The following is a list of supported special characters and their meanings:
* (asterisk)
Matches any character any number of times.
? (question mark)
Matches any single character.
** (two asterisks)
Matches across child directories recursively.
The pattern fs1/*/*.pdf will match .pdf file names
present after first sub-directory of fs1/. For example, if the
following files exist:
fs1/dir1/a.pdf
fs1/dir2/b.pdf
fs1/dir3/dir4/c.pdf
then the pattern fs1/*/*.pdf will match only a.pdf and
b.pdf. It will not match c.pdf.
The pattern fs1/**/*.pdf will match .pdf files in any
directory after fs1. For the above file list, it will match all of
the files: a.pdf, b.pdf, and c.pdf.
[ ] (square brackets)
Matches either range or set of characters. [0-5] will match
any character in range of 0 to 5. [a-g] will match any character
in range of a to g. [abxyz] will match any one character out
of a,b,x,y,z.
! (exclamation point)
Can be used as the first character in a range to invert the
meaning of the match. [!0-5] will match any character which
is not in range of 0 to 5.
\ (backslash)
Can be used as an escape character. Use this to match for
one of the above pattern matching characters to avoid the
special meaning of the character.
About schedules for running policies
A schedule is specified in a format similar to the UNIX crontab format. The format
uses five fields to specify when the schedule runs:
minute
Enter a numeric value between 0-59, or an asterisk (*), which
represents every minute. You can also enter a step value
(*/x), or a range of numbers separated by a hyphen.
hour
Enter a numeric value between 0-23, or an asterisk (*), which
represents every hour. You can also enter a step value (*/x),
or a range of numbers separated by a hyphen.
day_of_the_month
Enter a numeric value between 1-31, or an asterisk (*), which
represents every day of the month. You can also enter a step
value (*/x), or a range of numbers separated by a hyphen.
month
Enter a numeric value between 1-12, or an asterisk (*), which
represents every month. You can also use the names of the
month. Enter the first three letters of the month (you must
use lower case letters). You can also enter a step value (*/x),
or a range.
day_of_the_week
Enter a numeric value between 0-6, where 0 represents
Sunday, or an asterisk (*), which represents every day of the
week. You can also enter the first three letters of the week
(you must use lower case letters). You can also enter a step
value (*/x), or a range.
A step value (*/x) specifies that the schedule runs at an interval of x. The interval
should be an even multiple of the field's range. For example, you could specify */4
for the hour field to specify every four hours, since 24 is evenly divisible by 4.
However, if you specify */15, you may get undesired results, since 24 is not evenly
divisible by 15. The schedule would run after 15 hours, then 9 hours.
A range of numbers (two values separated by a hyphen) represents a time period
during which you want the schedule to run.
Examples: To run the schedule every two hours every day:
0 */2 * * *
To run the schedule at 2:00 a.m. every Monday:
0 2 * * 1
To run the schedule at 11:15 p.m. every Saturday:
15 23 * * 6
Creating and scheduling a policy for a scale-out file system
By default, a scale-out file system has a single disk tier, which is the on-premises
storage for the scale-out file system. You can add a cloud service as an additional
tier. After a cloud tier is configured, you can move data between the tiers of the
scale-out file system as needed.
Use policies to define a set of data movement rules for the scale-out file system.
Each file system can include a policy for deletion and a policy for data movement
between tiers.
Be careful when specifying the criteria for moving files. Conflicting policies may
cause data to move from one tier to another tier. A best practice is to use policies
with a smaller data set first before applying those policies to file systems using a
schedule.
A data movement policy can use the following criteria to indicate which files or
directories to move between tiers:
■ pattern
■ atime
■ mtime
You can also perform a dry run of a policy.
See the storage_fs(1) policy section of the manual page for detailed examples.
To create a policy
1
Create a data movement policy policy1 for file system fs1 to move the files with
file name extensions of .txt and .pdf from the primary tier (disk tier) to tier1
(cloud tier), which did not get accessed or modified for the last two days.
Storage> fs policy add operation=move policy1 fs1 primary tier1 *.txt,*.pdf atime >2d mtime >2d
ACCESS policy SUCCESS V-288-0 Policy policy1 for fs fs1 added
successfully.
2
Retrieve data from Amazon Glacier. Create a policy pol1 to move all the files
with the file name extension of .txt from Amazon Glacier to the primary tier
using the Bulk retrieval option.
Files are copied to on-premises and then deleted from Amazon Glacier. The
time when the files are available on-premises depends on the type of retrieval
option selected.
Storage> fs policy add operation=move pol1 gfs2 gtier primary
retrieval_option=Bulk \*.txt
3
Create a data deletion policy policy2 for file system fs1 to delete the files with file name extensions of .txt and .pdf from tier1 (cloud tier), which did not get accessed or modified for the last two days.
Storage> fs policy add operation=delete policy2 fs1 tier1 \*.txt,
\*.pdf atime >2d mtime >2d
ACCESS policy SUCCESS V-288-0 Policy policy2 for fs fs1 added
successfully.
4
Modify data movement policy policy1 for file system fs1 to move the files with
the file name extension of .doc, which did not get accessed or modified for the
last three days.
Storage> fs policy modify policy1 \*.doc atime >3d mtime >3d
ACCESS policy SUCCESS V-288-0 Policy policy1 modified
successfully.
5
List all the policies.
Storage> fs policy list
Name     FS name  Action  Source Tier  Destination Tier  Retrieval Option  Pattern         Atime  Mtime  State
=======  =======  ======  ===========  ================  ================  ==============  =====  =====  ===========
policy2  fs1      delete  tier1                          Standard          \*.txt, \*.pdf  >2d    >2d    not running
policy1  fs1      move    primary      tier1             Standard          \*.doc          >3d    >3d    running
6
List all the policies set for file system fs1.
Storage> fs policy list fs1
Name     FS name  Action  Source Tier  Destination Tier  Retrieval Option  Pattern         Atime  Mtime  State
=======  =======  ======  ===========  ================  ================  ==============  =====  =====  ===========
policy2  fs1      delete  tier1                          Standard          \*.txt, \*.pdf  >2d    >2d    running
policy1  fs1      move    primary      tier1             Standard          \*.doc          >3d    >3d    not running
7
Delete policy policy1 set for file system fs1.
Storage> fs policy delete policy1 fs1
ACCESS policy SUCCESS V-288-0 Policy policy1 for fs fs1 deleted
successfully.
8
Rename policy2 to policy3.
Storage> fs policy rename policy2 policy3
ACCESS policy SUCCESS V-288-0 Policy policy2 renamed to policy3.
9
Show the status of policy run for the policy Policy1.
Storage> fs policy status Policy1
Policy Name:                 Policy1
=================================================
Policy Run Type:             normal
Policy Run Status:           running
Total Data (Files):          93.1 GB (100000)
Moved/Deleted Data (Files):  47.7 MB (879)
Last File Visited:           file100.txt
10 Abort the currently running policy Policy1.
Storage> fs policy abort Policy1
ACCESS policy INFO V-288-0 Policy Policy1 aborted successfully.
11 Start a dry run of the policy Policy1.
Storage> fs policy dryrun Policy1
ACCESS policy INFO V-288-0 Policy Policy1 dryrun started in background,
please check 'fs policy status' for progress.
12 Pause the currently running policy Policy1.
Storage> fs policy pause Policy1
ACCESS policy INFO V-288-0 Policy Policy1 paused successfully.
13 Run the currently paused policy Policy1.
Storage> fs policy run Policy1
Policy Policy1 is not running currently, as it was killed/paused.
Would you like to start new run (y/n): y
ACCESS policy INFO V-288-0 Policy Policy1 run started in background,
please check 'fs policy status' for progress.
14 Resume the currently paused policy Policy1.
Storage> fs policy resume Policy1
ACCESS policy INFO V-288-0 Policy Policy1 resume started in background,
please check 'fs policy status' for progress.
Obtaining statistics on data usage in the cloud tier in scale-out file systems
You can find the following information for data stored in the cloud tier in a scale-out
file system:
■ Storage utilization in the cloud
■ Number of the objects that are stored in the cloud
See “About buckets and objects” on page 187.
■ Number of the files that are stored in the cloud
■ Number of GET, PUT, and DELETE requests
See the storage_tier(1) man page for detailed examples.
To display the number of GET, PUT, and DELETE requests
◆ Show the number of GET (read requests), PUT (update or replacement requests), and DELETE (deletion requests).
Storage> tier stats show fs_name tier_name
These statistics are maintained in memory and so are not persistent across
reboots.
Example:
Storage> tier stats show largefs1 cloudtier
GET   GET bytes  PUT   PUT bytes  DELETE
168   174.5MB    918   10.3GB     20
To monitor the usage of data in the cloud tier
◆ Monitor the usage of data in the cloud tier.
Storage> tier stats monitor fs_name tier_name [interval]
Example:
Storage> tier stats monitor largefs1 cloudtier
GET   GET bytes  PUT   PUT bytes  DELETE
6     384.0MB    4     256.0MB    6
0     0          0     0          0
0     0          0     0          0
0     0          2     128.0MB    0
0     0          0     0          0
The default interval is five seconds.
To show the total data usage in the cloud tier
◆ Show the total data usage in the cloud tier.
Storage> tier stats usage fs_name tier_name
Unlike the Storage> tier stats show command, these statistics are persistent
across reboots.
Example:
Storage> tier stats usage largefs1 cloudtier
Storage Utilized  Number of objects  Number of files
223.1GB           488                231
This example shows that 223.1 GB is used in the cloud tier. Based on the size
of the file, each file is chunked to multiple objects when moved to the cloud,
so 231 files were stored as 488 objects in the cloud.
To reset the in-memory statistics of the cloud tier to zero
◆ Reset the statistics of the specified cloud tier to zero.
Storage> tier stats reset fs_name tier_name
After executing the Storage> tier stats reset command, the output for
the Storage> tier stats show command is reset to zero.
Workflow for moving on-premises storage to cloud storage for NFS shares
Figure 16-1 describes the workflow for moving your on-premises storage to cloud
storage for NFS shares for a scale-out file system.
Figure 16-1    Workflow for moving on-premises storage to cloud storage for NFS shares for a scale-out file system
1. Create a storage pool: Storage> pool create
2. Create a scale-out file system: Storage> fs create largefs
3. Add a cloud subscription provider: Storage> cloud addservice
4. Attach a cloud tier: Storage> tier add cloud
5. Create an NFS share: NFS> share add
6. Add parameters to your NFS share: NFS> share add
7. Add or edit data retention and data movement policies: Storage> tier move or Storage> fs policy
8. Schedule policies: Storage> fs policy add
Section 9
Provisioning and managing Veritas Access shares
■ Chapter 17. Creating shares for applications
■ Chapter 18. Creating and maintaining NFS shares
■ Chapter 19. Creating and maintaining CIFS shares
■ Chapter 20. Using Veritas Access with OpenStack
Chapter 17
Creating shares for applications
This chapter includes the following topics:
■ About file sharing protocols
■ About concurrent access
■ Sharing directories using CIFS and NFS protocols
■ About concurrent access with NFS and S3
About file sharing protocols
Veritas Access provides support for multiple file sharing protocols.
Veritas Access offers unified access, which provides the option to share a file system
or a directory in a file system with more than one protocol. For unified access, only
certain protocol combinations are supported.
See “About concurrent access” on page 242.
Table 17-1    Protocols
Amazon S3
The object server lets you store and retrieve the data that is stored in
Veritas Access using the Amazon Simple Storage Service (Amazon
S3) protocol.
See “About the Object Store server” on page 182.
CIFS
CIFS is active on all nodes within the Veritas Access cluster. The
specific shares are read/write on the node they reside on, but can
failover to any other node in the cluster. Veritas Access supports CIFS
home directory shares.
See “About configuring Veritas Access for CIFS” on page 115.
FTP
Allows clients to access files on Veritas Access servers.
See “About FTP” on page 170.
NFS
All the nodes in the cluster can serve the same NFS share at the same
time in read-write mode. This creates very high aggregated throughput
rates, because you can use the sum of the bandwidth of all the nodes.
Cache-coherency is maintained throughout the cluster.
Veritas Access supports both the NFS kernel-based server and the
NFS-Ganesha server in a mutually exclusive way.
See “About using NFS server with Veritas Access” on page 104.
Oracle Direct NFS
Optimized NFS client that provides faster access to NFS storage located on NAS storage devices.
See “About using Veritas Access with Oracle Direct NFS” on page 162.
About concurrent access
Veritas Access provides support for multi-protocol file sharing where the same file
system can be exported to both Windows and UNIX users using the Common
Internet File System (CIFS), Network File System (NFS), and Simple Storage
Service (S3) protocols. The result is an efficient use of storage by sharing a single
data set across multiple application platforms.
Note: When a share is exported over both NFS and CIFS protocols, the applications
running on the NFS and CIFS clients may attempt to concurrently read or write the
same file. This may lead to unexpected results, such as reading stale data, since
the locking models used by these protocols are different. For this reason, Veritas
Access warns you when the share export is requested over NFS or CIFS and the
same share has already been exported for write access over CIFS or NFS.
The following sections describe concurrent access with multiple protocols.
See “Sharing directories using CIFS and NFS protocols” on page 243.
See “About concurrent access with NFS and S3” on page 246.
See “About file sharing protocols” on page 241.
Sharing directories using CIFS and NFS protocols
Veritas Access provides support for multi-protocol file sharing where the same
directory or file system can be exported to both Windows and UNIX users using
the CIFS and NFS protocols. The result is an efficient use of storage by sharing a
single data set across multiple application platforms.
Figure 17-1 shows how the directory sharing for the two protocols works.
Figure 17-1    Exporting and/or sharing CIFS and NFS directories
(The figure shows a 2-node cluster with file system FS1 on shared storage; a Windows user accesses the data by the CIFS protocol and a UNIX user accesses the same data by the NFS protocol.)
It is recommended that you disable the oplocks option when the following occurs:
■ A file system is exported over both the CIFS and NFS protocols.
■ Either the CIFS or NFS protocol is set with read and write permission.
To export a directory to Windows and UNIX users
1
To export a directory to Windows and UNIX users with read-only and read-write
permission respectively, enter the CIFS mode and enter the following
commands:
CIFS> show
Name                     Value
----                     -----
netbios name             Pei60
ntlm auth                yes
allow trusted domains    no
homedirfs
aio size                 0
idmap backend            rid:10000-1000000
workgroup                PEI-DOMAIN
security                 ads
Domain                   PEI-DOMAIN.COM
Domain user              Administrator
Domain Controller        10.200.107.251
Clustering Mode          normal
CIFS> share add fs1 share1 ro
Exporting CIFS filesystem : share1...
CIFS> share show
ShareName FileSystem ShareOptions
share1 fs1 owner=root,group=root,ro
Exit CIFS mode:
CIFS> exit
2
Enter the NFS mode and enter the following commands:
NFS> share add rw fs1
ACCESS nfs WARNING V-288-0 Filesystem (fs1)
is already shared over CIFS with 'ro' permission.
Do you want to proceed (y/n): y
Exporting *:/vx/fs1 with options rw
..Success.
NFS> share show
/vx/fs1 * (rw)
About concurrent access with NFS and S3
Veritas Access supports concurrent access to a shared file system or a directory
from both NFS and S3. The supported configurations are:
■ Applications or users write data to NFS shares, while other applications or users read the data over S3.
■ Applications or users write data to S3 shares, while other applications or users read the data over NFS.
Chapter 18
Creating and maintaining NFS shares
This chapter includes the following topics:
■ About NFS file sharing
■ Displaying file systems and snapshots that can be exported
■ Exporting an NFS share
■ Displaying exported directories
■ About managing NFS shares using netgroups
■ Unexporting a directory or deleting NFS options
■ Exporting an NFS share for Kerberos authentication
■ Mounting an NFS share with Kerberos security from the NFS client
■ Exporting an NFS snapshot
About NFS file sharing
The Network File System (NFS) protocol enables exported directories (including
all files under the directory that reside on the exported directory's file system) hosted
by an NFS server to be accessed by multiple UNIX and Linux client systems.
Using NFS, a local system can mount and use a disk partition or a file system from
a remote system (an NFS server), as if it were local. The Veritas Access NFS server
exports a directory, with selected permissions and options, and makes it available
to NFS clients.
The selected permissions and options can also be updated, by adding or removing
permissions, to restrict or expand the permitted use.
The Veritas Access NFS service is clustered. The NFS clients continuously retry
during a failover transition. Even if the TCP connection is broken for a short time,
the failover is transparent to NFS clients, and NFS clients regain access
transparently as soon as the failover is complete.
See “About using NFS server with Veritas Access” on page 104.
Displaying file systems and snapshots that can be exported
To display a file system and snapshots that can be exported
◆ To display online file systems and the snapshots that can be exported, enter the following:
NFS> show fs
For example:
NFS> show fs
FS/Snapshot
===========
fs1
Exporting an NFS share
You can export an NFS share with the specified NFS options that can then be
accessed by one or more client systems.
If you add a directory that has already been exported with a different NFS option
(rw, ro, async, or secure, for example), Veritas Access provides a warning message
saying that the directory has already been exported. Veritas Access updates (overwrites) the old NFS options with the new NFS options.
Directory options appear in parentheses.
If a client was not specified when the NFS> share add command was used, then
* is displayed as the system to be exported to, indicating that all clients can access
the directory.
Directories that have been exported to multiple clients appear as separate entries.
Directories that are exported to <world> and other specific clients also appear as
separate entries.
For example:
Consider the following set of exported directories where only the client (1.1.1.1)
has read-write access to directory (fs2), while all other clients have read access
only.
/vx/fs2    * (ro)
/vx/fs2    1.1.1.1 (rw)
Exporting the same directory to multiple clients with different permissions is not
supported with NFS-Ganesha.
When sharing a directory, Veritas Access does not check whether the client exists
or not. If you add a share for an unknown client, then an entry appears in the NFS>
show command output.
The NFS> show fs command displays the list of exportable file systems. If a
directory does not exist, the directory is automatically created and exported when
you try to export it.
Valid NFS options include the following:
rw
Grants read and write permission to the directory (including all
files under the directory that reside on the exported directory's file
system). Hosts mounting this directory will be able to make
changes to the directory.
ro (Default)
Grants read-only permission to the directory. Hosts mounting this
directory will not be able to change it.
sync (Default)
Grants synchronous write access to the directory. Forces the
server to perform a disk write before the request is considered
complete.
async
Grants asynchronous write access to the directory. Allows the
server to write data to the disk when appropriate.
secure (Default)
Grants secure access to the directory. Requires that clients
originate from a secure port. A secure port is between 1 and 1024.
insecure
Grants insecure access to the directory. Permits client requests
to originate from unprivileged ports (those above 1024).
secure_locks
(Default)
Requires authorization of all locking requests. This option is not
supported with NFS-Ganesha.
249
Creating and maintaining NFS shares
Exporting an NFS share
insecure_locks
Some NFS clients do not send credentials with lock requests, and
therefore work incorrectly with secure_locks, in which case you
can only lock world-readable files. If you have such clients, either
replace them with better ones, or use the insecure_locks
option. This option is not supported with NFS-Ganesha.
root_squash
(Default)
Prevents the root user on an NFS client from having root privileges
on an NFS mount.
This effectively "squashes" the power of the remote root user to
the lowest local user, preventing remote root users from acting as
though they were the root user on the local system.
no_root_squash
Disables the root_squash option. Allows root users on the NFS
client to have root privileges on the NFS server.
wdelay (Default)
Causes the NFS server to delay writing to the disk if another write
request is imminent. This can improve performance by reducing
the number of times the disk must be accessed by separate write
commands, reducing write overhead.
Note: The wdelay option is deprecated, and is supported for
backward-compatibility only.
This option is not supported with NFS-Ganesha.
no_wdelay
Disables the wdelay option.
The no_wdelay option has no effect if the async option is also
set.
Note: The no_wdelay option is deprecated, and is supported
for backward-compatibility only. Using the no_wdelay option is
always effective.
This option is not supported with NFS-Ganesha.
subtree_check
Verifies that the requested file is in an exported subdirectory. If
this option is turned off, the only verification is that the file is in an
exported file system. This option is not supported with
NFS-Ganesha.
no_subtree_check
(Default)
Sometimes subtree checking can produce problems when a
requested file is renamed while the client has the file open. If many
such situations are anticipated, it might be better to set
no_subtree_check. One such situation might be the export of
the home directory. Most other situations are best handled with
subtree_check. This option is not supported with NFS-Ganesha.
250
Creating and maintaining NFS shares
Exporting an NFS share
fsid (Default)
Allows the Veritas Access administrator to associate a specific
number as fsid with the share. This option is not supported with
NFS-Ganesha.
nordirplus
Allows you to disable a readdirplus remote procedure call (RPC).
sec
Specifies the Kerberos security options for exporting an NFS share.
The value can be krb5, krb5i, krb5p, or sys. The sys option
does not provide Kerberos authentication. The other options use
Kerberos V5 to authenticate users to the NFS server.
Note: With root_squash, the root user can access the share, but with 'nobody'
permissions.
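For instance, to combine options so that root on the client keeps root privileges on the mount (the file system name is illustrative):
NFS> share add rw,no_root_squash /vx/fs1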
To export a directory/file system
1
To see your exportable online file systems and snapshots, enter the following:
NFS> show fs
2
To see your NFS shares and their options, enter the following:
NFS> share show
3
To export a directory, enter the following command:
NFS> share add nfsoptions export_dir [client]
nfsoptions
Comma-separated list of export options from the set.
export_dir
Specifies the name of the directory you want to export.
The directory name should start with /vx, and only
a-zA-Z0-9_/@+=.:- characters are allowed for export_dir.
client
Clients may be specified in the following ways:
■ Single host - specify a host either by an abbreviated name that is recognized by the resolver (DNS is the resolver), the fully qualified domain name, or an IP address.
■ Netgroups - specify netgroups as @group. Only the host part of each netgroup member is considered for checking membership.
■ IP networks - specify an IP address and netmask pair (address/netmask) to simultaneously export directories to all hosts on an IP sub-network. Specify the netmask as a contiguous mask length. You can specify either an IPv4 address or an IPv6 address (see the example after this list).
If the client is not given, then the specified directory can be mounted or accessed by any client. To re-export new options to an existing share, the new options will be updated after the command is run.
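For example, the following command exports a directory read-write to every host on an IPv4 subnet (the network address shown is illustrative):
NFS> share add rw,sync /vx/fs1 10.10.10.0/24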
Displaying exported directories
You can display the exported directories and the NFS options that are specified
when the directory was exported. For NFS-Ganesha (GNFS), the output also shows
the virtual IP address that must be used on the client to mount the GNFS shares
for the shares that are exported from 'largefs' file systems.
To display exported directories
To display exported directories, enter the following:
NFS> share show
The command output displays the following columns:
Left-hand column     Displays the directory that was exported.
Right-hand column    Displays the system that the directory is exported to, and the NFS options with which the directory was exported.
For GNFS shares of scale-out file systems, the command output displays the following columns:
Left-hand column     Displays the directory that was exported.
Middle column        Displays the system that the directory is exported to, and the NFS options with which the directory was exported.
Right-hand column    Displays the virtual IP address that must be used on the client to mount the GNFS shares of scale-out file systems.
About managing NFS shares using netgroups
A netgroup defines a network-wide group of hosts and users. You use netgroups
for restricting access to shared NFS file systems and to restrict remote login and
shell access.
Each line in the netgroup file consists of a netgroup name followed by a list of
members, where a member is either another netgroup name or a triple of the form (host,user,domain). Host, user, and domain are character strings for the
corresponding components. Any of these three fields can be empty, which indicates
a wildcard, or may consist of the string "-" to indicate that there is no valid value for
the field. The domain field must either be the local domain name or empty for the
netgroup entry to be used. This field does not limit the netgroup or provide any
security. The domain field refers to the domain in which the host is valid, not the
domain containing the trusted host.
When exporting a directory by NFS with the specified options, clients may be
specified using netgroups. Netgroups are identified using @group. Only the host
part of each netgroup member is considered when checking for membership.
NFS> share add rw,async /vx/fs1/share @client_group
Unexporting a directory or deleting NFS options
You can unexport the share of the exported directory.
Note: You will receive an error message if you try to remove a directory that does
not exist.
To unexport a directory or delete NFS options
1
To see your existing exported resources, enter the following command:
NFS> share show
Only the directories that are displayed can be unexported.
For example:
NFS> share show
/vx/fs2    * (sync)
/vx/fs3    * (secure,ro,no_root_squash)
2
To delete a directory from the export path, enter the following command:
NFS> share delete export_dir [client]
For example:
NFS> share delete /vx/fs3
Removing export path *:/vx/fs3
..Success.
export_dir
Specifies the name of the directory you want to delete.
The directory name should start with /vx, and only
a-zA-Z0-9_/@+=.:- characters are allowed in export_dir.
You cannot include single or double quotes that do not enclose
characters.
NFS> share delete "*:/vx/example"
client
Clients may be specified in the following ways:
■ Single host - specify a host either by an abbreviated name that is recognized by the resolver (DNS is the resolver), the fully qualified domain name, or an IP address.
■ Netgroups - specify netgroups as @group. Only the host part of each netgroup member is considered for checking membership.
■ IP networks - specify an IP address and netmask pair (address/netmask) to simultaneously export directories to all hosts on an IP sub-network. Specify the netmask as a contiguous mask length.
If client is included, the directory is removed from the export path
that was directed at the client.
If a directory is being exported to a specific client, the NFS> share
delete command must specify the client to remove that export
path.
If the client is not specified, then the specified directory can be
mounted or accessed by any client.
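For example, to remove only the export of /vx/fs2 that was directed at a specific client, while leaving any other exports of that directory in place (the client address is illustrative):
NFS> share delete /vx/fs2 1.1.1.1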
Exporting an NFS share for Kerberos authentication
Kerberos provides three types of security options for exporting an NFS share:
■ krb5
■ krb5i
■ krb5p
Veritas Access also provides a sys (sec=sys) export option, which does not provide
Kerberos authentication. Veritas Access supports all of the three types of Kerberos
security options. All of the security options use Kerberos V5 to authenticate users
to NFS servers.
krb5i computes a hash on every remote procedure call (RPC) request to the server
and every response to the client. The hash is computed on an entire message:
RPC header, plus NFS arguments or results. Since the hash information travels
with the NFS packet, any attacker modifying the data in the packet can be detected.
Thus krb5i provides integrity protection.
krb5p uses encryption to provide privacy. With krb5p, NFS arguments and results
are encrypted, so a malicious attacker cannot spoof on the NFS packets and see
file data or metadata.
Note: Since krb5i and krb5p perform an additional set of computations on each
NFS packet, NFS performance decreases as compared with krb5.
Performance decreases in the following order: krb5 > krb5i > krb5p.
krb5 provides the best performance and krb5p provides the least.
Additional export options are available.
See “Exporting an NFS share ” on page 248.
To export a directory using only the krb5 mount option
◆ Export a directory using only the krb5 mount option:
NFS> share add sec=krb5 /vx/fs1
Exporting /vx/fs1 with options sec=krb5
Success.
To export a directory using krb5, krb5i, krb5p, and sys options
◆ Export a directory using krb5, krb5i, krb5p, and sys options.
NFS> share add sec=krb5:krb5i:krb5p:sys /vx/fs1
Exporting /vx/fs1 with options sec=krb5:krb5i:krb5p:sys
Success.
Different clients can use different levels of security in this case. Client A can mount with krb5, and client B can mount with krb5p. If no mount option is given on the client side, the security level is negotiated and the highest available level is chosen. In this case, it is krb5p.
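As an illustration, a client that wants integrity protection without full encryption could request krb5i explicitly at mount time (the server address and paths are placeholders):
# mount -o vers=4,sec=krb5i 10.209.107.24:/vx/fs1 /mnt/fs1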
Mounting an NFS share with Kerberos security from the NFS client
This section explains how to perform an NFS mount with the Kerberos mount options from the NFS client. This procedure assumes that the NFS service principal of the NFS client
is added to the KDC server, and the keytab is copied at the appropriate location on
the client.
The steps may differ depending on the operating system and version of the client.
On a Red Hat Enterprise Linux (RHEL) client, Kerberos can be configured as follows.
To mount the NFS client with the Kerberos mount options
1
Create the NFS service principal for the client on the KDC server and copy it
to the client system at /etc/krb5.keytab.
2
Configure the /etc/krb5.conf file with the KDC details.
3
Enable SECURE_NFS=yes in the /etc/sysconfig/nfs file.
4
Start the rpcgssd service.
# service rpcgssd start
5
Keep the clocks of the KDC server, the Veritas Access server, and the NFS
client in sync.
A maximum of a five-minute variation is accepted, or otherwise the Kerberos
NFS mount fails.
[root@krb-client]# mount -o vers=4,sec=krb5 10.209.107.24:/vx/fs2/share1 /mnt/share1
Make sure that the virtual IP that is used for mounting can use reverse name
lookup to the Veritas Access cluster name. For example, if access_ga is the
cluster name, then in the above example, access_ga should look up to
10.209.107.24 and vice versa. If the IP 10.209.107.24 can be looked up by
multiple host names, make sure that the entry access_ga is first in the reverse
lookup.
6
Make sure the users accessing the NFS share are already added on the KDC
server.
Use kinit to get the ticket granting ticket from the KDC server on the NFS
client.
[root@krb-client]# su - sfuuser2
[sfuuser2@krb-client ~]$ kinit
Password for [email protected]:
[sfuuser2@krb-client ~]$ cd /mnt/share1
[sfuuser2@krb-client share1]$ touch test.txt
[sfuuser2@krb-client share1]$
[sfuuser2@krb-client share1]$ ls -al
total 4
drwxrwxrwx 2 root root 96 May 14 16:03 .
drwxr-xr-x. 17 root root 4096 May 7 19:41 ..
-rw-r--r-- 1 sfuuser2 sfugroup1 0 May 14 16:03 test.txt
Exporting an NFS snapshot
To export an NFS snapshot
1
For example, to create an NFS snapshot, enter the following:
Storage> snapshot create fs5sp1 FS5
See “About snapshots” on page 375.
2
For example, to export the NFS snapshot, enter the following:
NFS> share add rw /vx/FS5:fs5sp1
See “Exporting an NFS share ” on page 248.
Chapter 19
Creating and maintaining CIFS shares
This chapter includes the following topics:
■ About managing CIFS shares
■ Exporting a directory as a CIFS share
■ Configuring a CIFS share as secondary storage for an Enterprise Vault store
■ Exporting the same file system/directory as a different CIFS share
■ About the CIFS export options
■ Setting share properties
■ Hiding system files when adding a CIFS normal share
■ Displaying CIFS share properties
■ Allowing specified users and groups access to the CIFS share
■ Denying specified users and groups access to the CIFS share
■ Exporting a CIFS snapshot
■ Deleting a CIFS share
■ Modifying a CIFS share
■ Making a CIFS share shadow copy aware
About managing CIFS shares
You can export the Veritas Access file systems to clients as CIFS shares. When a
share is created, it is given a name. The share name is different from the file system
name. Clients use the share name when they import the share.
Note: You cannot export a scale-out file system as a CIFS share.
You create and export a share with one command. The same command binds the
share to a file system, and you can also use it to specify share properties.
In addition to exporting file systems as CIFS shares, you can use Veritas Access
to store user home directories. Each of these home directories is called a home
directory share. Shares that are used to export ordinary file systems (that is, file
systems that are not used for home directories), are called ordinary shares to
distinguish them from home directory shares.
Exporting a directory as a CIFS share
Directory-level share support is available only in the ctdb clustering mode. If you
want to export a directory as a CIFS share, you must first switch to the ctdb clustering
mode.
See “About CIFS clustering modes” on page 155.
To check the status of the CIFS server to confirm that the clustering mode is
set to ctdb
◆ To check the status of the CIFS server to confirm that the clustering mode is set to ctdb, enter the following:
CIFS> server status
To export a directory as a CIFS share
1
To export a directory as a CIFS share, enter the following:
CIFS> share add fs1/access share1 rw,full_acl
If the directory name contains a space, enter the directory name with double
quotes (" ").
2
To list the CIFS shares, enter the following:
CIFS> share show
Configuring a CIFS share as secondary storage for an Enterprise Vault store
You can use Veritas Access as secondary storage with Enterprise Vault 12.0 by
exporting the file system over the CIFS protocol.
Note: Before configuring the CIFS share path as secondary storage, you need to
verify that the CIFS share path is accessible. Confirm that I/O operations can occur
on the CIFS share.
Configuring a CIFS share as secondary storage for an Enterprise Vault store
1
On the Veritas Access cluster, you export the file system over the CIFS protocol
using the following CIFS export options: fs_mode=1777,rw,full_acl.
See “About the CIFS export options” on page 262.
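For example, assuming a file system named ev_fs and a share named ev_share (both names are illustrative), the export could look like the following:
CIFS> share add ev_fs ev_share fs_mode=1777,rw,full_acl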
2
On the Enterprise Vault server, open the Enterprise Vault console.
3
Right-click on the partition that is created on Vault Store > Properties.
Enterprise Vault brings up the Vault Store Partition Properties window.
4
In the Vault Store Partition Properties window, select the Migration tab.
5
Specify the path of the CIFS share in the Secondary storage location text
box.
Example:
\\IP address of the CIFS share\name of file system
6
Press Apply.
Exporting the same file system/directory as a different CIFS share
In ctdb clustering mode, you can export the same file system or directory as a
different CIFS share with different available CIFS options. This feature allows you more granular control over CIFS shares for different sets of users.
If the same file system is exported as different shares in ctdb clustering mode, then
after switching to normal clustering mode only one share out of these is available.
Note: If the same file system or directory is exported as different shares, then the
fs_mode value is the same for all of these shares; that is, the last modified fs_mode
value is applicable for all of those shares.
Note: This feature is only supported in the ctdb clustering mode.
To export a directory with read access to everyone, but write access to the
limited set of users who need to be authenticated
◆ To export a directory with read access to everyone, but write access to the limited set of users who need to be authenticated, enter the following:
CIFS> share add "fs1/Veritas isa" share1 rw,noguest
CIFS> share add "fs1/Veritas isa" share2 ro,guest
CIFS> share show
The above example illustrates that the same directory is exported as a different CIFS share for guest and noguest users with different sets of permissions.
About the CIFS export options
The following are the CIFS export options.
Table 19-1    CIFS export options
rw
There is a share option which specifies if the files in the share will
be read-only or if both read and write access will be possible,
subject to the authentication and authorization checks when a
specific access is attempted. This share option can be given one
of these values, either rw or ro.
Grants read and write permission to the exported share.
ro (Default)
Grants read-only permission to the exported share. Files cannot
be created or modified.
guest
Another configuration option specifies if a user trying to establish
a CIFS connection with the share must always provide the user
name and password, or if they can connect without it. In this case,
only restricted access to the share will be allowed. The same kind
of access is allowed to anonymous or guest user accounts. This
share option can have one of the following values, either guest or
noguest.
Veritas Access allows restricted access to the share when no user
name or password is provided.
noguest (Default)
Veritas Access always requires the user name and password for
all of the connections to this share.
full_acl
All Windows Access Control Lists (ACLs) are supported except in
the case when you attempt using the Windows Explorer folder
Properties > Security GUI to inherit down to a non-empty directory
hierarchy while denying all access to yourself.
no_full_acl (Default)
Some advanced Windows Access Control Lists (ACLs) functionality
does not work. For example, if you try to create ACL rules on files
saved in a CIFS share using Windows explorer while allowing
some set of file access for user1 and denying file access for
user2, this is not possible when CIFS shares are exported using
no_full_acl.
hide_unreadable
Prevents clients from seeing the existence of files and directories
that are not readable to them.
The default is: hide_unreadable is set to off.
veto_sys_files
To hide some system files (lost+found, quotas, quotas.grp) from
displaying when using a CIFS normal share, you can use the
veto_sys_files CIFS export option. For example, when adding
a CIFS normal share, the default is to display the system files. To
hide the system files, you must use the veto_sys_files CIFS
export option.
fs_mode
When a file system or directory is exported by CIFS, its mode is
set to an fs_mode value. It is the UNIX access control set on a
file system, and CIFS options like rw/ro do not take precedence
over it. This value is reset to 0755 when the CIFS share is deleted.
The default is: fs_mode = 1777.
dir_mask
When a directory is created under a file system or directory
exported by CIFS, the necessary permissions are calculated by
mapping DOS modes to UNIX permissions. The resulting UNIX
mode is then bit-wise 'AND'ed with this parameter. Any bit not set
here is removed from the modes set on a directory when it is
created.
The default is: dir_mask = 0775.
create_mask
When a file is created under a file system or directory exported
by CIFS, the necessary permissions are calculated by mapping
DOS modes to UNIX permissions. The resulting UNIX mode is
then bit-wise 'AND'ed with this parameter. Any bit not set here is
removed from the modes set on a file when it is created.
The default is: create_mask = 0775.
oplocks (Default)
Veritas Access supports the CIFS opportunistic locks. You can
enable or disable them for a specific share. The opportunistic locks
improve performance for some workloads, and there is a share
configuration option which can be given one of the following values,
either oplocks or nooplocks.
Veritas Access supports opportunistic locks on the files in this
share.
nooplocks
No opportunistic locks will be used for this share.
Disable the oplocks when:
■ A file system is exported over both CIFS and NFS protocols.
■ Either CIFS or NFS protocol has read and write access.
owner
There are more share configuration options that can be used to
specify the user and group who own the share. If you do not specify
these options for a share, Veritas Access uses the current values
as default values for these options. You may want to change the
default values to allow a specific user or group to be the share
owner.
Irrespective of who the owner and group of the exported share are, any CIFS clients can create folders and files in the share. However,
there are some operations that require owner privileges; for
example, changing the owner itself, and changing permissions of
the top-level folder (that is, the root directory in UNIX terms). To
enable these operations, you can set the owner option to a specific
user name, and this user can perform the privileged operations.
group
By default, the current group is the primary group owner of the
root directory of the exported share. This lets CIFS clients create
folders and files in the share. However, there are some operations
that require group privileges; for example, changing the group
itself, and changing permissions of the top-level folder (that is, the
root directory in UNIX terms). To enable these operations, you
can set the group option to a specific group name, and this group
can perform the privileged operations.
ip
Veritas Access lets you specify a virtual IP address. If you set
ip=virtualip, the share is located on the specified virtual IP
address. This address must be part of the Veritas Access cluster,
and is used by the system to serve the share internally.
Note: ip is not a valid CIFS option when using the ctdb clustering
mode.
See “About CIFS clustering modes” on page 155.
max_connections
Specify the maximum limit for concurrent CIFS connections for a
CIFS share.
The default value is 0, indicating that there is no limit on the number of connections.
shadow_copy
Indicates that this is a shadow_copy capable CIFS share.
See “Making a CIFS share shadow copy aware” on page 272.
enable_encryption
If enable_encryption is set, then all the traffic to a share must
be encrypted once the connection has been made to the share.
The server will return an access denied message to all
unencrypted requests on such a share. Because encryption requires SMB3, only SMB3 clients that support encryption are able to connect to the share.
disable_encryption
If disable_encryption is set, then encryption cannot be
negotiated by the client. SMB1, SMB2, and SMB3 clients can
connect to the share.
enable_durable_handles
Enables support for durable handles for CIFS shares. Enabling this option disables use of POSIX/fcntl locks. Exporting the same CIFS share using NFS may result in data corruption. For support for durable handles on CIFS shares, you must specify this option.
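As an illustration, several of these options can be combined on a single export; the file system name, share name, and connection limit below are placeholders:
CIFS> share add fs1 share1 rw,noguest,max_connections=100,enable_encryption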
Setting share properties
After a file system is exported as a CIFS share, you can change one or more share
options. This is done using the same share add command, giving the name of an
existing share and the name of the file system exported with this share. Veritas
Access will realize the given share has already been exported and that it is only
required to change the values of the share options.
For example, to export the file system fs1 with the name share1, enter the following:
CIFS> share add fs1 share1 "owner=administrator,group=domain users,rw"
CIFS> share show
To export a file system
◆ To export a file system, enter the following:
CIFS> share add filesystem sharename [@virtual_ip] [cifsoptions]
filesystem
A Veritas Access file system that you want to export as a CIFS share.
The given file system must not be currently used for storing the home
directory shares.
The file system or directory path should always start with the file system
name, not with the file system mount point /vx.
sharename
The name for the newly-exported share. Names of the Veritas Access
shares can consist of the following characters: lower and uppercase
letters "a" - "z" and "A" - "Z," numbers "0" - "9" and special characters:
"_" and "-". ( "-" cannot be used as the first character in a share name).
Note: A share name cannot exceed 256 characters.
@virtual_ip
Specifies an optional full identifier allowing a virtual IP to access the
specified CIFS share.
Veritas Access provides unified access to all shares through virtual IPs.
If a CIFS share is added with the @virtual_ip full identifier, the CIFS
share is created by allowing only this virtual IP to access this CIFS
share.
CIFS> share show
cifsoptions
A comma-separated list of CIFS export options. This part of the
command is optional.
If a CIFS export option is not provided, Veritas Access uses the default
value.
See “About the CIFS export options” on page 262.
For example, an existing file system called FSA being exported as a share called
ABC:
CIFS> share add FSA ABC rw,guest,owner=john,group=abcdev
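Similarly, to make the same share reachable only through one of the cluster's virtual IPs, the @virtual_ip identifier can be appended to the share name; the address below is a placeholder:
CIFS> share add FSA ABC@10.200.107.150 rw,guest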
Hiding system files when adding a CIFS normal share
When adding a CIFS normal share, the default is to display the system files
(lost+found, quotas, quotas.grp). To hide the system files, you must use the
veto_sys_files CIFS export option.
See “About the CIFS export options” on page 262.
To hide system files when adding a CIFS normal share
◆ To hide system files when adding a CIFS normal share, enter the following:
CIFS> share add filesystem sharename [cifsoption]
Use the veto_sys_files CIFS export option to hide system files.
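For example, a share exported as follows would not display lost+found, quotas, or quotas.grp to CIFS clients (the file system and share names are illustrative):
CIFS> share add fs1 share1 rw,veto_sys_files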
Displaying CIFS share properties
To display share properties
1
To display the information about all of the exported shares, enter the following:
CIFS> share show
2
To display the information about one specific share, enter the following:
CIFS> share show sharename
Allowing specified users and groups access to the CIFS share
To allow specified users and groups access to the CIFS share
◆ To allow specified users and groups access to the CIFS share, enter the following:
CIFS> share allow sharename @group1[,@group2,user1,user2,...]
sharename
Name of the CIFS share for which you want to allow specified users and groups access.
Names of the Veritas Access shares are not case sensitive and can consist of the following characters: lower and uppercase letters "a" - "z" and "A" - "Z," numbers "0" - "9" and special characters: "_" and "-". ( "-" cannot be used as the first character in a share name).
group
If the CIFS server joined a domain, and there is a space in the user or group name, the user or
group name needs to be entered with double quotes (for example, "@domain users").
By default, all groups are allowed to access the shares.
In the case where a CIFS share has joined a domain, and the domain contains trusted domains,
and allow_trusted_domains is set to yes on the CIFS server, if you want to allow/deny users
or groups from the trusted domains, the user or group needs to be prefixed with the trusted
domain name. Separate the domain and user/group with a double backslash.
For example:
CIFS> share allow sharename
"@domain name\\group name"
user
Name of the CIFS user allowed access to the CIFS share.
By default, all users are allowed to access the shares.
If all is specified, then default access restrictions are restored on the CIFS
share.
CIFS> share allow share1 user1,@group1
Denying specified users and groups access to the CIFS share
To deny specified users and groups access to the CIFS share
◆ To deny specified users and groups access to the CIFS share, enter the following:
CIFS> share deny sharename @group1[,@group2,user1,user2,...]
sharename
Name of the CIFS share for which you want to deny specified users and groups access.
Names of the Veritas Access shares are not case sensitive and can consist of the following characters: lower and uppercase letters "a" - "z" and "A" - "Z," numbers "0" - "9" and special characters: "_" and "-". ( "-" cannot be used as the first character in a share name).
group
If the CIFS server joined a domain, and there is a space in the user or group name, the user or
group name needs to be entered with double quotes (for example, "@domain users").
By default, all groups are allowed to access the shares.
In the case where a CIFS share has joined a domain, and the domain contains trusted domains,
and allow_trusted_domains is set to yes on the CIFS server, if you want to allow/deny users or groups from the
trusted domains, the user or group needs to be prefixed with the trusted domain name. Separate
the domain and user/group with a double backslash.
For example:
CIFS> share deny sharename
"@domain name\\user name"
user
Name of the CIFS user denied access to the CIFS share.
By default, all users are allowed to access the shares.
If all is specified, then all the users and groups are not able to access the share.
CIFS> share deny share1 user1,@group1
Exporting a CIFS snapshot
To export a CIFS snapshot
1
To create a CIFS snapshot, enter the following for example:
Storage> snapshot create cf11sp1 CF11
See “About snapshots” on page 375.
2
To export the CIFS snapshot, enter the following for example:
CIFS> share add CF11:cf11sp1 cf11sp1 rw,guest
A client can access the CIFS snapshot by the CIFS share name, cf11sp1.
Deleting a CIFS share
To delete a CIFS share
1
To delete a share, enter the following:
CIFS> share delete sharename [@virtual_ip]
sharename
Specifies the name of the share that you want to delete.
@virtual_ip
Specifies an optional full identifier allowing a virtual IP to access
the specified CIFS share.
For example:
CIFS> share delete share1
2
To confirm the share is no longer exported, enter the following:
CIFS> share show
In the case of any remnant sessions (sessions that are not closed while deleting a CIFS share), Veritas Access displays the following output:
CIFS> share delete share2
This is a rare situation, and it occurs if the following conditions are met:
■ CIFS server is online
■ CIFS share that is being deleted is ONLINE
■ There are some existing client connections with that CIFS share
■ While deleting the share, some remnant sessions are left
If any of the above conditions is not met, then the CIFS> share delete command output displays as usual.
CIFS> share delete share2
Modifying a CIFS share
You can re-export the file system with the given share name. The new options are
updated after the command is run.
To modify a CIFS share
◆ To modify a CIFS share, enter the following:
CIFS> share modify sharename[@virtual_ip] [cifsoptions]
sharename
The name of the CIFS share that you want to modify.
Names of the Veritas Access shares can consist of the following
characters: lower and uppercase letters "a" - "z" and "A" - "Z,"
numbers "0" - "9" and special characters: "_" and "-". ( "-" cannot
be used as the first character in a share name).
@virtual_ip
Specifies an optional full identifier allowing a virtual IP to access
the specified CIFS share.
Veritas Access provides unified access to all shares through virtual
IPs.
cifsoptions
A comma-separated list of CIFS export options.
If a CIFS export option is not provided, Veritas Access uses the
default value.
See “About the CIFS export options” on page 262.
For example:
CIFS> share modify share2 ro,full_acl
CIFS> share show
Making a CIFS share shadow copy aware
Shadow Copy (Volume Snapshot Service or Volume Shadow Copy Service or VSS)
is a technology included in Microsoft Windows that allows taking manual or automatic
backup copies or snapshots of data on a specific volume at a specific point in time
over regular intervals.
To make a CIFS share shadow copy aware
◆ Add the CIFS export option shadow_copy to the CIFS share.
For example:
CIFS> share add fs1 share1 rw,shadow_copy
CIFS> share show share1
See “About the CIFS export options” on page 262.
Chapter 20
Using Veritas Access with OpenStack
This chapter includes the following topics:
■ About the Veritas Access integration with OpenStack
■ About the Veritas Access integration with OpenStack Cinder
■ About the Veritas Access integration with OpenStack Manila
About the Veritas Access integration with OpenStack
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources in a data center. OpenStack provides a
dashboard that lets you provision resources using a web interface.
Veritas Access is integrated with the following OpenStack components:
■ Cinder - is a block storage service for OpenStack. Cinder provides the infrastructure for managing volumes in OpenStack. Cinder volumes provide persistent storage to guest virtual machines (known as instances), which are managed by the OpenStack compute software. Cinder enables OpenStack instances to use the storage hosted by Veritas Access.
  See “About the Veritas Access integration with OpenStack Cinder” on page 274.
■ Manila - lets you share Veritas Access file systems with virtual machines on OpenStack.
  See “About the Veritas Access integration with OpenStack Manila” on page 283.
About the Veritas Access integration with OpenStack Cinder
Cinder is a block storage service for OpenStack. Cinder provides the infrastructure
for managing volumes in OpenStack. Cinder volumes provide persistent storage to guest virtual machines (known as instances), which are managed by the OpenStack compute software.
Veritas Access is integrated with OpenStack Cinder, which provides the ability for
OpenStack instances to use the storage hosted by Veritas Access.
Table 20-1    Mapping of OpenStack Cinder operations to Veritas Access

Operation in OpenStack Cinder                Operation in Veritas Access
Create and delete volumes                    Create and delete files.
Attach and detach the volumes to             This operation occurs on the OpenStack controller node.
virtual machines                             This operation is not applicable in Veritas Access.
Create and delete snapshots of the volumes   Create and delete the snapshot files of the volume.
Create a volume from a snapshot              This operation occurs on the OpenStack controller node.
                                             This operation is not applicable in Veritas Access.
Copy images to volumes                       This operation occurs on the OpenStack controller node.
                                             This operation is not applicable in Veritas Access.
Copy volumes to images                       This operation occurs on the OpenStack controller node.
                                             This operation is not applicable in Veritas Access.
Extend volumes                               Extending files.
Note: To perform these operations, you need to use the OpenStack Cinder
commands, not the Veritas Access commands.
The Veritas NFS OpenStack Cinder driver is a Python script that is checked in to
the OpenStack source code in the public domain. To use the Veritas Access
integration with OpenStack Cinder, you need to make some configuration changes
on the OpenStack controller node.
For the supported OpenStack versions for running the OpenStack Cinder driver,
see the Veritas Access Installation Guide.
About the Veritas Access integration with OpenStack Cinder architecture
Figure 20-1 describes the Veritas Access integration with OpenStack Cinder
architecture.
OpenStack instances are the individual virtual machines running on physical compute
nodes. The compute service, Nova, manages the OpenStack instances.
Figure 20-1    Veritas Access integration with OpenStack Cinder architecture
(The figure shows OpenStack instances Instance1 (/dev/vdb), Instance2 (/dev/vdc), and Instance3 (/dev/vdb) managed by Nova on Red Hat OpenStack. OpenStack Cinder uses the Veritas NFS Cinder driver to reach, over NFS, the Veritas Access cluster consisting of Cluster File System Node A and Cluster File System Node B backed by shared storage.)
Configuring Veritas Access with OpenStack Cinder
To show all your NFS shares
◆ To show all your NFS shares that are exported from Veritas Access, enter the following:
OPENSTACK> cinder share show
For example:
OPENSTACK> cinder share show
/vx/fs1     *(rw,no_root_squash)
OPENSTACK> cinder share show
/vx/o_fs    2001:21::/120 (rw,sync,no_root_squash)
To share and export a file system
◆ To share and export a file system, enter the following:
OPENSTACK> cinder share add export-dir world|client
After issuing this command, OpenStack Cinder will be able to mount the
exported file system using NFS.
export-dir
Specifies the path of the directory that needs to be exported
to the client.
The directory path should start with /vx and only the following characters are allowed: 'a-zA-Z0-9_/@+=.:-'
world
Specifies if the NFS export directory is intended for everyone.
client
Exports the directory with the specified options.
Clients may be specified in the following ways:
■ Single host
  Specify a host either by an abbreviated name recognized by the resolver, the fully qualified domain name, or an IP address.
■ Netgroups
  Netgroups may be given as @group. Only the host part of each netgroup member is considered when checking for membership.
■ IP networks
  You can simultaneously export directories to all hosts on an IP (sub-network). This is done by specifying an IP address and netmask pair as address/netmask where the netmask can be specified as a contiguous mask length. IPv4 or IPv6 addresses can be used.
To re-export new options to an existing share, the new options
will be updated after the command is run.
For example:
OPENSTACK> cinder share add /vx/fs1 world
Exporting /vx/fs1 with options rw,no_root_squash
OPENSTACK> cinder share add /vx/o_fs 2001:21::/120
Exporting /vx/o_fs with options rw,sync,no_root_squash
Success.
To delete the exported file system
◆ To delete (or unshare) the exported file system, enter the following:
OPENSTACK> cinder share delete export-dir client
For example:
OPENSTACK> cinder share delete /vx/fs1 world
Removing export path *:/vx/fs1
Success.
To start or display the status of the OpenStack Cinder service
1
To start the OpenStack Cinder service, enter the following:
OPENSTACK> cinder service start
The OPENSTACK> cinder service start command needs the NFS service
to be up for exporting any mount point using NFS. The OPENSTACK> cinder
service start command internally starts the NFS service by running the
command NFS> server start if the NFS service has not been started. There
is no OPENSTACK> cinder service stop command. If you need to stop NFS
mounts from being exported, use the NFS> server stop command.
For example:
OPENSTACK> cinder service start
..Success.
2
To display the status of the OpenStack Cinder service, enter the following:
OPENSTACK> cinder service status
For example:
OPENSTACK> cinder service status
NFS Status on access_01 : ONLINE
NFS Status on access_02 : ONLINE
To display configuration changes that need to be done on the OpenStack
controller node
◆ To display all the configuration changes that need to be done on the OpenStack controller node, enter the following:
OPENSTACK> cinder configure export-dir
export-dir
Specifies the path of the directory that needs to be exported
to the client.
The directory path should start with /vx and only the following characters are allowed: 'a-zA-Z0-9_/@+=.:-'
For example:
OPENSTACK> cinder configure /vx/fs1
To create a new volume backend named ACCESS_HDD in OpenStack Cinder
1
Add the following configuration block in the /etc/cinder/cinder.conf file on
your OpenStack controller node.
enabled_backends=access-1
[access-1]
volume_driver=cinder.volume.drivers.veritas_cnfs.VeritasCNFSDriver
volume_backend_name=ACCESS_HDD
nfs_shares_config=/etc/cinder/access_share_hdd
nfs_mount_point_base=/cinder/cnfs/cnfs_sata_hdd
nfs_sparsed_volumes=True
nfs_disk_util=df
nfs_mount_options=nfsvers=3
Add the lines from the configuration block at the bottom of the file.
volume_driver
Name of the Veritas Access Cinder driver.
volume_backend_name
For this example, ACCESS_HDD is used.
This name can be different for each NFS share.
If several backends have the same name, the
OpenStack Cinder scheduler decides in which
backend to create the volume.
nfs_shares_config
This file has the share details in the form of
vip:/exported_dir.
nfs_mount_point_base
Mount point where the share will be mounted on
OpenStack Cinder.
If the directory does not exist, create it. Make sure
that the Cinder user has write permission on this
directory.
nfs_sparsed_volumes
Preallocate or sparse files.
nfs_disk_util
Free space calculation.
nfs_mount_options
These are the mount options OpenStack Cinder
uses to NFS mount.
This same configuration information for adding to the /etc/cinder/cinder.conf file can be obtained by running the OPENSTACK> cinder configure export-dir command.
2
Append the following in the /etc/cinder/access_share_hdd file on your
OpenStack controller node:
vip:/vx/fs1
Use one of the virtual IPs for vip:
■ 192.1.1.190
■ 192.1.1.191
■ 192.1.1.192
■ 192.1.1.193
■ 192.1.1.199
You can obtain Veritas Access virtual IPs using the OPENSTACK> cinder
configure export-dir option.
3
Create the /etc/cinder/access_share_hdd file at the root prompt, and update
it with the NFS share details.
# cnfs_sata_hdd(keystone_admin)]# cat /etc/cinder/access_share_hdd
192.1.1.190:/vx/fs1
4
The Veritas Access package includes the Veritas Access OpenStack Cinder
driver, which is a Python script. The OpenStack Cinder driver is located at
/opt/VRTSnas/scripts/OpenStack/veritas_cnfs.py on the Veritas Access
node. Copy the veritas_cnfs.py file to
/usr/lib/python2.6/site-packages/cinder/volume/drivers/veritas_cnfs.py
if you are using the Python 2.6 release.
If you are using the OpenStack Kilo version of RDO, the file is located at:
/usr/lib/python2.7/site-packages/cinder/volume/drivers/veritas_cnfs.py
5
Make sure that the NFS mount point on the OpenStack controller node has
the right permission for the cinder user. The cinder user should have write
permission on the NFS mount point. Set the permission using the following
command.
# setfacl -m u:cinder:rwx /cinder/cnfs/cnfs_sata_hdd
# sudo chmod -R 777 /cinder/cnfs/cnfs_sata_hdd
6
Give required permissions to the /etc/cinder/access_share_hdd file.
# sudo chmod -R 777 /etc/cinder/access_share_hdd
7
Restart the OpenStack Cinder driver.
# cnfs_sata_hdd(keystone_admin)]# /etc/init.d/openstack-cinder-volume restart
Stopping openstack-cinder-volume: [ OK ]
Starting openstack-cinder-volume: [ OK ]
Restarting the OpenStack Cinder driver picks up the latest configuration file
changes.
After restarting the OpenStack Cinder driver, /vx/fs1 is NFS-mounted as per
the instructions provided in the /etc/cinder/access_share_hdd file.
# cnfs_sata_hdd(keystone_admin)]# mount |grep /vx/fs1
192.1.1.190:/vx/fs1 on
cnfs_sata_hdd/e6c0baa5fb02d5c6f05f964423feca1f type nfs
(rw,nfsvers=3,addr=10.182.98.20)
You can obtain OpenStack Cinder log files by navigating to:
/var/log/cinder/volume.log
8
If you are using OpenStack RDO, use these steps to restart the OpenStack
Cinder driver.
Log in to the OpenStack controller node.
For example:
source /root/keystonerc_admin
Restart the services using the following command:
(keystone_admin)]# openstack-service restart openstack-cinder-volume
For more information, refer to the OpenStack Administration Guide.
9
On the OpenStack controller node, create a volume type named va_vol_type.
This volume type is used to link to the volume backend.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]#
cinder type-create va_vol_type
+--------------------------------------+-------------+
|                  ID                  |     Name    |
+--------------------------------------+-------------+
| d854a6ad-63bd-42fa-8458-a1a4fadd04b7 | va_vol_type |
+--------------------------------------+-------------+
10 Link the volume type with the ACCESS_HDD back end.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder type-key
va_vol_type set volume_backend_name=ACCESS_HDD
11 Create a volume of size 1 GB.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder create --volume-type
va_vol_type --display-name va_vol1 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-08T01:47:25.726803      |
| display_description |                 None                 |
|     display_name    |               va_vol1                |
|          id         |              disk ID 1               |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |             va_vol_type              |
+---------------------+--------------------------------------+
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder list
+-----------+-----------+--------------+------+-------------+----------+-------------+
|     ID    |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------+-----------+--------------+------+-------------+----------+-------------+
| disk ID 1 | available |   va_vol1    |  1   | va_vol_type |  false   |             |
+-----------+-----------+--------------+------+-------------+----------+-------------+
12 Extend the volume to 2 GB.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder extend va_vol1 2
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder list
+-----------+-----------+--------------+------+-------------+----------+-------------+
|     ID    |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+-----------+-----------+--------------+------+-------------+----------+-------------+
| disk ID 1 | available |   va_vol1    |  2   | va_vol_type |  false   |             |
+-----------+-----------+--------------+------+-------------+----------+-------------+
13 Create a snapshot.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder snapshot-create
--display-name va_vol1-snap va_vol1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|      created_at     |      2014-02-08T01:51:17.362501      |
| display_description |                 None                 |
|     display_name    |             va_vol1-snap             |
|          id         |              disk ID 1               |
|       metadata      |                  {}                  |
|         size        |                  2                   |
|        status       |               creating               |
|      volume_id      | 52145a91-77e5-4a68-b5e0-df66353c0591 |
+---------------------+--------------------------------------+
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder snapshot-list
+-----------+--------------------------------------+-----------+--------------+------+
|     ID    |              Volume ID               |   Status  | Display Name | Size |
+-----------+--------------------------------------+-----------+--------------+------+
| disk ID 1 | 52145a91-77e5-4a68-b5e0-df66353c0591 | available | va_vol1-snap |  2   |
+-----------+--------------------------------------+-----------+--------------+------+
14 Create a volume from a snapshot.
[root@c1059-r720xd-111046 cnfs_sata_hdd(keystone_admin)]# cinder create
--snapshot-id e9dda50f-1075-407a-9cb1-3ab0697d274a --display-name va-vol2 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-08T01:57:11.558339      |
About the Veritas Access integration with OpenStack Manila
OpenStack Cinder had the limitation of not being able to share a block device
simultaneously between virtual machines. OpenStack Manila solves this problem.
OpenStack Manila provides a shared file system as a service. Using OpenStack
Manila, you can share a single file system between multiple virtual machines.
Veritas Access is integrated with OpenStack Manila through an OpenStack Manila
driver that lets you share Veritas Access file systems with virtual machines on
OpenStack.
For the supported OpenStack versions for running the OpenStack Manila driver,
see the Veritas Access Installation Guide.
The OpenStack Manila driver can create and manage simple file systems. For the
backend to create simple file systems, use va_fstype=simple in the manila.conf
file.
OpenStack Manila use cases
From the OpenStack controller node, an OpenStack administrator can do the
following:
■ Create and delete file systems.
■ Allow and deny file system access to specific virtual machines.
■ Provide IP-based access control.
■ Create and delete snapshots of the file system.
■ Provide free space statistics.
■ NFS-based access of the shares from the instances.
Configuring Veritas Access with OpenStack Manila
To configure Veritas Access with OpenStack Manila
1
Export the pool to Manila.
OPENSTACK> manila resource export pool1
ACCESS Manila SUCCESS V-288-0 Pool exported to Manila
2
Enter the following command to configure the pool with Manila.
OPENSTACK> manila configure pool1
To create a new share backend va-share1 in Manila
------------------------------------------
Make the following changes on OpenStack controller node and restart the
Manila driver. Add the following configuration entries in
/etc/manila/manila.conf file:
In the [DEFAULT] section:
#####
enabled_share_backends=va-share1
#####
At the end of all sections:
#####
[va-share1]
share_driver= manila.share.drivers.veritas.veritas_isa.ACCESSShareDriver
driver_handles_share_servers = False
share_backend_name = va-share1
va_server_ip = 10.209.106.144
va_port = 14161
va_ssl = False
va_fstype = simple
va_user = root
va_pwd = password
va_pool = pool1
#####
3
Enter the following command to display the resources which are created by
Manila.
OPENSTACK> manila resource list
Pools exported to Manila: pool1
FS created by Manila:
FS snapshots created by Manila:
NFS shares exported by Manila:
Creating a new share backend on the OpenStack controller node
A backend is an instance of the OpenStack Manila share service, which is defined
in a section of the manila.conf file. Each backend has exactly one driver.
To create a new share backend va-share1 in OpenStack Manila, make the following
changes on the OpenStack controller node, and restart the OpenStack Manila
driver.
To create a new share backend on the OpenStack controller node
1
On the OpenStack controller node, add the following configuration entries in
the OpenStack Manila /etc/manila/manila.conf file.
■ In the DEFAULT section, add the following:
#####
enabled_share_backends=va-share1
#####
If the entry generic1 is already there, add the va-share1 entry after a
comma. For example:
enabled_share_backends = generic1,va-share1
■ At the end of all sections in the /etc/manila/manila.conf file, add the
following configuration entries:
#####
[va-share1]
share_driver= manila.share.drivers.veritas.veritas_isa.VeritasShareDriv
driver_handles_share_servers = False
share_backend_name = va-share1
va_server_ip = 10.182.96.179
va_port = 14161
va_ssl = False
va_fstype = simple
va_user = root
va_pwd = password
va_pool = pool1
#####
The following table describes the options.
share_backend_name   Name of the share backend. This name can be different
                     for each share backend.
share_driver         OpenStack Manila driver name.
va_server_ip         Console IP address of the Veritas Access cluster.
va_port              The port on Veritas Access to which the Manila driver
                     connects (14161).
va_ssl               SSL certificate on the REST server.
va_fstype            Type of file system to be created on the specified pool.
                     It can be simple.
va_user              Root user name.
va_pwd               Root password.
va_pool              Existing storage pool on Veritas Access from which the
                     file systems are to be created.
You use the OPENSTACK> manila configure command to display the
configuration changes that need to be made on the OpenStack controller
node.
2
Restart the OpenStack Manila services.
The restart is on the OpenStack controller node, not on Veritas Access.
Creating an OpenStack Manila share type
An OpenStack Manila share type is an administrator-defined type of service that is
used by the Manila scheduler to make scheduling decisions. OpenStack tenants
can list share types and then use them to create new shares.
To create an OpenStack Manila share type
◆ On the OpenStack controller node, create a share type for va-backend1 and
va-backend2.
manila@C4110-R720xd-111045:~/OpenStack$ manila type-create va-backend1
False
To associate the share type to a share backend
◆ On the OpenStack controller node, associate the share type to a share backend.
manila@C4110-R720xd-111045:~/OpenStack$ manila type-key va-backend1 set
driver_handles_share_servers=false share_backend_name=va-share1
manila@C4110-R720xd-111045:~/OpenStack$ manila type-key va-backend2
set driver_handles_share_servers=false share_backend_name=va-share2
Creating an OpenStack Manila file share
An OpenStack Manila file share is equivalent to a file system in Veritas Access.
You can create an OpenStack Manila file share on the OpenStack controller node.
To create an OpenStack Manila file share on the OpenStack controller node
1
On the OpenStack controller node, if you want to create two OpenStack
Manila file shares called prod_fs and finance_fs of size 1 GB accessible
over NFS, enter the following:
One of the file shares resides on va-backend1, and one of the file shares
resides on va-backend2.
manila@C4110-R720xd-111045:~/OpenStack$ manila create --name prod_fs
--share-type va-backend1 NFS 1
manila@C4110-R720xd-111045:~/OpenStack$ manila create --name finance_fs
--share-type va-backend2 NFS 1
Use the manila list command to see how the file shares look on the
OpenStack controller node.
You can see how the file systems look on Veritas Access as part of the share
creation process.
2
Give prod_fs read-write access to 10.182.111.84.
manila@C4110-R720xd-111045:~/OpenStack$ manila access-allow --access-level rw
ecba1f14-86b0-4460-a286-a7e938162fb4 ip 10.182.111.84
+--------------+--------------------------------------+
|   Property   |                Value                 |
+--------------+--------------------------------------+
|   share_id   | ecba1f14-86b0-4460-a286-a7e938162fb4 |
|   deleted    |                False                 |
|  created_at  |      2015-04-28T17:59:45.514849      |
|  updated_at  |                 None                 |
| access_type  |                  ip                  |
|  access_to   |            10.182.111.84             |
| access_level |                  rw                  |
|    state     |                 new                  |
|  deleted_at  |                 None                 |
|      id      | 8a1c2d0b-a3fc-4405-a8eb-939adb8799db |
+--------------+--------------------------------------+
In the manila access-allow command, you can get the ID
(ecba1f14-86b0-4460-a286-a7e938162fb4) from the output of the manila
list command.
3
Give finance_fs read-write access to 10.182.111.81.
manila@C4110-R720xd-111045:~/OpenStack$ manila access-allow --access-level rw
f8da8ff6-15e6-4e0c-814b-d6ba8d08543c ip 10.182.111.81
+--------------+--------------------------------------+
|   Property   |                Value                 |
+--------------+--------------------------------------+
|   share_id   | f8da8ff6-15e6-4e0c-814b-d6ba8d08543c |
|   deleted    |                False                 |
|  created_at  |      2015-04-28T18:01:49.557300      |
|  updated_at  |                 None                 |
| access_type  |                  ip                  |
|  access_to   |            10.182.111.81             |
| access_level |                  rw                  |
|    state     |                 new                  |
|  deleted_at  |                 None                 |
|      id      | ddcfc2d2-7e71-443a-bd94-81ad05458e32 |
+--------------+--------------------------------------+
Use the manila access-list <share-id> command to display the different
access given to instances.
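For example, using the prod_fs share ID shown in step 2:
manila@C4110-R720xd-111045:~/OpenStack$ manila access-list ecba1f14-86b0-4460-a286-a7e938162fb4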
Creating an OpenStack Manila share snapshot
You can create an OpenStack Manila share snapshot, which is equivalent to creating
a snapshot (checkpoint) in Veritas Access. Creating an OpenStack Manila share
snapshot creates a checkpoint of the specific file system on Veritas Access. The
checkpoint that is created is non-removable.
Deleting a snapshot deletes the checkpoint of that file system.
To create an OpenStack Manila share snapshot
◆ On the OpenStack controller node, if you want to create fin_snap and
prod_snap snapshots, enter the following:
manila@C4110-R720xd-111045:~/OpenStack$ manila snapshot-create --name fin_snap
d3ab5cdc-4300-4f85-b4a5-e2a55d835031
manila@C4110-R720xd-111045:~/OpenStack$ manila snapshot-create --name prod_snap
2269b813-0031-419e-a2d3-0073cdb2776e
Use the manila snapshot-list command to display the snapshots you
created.
Section 10
Managing Veritas Access storage services
■ Chapter 21. Deduplicating data
■ Chapter 22. Compressing files
■ Chapter 23. Configuring SmartTier
■ Chapter 24. Configuring SmartIO
■ Chapter 25. Configuring replication
■ Chapter 26. Using snapshots
■ Chapter 27. Using instant rollbacks
■ Chapter 28. Configuring Veritas Access with the NetBackup client
Chapter 21
Deduplicating data
This chapter includes the following topics:
■ About data deduplication
■ Best practices for using the Veritas Access deduplication feature
■ Setting up deduplication
■ Configuring deduplication
■ Manually running deduplication
■ Scheduling deduplication
■ Setting deduplication parameters
■ Removing deduplication
■ Verifying deduplication
About data deduplication
Data deduplication is the process by which redundant data is eliminated to improve
storage utilization. Using data deduplication, you can reduce the amount of storage
required for storing user and application data. It is most effective in use cases where
many very similar or even identical copies of data are stored. The
deduplication feature in Veritas Access provides storage optimization for primary
storage (storage of active data).
Each file in the configured file system is broken into user-configurable chunks for
evaluating duplicates. The smaller the chunk size, the higher the percentage of
sharing due to better chances of matches.
The first deduplication of a file system is always a full deduplication of the entire
file system. This is an end-to-end deduplication process that identifies and eliminates
duplicate data. Any subsequent attempt to run deduplication on that file system
results in incremental deduplication.
Note: Deduplication with a small chunk size increases the deduplication time and
load on the system.
Veritas Access deduplication is periodic, that is, as per the user-configured
frequency, redundant data in the file system is detected and eliminated.
Use cases for deduplication
The following are potential use cases for Veritas Access file system deduplication:
■ Microsoft Exchange mailboxes
■ File systems hosting user home directories
■ Virtual Machine Disk Format (VMDK) or virtual image stores.
Relationship between physical and logical data on a file system
Table 21-1 shows an estimated file system data size that can be supported for a
Veritas Access deduplicated file system.
Table 21-1    Relationship between physical and logical data on a file system
              for two billion unique fingerprints with various deduplication ratios

Fingerprint    Deduplication    Unique           Physical file     Effective logical
block size     ratio            signature        system data       file system
                                per TB           size              data size
4K             50%              128 M            16 TB             32 TB
4K             65%              90 M             23 TB             65 TB
4K             80%              51 M             40 TB             200 TB
8K             50%              64 M             32 TB             64 TB
8K             65%              45 M             46 TB             132 TB
8K             80%              25 M             80 TB             400 TB
16 K           50%              32 M             64 TB             128 TB
16 K           65%              22 M             93 TB             266 TB
16 K           80%              13 M             158 TB            800 TB
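For example, the rows are consistent with the relationship effective logical size =
physical size / (1 - deduplication ratio): at a 4K block size and a 50% ratio,
16 TB / (1 - 0.50) = 32 TB of effective logical data, and at an 80% ratio,
40 TB / (1 - 0.80) = 200 TB.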
Overview of the deduplication workflow
Figure 21-1    Overview of the deduplication workflow
The figure shows the deduplication workflow: the Storage> dedup CLI command
drives the deduplication administration process and its scheduler. The first run (or
dry run) performs a full scan of the file system (for example, /vx/fs1), fingerprints
the data, populates the deduplication log, and constructs the deduplication database
before eliminating duplicates in the file system. Incremental runs use the file change
log and file system Storage Checkpoints to detect new, modified, deleted, and
truncated data, update the deduplication log, rebuild the deduplication database,
and eliminate duplicates.
The Storage> dedup commands perform administrative functions for the Veritas
Access deduplication feature. The deduplication commands allow you to enable,
disable, start, stop, and remove deduplication on a file system. You can also reset
several deduplication configuration parameters and display the current deduplication
status for your file system.
Note: Some configuration parameters can be set as local (specific to a file system)
or global (applicable to all deduplication-enabled file systems). Local parameters
override the value of a global parameter.
Best practices for using the Veritas Access
deduplication feature
The following are best practices when using the Veritas Access deduplication
feature:
■ Deduplication is most effective when the file system block size and the
  deduplication block size are the same for file systems with block sizes of 4K
  and above. This also allows the deduplication process to estimate space savings
  more accurately.
■ The smaller the file system block size and the deduplication block size, the
  longer deduplication takes. Smaller block sizes, such as 1 KB and 2 KB, increase
  the number of data fingerprints that the deduplication database has to store.
  Though the file system block size is data-dependent, the recommended block
  size for optimal deduplication is 4 KB for file systems smaller than 1 TB. For
  file systems of 1 TB and above, it is 8 KB.
■ For VMware NFS datastores that store Virtual Machine Disk Format (VMDK)
  images, a 4 KB block size is optimal.
■ Compressed media files for images, music, and video, such as JPEG, MP3, and
  .MOV, as well as databases, do not deduplicate or compress effectively.
■ Home directory file systems are good candidates for deduplication.
■ Deduplication is a CPU- and I/O-intensive process. It is a best practice to schedule
  deduplication when the load on your file systems is expected to be low.
■ Changes in the file system are evaluated using the file system's File Change
  Log (FCL). Scheduling deduplication too infrequently may cause the FCL to
  roll over, thereby missing changes and deduplication opportunities in the file
  system.
■ After enabling deduplication on file systems with existing data, the first
  deduplication run does a full deduplication. This can be time-consuming, and
  may take 12 to 15 hours per TB, so plan accordingly.
■ The deduplication database takes up 1% to 7% of logical file system data. In
  addition, during deduplication processing, additional but temporary storage
  space is required. Though 15% free space is enforced, it is recommended to
  have 30% free space when the deduplication block size is less than 4096 (4 KB).
■ If you plan to use the deduplication scheduler, you must have a Network Time
  Protocol (NTP) server enabled and configured.
Setting up deduplication
This is an end-to-end sample scenario of deduplication.
To deduplicate data
1
Ensure that the file system is deduplication-enabled. For example:
Storage> dedup list
See “Configuring deduplication” on page 298.
If the file system is not deduplication-enabled, you will need to enable it. For
example:
Storage> dedup enable fs_name blksize
See “Configuring deduplication” on page 298.
2
(Optional) Once deduplication is enabled, you can set the CPU usage, the
memory, and priority for the deduplication-enabled file system.
See “Setting deduplication parameters” on page 307.
3
Veritas recommends running a dryrun.
The dryrun provides the expected space savings as a percentage. If a threshold
value is provided, the fsdedup command deduplicates data only if the expected
savings cross the specified threshold.
Storage> dedup dryrun fs_name [threshold]
See “Manually running deduplication” on page 302.
Note: You cannot perform a dryrun on a file system that has already been deduplicated.
4
You can either choose to start deduplication now or set up the deduplication
schedule:
To start deduplication now
â– 
Start deduplication now. For example:
Storage> dedup start fs_name [nodename]
See “Manually running deduplication” on page 302.
Note: If the system where you started deduplication crashes, the
deduplication job fails over to one of the other nodes in the cluster. Run the
dedup status fs_name command to find out the status. The dedup status
command can temporarily show status as "FAILOVER" which means dedup
job is currently being failed over and will resume shortly. dedup failover is
applicable for deduplication jobs started with the dedup start command
only. It is not applicable for scheduled dedup jobs.
To set up the deduplication schedule
â– 
Set up the deduplication schedule. For example:
Storage> dedup schedule set fs_name hours day [freq]
See “Scheduling deduplication” on page 304.
5
You can check the status of the deduplication process. For example:
Storage> dedup status [fs_name]
See “Verifying deduplication” on page 310.
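The following is a minimal sketch of this workflow for a hypothetical file system
named fs1, using a 4096-byte deduplication block size and a 60% dryrun threshold:
Storage> dedup enable fs1 blksize=4096
Storage> dedup dryrun fs1 60
Storage> dedup start fs1
Storage> dedup status fs1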
Configuring deduplication
To enable deduplication on a file system
◆ To enable deduplication on a file system, enter the following:
Storage> dedup enable fs_name blksize
Note: Deduplication must be enabled for a file system before setting file system
configuration parameters and schedules.
This command also re-enables a deduplication schedule for a file system.
Enabling deduplication does not automatically deduplicate a file system.
Deduplication has to be manually started by using the Storage> dedup start
command or by setting up a schedule by using the Storage> dedup schedule
set command.
fs_name
Specify the file system name for which you want to enable
deduplication.
blksize
Specify the deduplication block size of the file system in bytes,
where possible values of bytes are the following:
■ blksize=0 (Default)
■ blksize=1024
■ blksize=2048
■ blksize=4096
■ blksize=8192
■ blksize=16384
■ blksize=32768
■ blksize=65536
■ blksize=131072
Specify the deduplication block size in bytes, for example, 4096.
The deduplication block size should be a power of 2. For example,
3 KB, is not a valid deduplication block size. The deduplication
block size is a multiple of the file system's block size, and should
be equal to or less than 128 KB.
0 is the default configuration for the deduplication block size.
If blksize=0 is specified while enabling deduplication, then if the
file system block size is < 4096, then the deduplication block size
is set to 4096. Otherwise, the deduplication block size is set to the
same as the file system block size.
Note: Once the deduplication block size is set when enabling file
system deduplication, the deduplication block size cannot be
changed. The only way to change the deduplication block size is
to remove deduplication on the file system and then re-enable
deduplication on the file system.
For example, to enable deduplication on the file system fs1, enter:
Storage> dedup enable fs1 blksize=4096
Note: For deduplication-enabled file systems, you are prompted to destroy the
file system during Storage> fs offline and Storage> fs destroy
operations.
To disable deduplication on a file system
◆ To disable deduplication on a file system, enter the following:
Storage> dedup disable fs_name
where fs_name is the name of the deduplication-enabled file system that you
want to disable.
Only the deduplication schedule is suspended for a deduplication-disabled file
system. All other configuration parameters, for example, file system
configuration, schedule, and the deduplication database remain intact.
Note: Keeping a file system deduplication-disabled for a significant period of
time may reduce the effectiveness of deduplication when it is re-enabled.
To list the deduplication-enabled file system or file systems
◆ To list the deduplication-enabled file system or file systems, enter the following:
Storage> dedup list fs_name
where fs_name is the name of the deduplication-enabled file system that you
want to list.
For example, to list the deduplication-enabled file systems, fs1, and then fs2,
enter:
Storage> dedup list fs1
Storage> dedup list fs2
Schedule hours are displayed as:
■ * - is displayed as "Every hour"
■ */N - is displayed as "Every N hours"
■ 0,6,12,18,23 are shown as "00:00, 06:00, 12:00, 18:00, 23:00"
Note: 0 is replaced by 00:00, 1 is replaced by 01:00, 23 is replaced by 23:00
Schedule day interval is displayed as:
■ * - is displayed as "Every day"
■ */N - is displayed as "Every N days"
■ 1 - is displayed as "Every Sunday"
■ 2 - is displayed as "Every Monday"
■ 3 - is displayed as "Every Tuesday"
■ 4 - is displayed as "Every Wednesday"
■ 5 - is displayed as "Every Thursday"
■ 6 - is displayed as "Every Friday"
■ 7 - is displayed as "Every Saturday"
If you issue the command without fs_name, you get a list of all the
deduplication-enabled file systems.
Storage> dedup list
The Default column header indicates the global value (applicable to all
deduplication-enabled file systems). For example, if you have not set Priority,
CPU, and Memory for file system fs1, the deduplication process uses the
global value. Veritas Access deduplication uses the default values for global
settings options. Local parameters override the value of global parameters.
Manually running deduplication
To create a deduplication dryrun
◆ To create a deduplication dryrun, enter the following command:
Storage> dedup dryrun fs_name [threshold]
The Storage> dedup dryrun command is useful for determining the
statistics/potential savings on the file system data if actual deduplication is
performed. Most accurate statistics are obtained when the file system block
size and the deduplication block size are the same.
Note: You cannot perform a dryrun on a file system that has already been
deduplicated.
fs_name
Specify the file system name for which you want to create a dryrun.
threshold
Specify the threshold percentage in the range of [0-100].
A dryrun is automatically converted to the actual deduplication if
the dryrun meets the threshold value. For example, if you specified
a threshold value of 40%, and if deduplication results in a space
savings of >=40%, then the dryrun is automatically converted to
the actual deduplication.
To check whether the deduplication dryrun reaches a threshold value of
60%, enter the following:
Storage> dedup dryrun fs1 60
To start the deduplication process
◆ To manually start the deduplication process, enter the following:
Storage> dedup start fs_name [nodename]
where fs_name is the name of the file system where you want to start the
deduplication process and nodename is the node in the cluster where you want
to start deduplication. You can run deduplication on any node in the cluster.
Note: If the system where you started deduplication crashes, the deduplication
job fails over to one of the other nodes in the cluster. Run the dedup status
fs_name command to find out the status. The dedup status command can
temporarily show status as "FAILOVER" which means dedup job is currently
being failed over and will resume shortly. dedup failover is applicable for
deduplication jobs started with the dedup start command only. It is not
applicable for scheduled dedup jobs.
When the deduplication process is started for the first time, a full scan of the
file system is performed. Any subsequent attempt to run deduplication requires
an incremental scan only.
For example:
Storage> dedup start fs1 node_01
Note: When deduplication is running on a file system and you run the Storage>
fs offline or Storage> fs destroy commands, these operations can
proceed only after deduplication is stopped by using the Storage> dedup
stop command.
To stop the deduplication process
◆ To stop the deduplication process running on a file system, enter the following
command:
Storage> dedup stop fs_name
where fs_name is the name of the file system where you want to stop the
deduplication process.
Note: The deduplication process may not stop immediately as a consistent
state is ensured while stopping. Use the Storage> dedup status command
to verify if the deduplication process has stopped.
Scheduling deduplication
To set the deduplication schedule
1
To set the deduplication schedule, enter the following:
Storage> dedup schedule set fs_name hours day [freq]
The Storage> dedup schedule set command can only be set as a local
parameter.
Two categories of schedules are allowed: run periodicity and type periodicity.
The granularity of the schedule is limited to the time of day and the day of the
month.
fs_name
Specify the file system where you want to set the deduplication
schedule.
hours
Specify the hours value for setting the deduplication schedule.
There are three types of values for the hours field:
■ * - indicates every hour.
■ */N - indicates every Nth hour, where N is in the range [1-12].
■ You can also specify 5 comma-separated hours in the range
  of [0-23], for example, 0,6,12,18,23.
day
Specify the interval in days for setting the deduplication schedule.
There are three types of values for the day field:
■ * - indicates every day.
■ */N - indicates every Nth day, where N is in the range of [1-15].
■ Any number in the range of [1-7] where:
  ■ 1 - Sunday
  ■ 2 - Monday
  ■ 3 - Tuesday
  ■ 4 - Wednesday
  ■ 5 - Thursday
  ■ 6 - Friday
  ■ 7 - Saturday
The deduplication scheduler will only pick up the schedule if the
schedule is enabled for deduplication.
freq
Specify the frequency to run the deduplication schedule in the
range of [1-5]. The default frequency is [1].
This value controls deduplication load on the file system by
distributing phases of deduplication across various runs, and
potentially across systems in the cluster. A value of 4 means that
every 4th run deduplicates the file system, whereas the other runs
consolidate the changes.
2
When you set a deduplication schedule, keep in mind the following:
■ If the hour value for a schedule is set as */N, then the deduplication
  scheduler picks up the schedule every Nth hour starting from 00:00 and
  ending at 23:00. The hour schedule resets at the end of the day.
  For example, if the hour value is */5, then the schedule time will be 00:00,
  05:00, 10:00, 15:00, and 20:00 hours. On the next day, the schedule repeats
  at the same times.
■ If the day value for a schedule is set as */N, then the deduplication scheduler
  picks up the schedule every Nth day starting from the 1st day of the month
  and ending with the last day of the month. The day schedule resets at the
  end of each month.
  For example, if the day value is */5, then the schedule day is on the 1st,
  6th, 11th, 16th, 21st, 26th, and 31st days for a 31-day month. For the next
  month, the schedule repeats on the same days.
■ For both the hour and day schedule, the * and */1 values are
  interchangeable. They indicate every hour and every day.
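For example, a sketch of a schedule for a hypothetical file system fs1 that runs
every 6 hours, every second day, with a frequency value of 4:
Storage> dedup schedule set fs1 */6 */2 4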
To modify the deduplication schedule
â—†
To modify the deduplication schedule, enter the following:
Storage> dedup schedule modify fs_name hours day freq
fs_name
Specify the file system where you want to modify the deduplication
schedule.
hours
Specify the hours value for modifying the deduplication schedule.
There are three types of values for the hours field:
■ * - indicates every hour.
■ */N - indicates every Nth hour, where N is in the range [1-12].
■ You can also specify 5 comma-separated hours in the range
  of [0-23].
For example:
Storage> dedup schedule modify fs1 0,6,12,18,23 2 3
ACCESS dedup SUCCESS V-288-0 Schedule modify on
file system fs1.
day
Specify the interval in days for modifying the deduplication
schedule.
There are three types of values for the day field:
■ * - indicates every day.
■ */N - indicates every Nth day, where N is in the range of [1-15].
■ Any number in the range of [1-7] where:
  ■ 1 - Sunday
  ■ 2 - Monday
  ■ 3 - Tuesday
  ■ 4 - Wednesday
  ■ 5 - Thursday
  ■ 6 - Friday
  ■ 7 - Saturday
Note: The deduplication scheduler will only pick up the schedule
if the schedule is enabled for deduplication.
freq
Specify the frequency to run the deduplication schedule in the
range of [1-5].
To delete the deduplication schedule
◆ To delete the deduplication schedule, enter the following:
Storage> dedup schedule delete fs_name
where fs_name is the name of the file system for which you want to delete the
deduplication schedule.
Setting deduplication parameters
To set the CPU usage for the deduplication-enabled file system
◆ To set the CPU usage for the file system, enter the following:
Storage> dedup set cpu cpuvalue fs_name
cpuvalue
Specify the CPU usage behavior for the deduplication-enabled
file system.
The following are the available values:
■ IDLE - indicates that the deduplication process consumes as
  much CPU processing as is available. For example, if the CPUs
  are IDLE, then the deduplication process takes all of the idle
  CPUs, and performs the deduplication job faster. CPU usage
  may reach 100% on each available CPU.
■ YIELD (default) - indicates that the deduplication process yields
  the CPU periodically; that is, even if the CPUs are not busy,
  the deduplication process relinquishes the CPU. More time
  may be taken for the same job in some scenarios. However,
  the yield value ensures that the deduplication process does
  not hang onto the CPU, or cause CPU usage spikes.
fs_name
Specify the deduplication-enabled file system for which you want
to set the CPU usage.
Note: If a file system name is specified, the Storage> dedup
set cpu command sets the CPU value for that file system.
Otherwise, the CPU value is applicable to all file systems, which
have not overridden the CPU value.
To set the deduplication memory allocation limit for the deduplication-enabled
file system
◆ To set the deduplication memory limit in MB for the deduplication-enabled file
system, enter the following:
Storage> dedup set memory memvalue
where memvalue is the memory value in MB, for example, 1024.
The memvalue controls the maximum memory per deduplication process.
Note: Care must be taken to increase memvalue if large file systems are
present. Otherwise, deduplication efficiency may be affected. Since this is a
limit value, only the required memory is consumed for smaller file system
deduplication jobs. Note that scheduled deduplication jobs start deduplication
based on the available memory; therefore, if available RAM in the system falls
below the configured memory allocation limit for deduplication, the deduplication
scheduler on that system postpones the scheduled deduplication. At this point,
other systems with available memory start deduplication. If the job remains
postponed for 1 hour, the job will be abandoned.
To set the deduplication priority for the deduplication-enabled file system
◆ To set the deduplication priority (importance) for the deduplication-enabled file
system, enter the following:
Storage> dedup set priority priorityvalue [fs_name]
priorityvalue
Specify the importance of deduplication for the file system. The
setting of this parameter is local (specific to a file system). The
priorityvalue parameter is used by the deduplication scheduler to
evaluate if starting deduplication at the scheduled time is
appropriate or not based on the state of the file system at that
time.
priorityvalue is also a load-balancing mechanism whereby a
less-loaded system in the cluster may pick up a scheduled
deduplication job.
Available values are the following:
■ LOW (default) - indicates that if the system has sustained CPU
  usage of 50% or more in the last one hour, the file systems
  marked as LOW have their deduplication schedules skipped,
  with a message in the syslog.
■ NORMAL - indicates that if the system has sustained CPU
  usage of 80% or more in the last one hour, the file systems
  marked as NORMAL have their deduplication schedules
  skipped, with a message in the syslog.
■ HIGH - indicates that starting deduplication is a must for this
  file system, and without evaluating any system state,
  deduplication is started at the scheduled time.
fs_name
Specify the file system where you want to set the deduplication
priority.
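A minimal sketch of these parameter commands, assuming a deduplication-enabled
file system named fs1 (the values shown are illustrative):
Storage> dedup set cpu IDLE fs1
Storage> dedup set memory 1024
Storage> dedup set priority HIGH fs1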
Removing deduplication
To remove deduplication configuration-related information from the specified
file system
◆ Enter the following:
Storage> dedup remove fs_name
where fs_name is the name of the file system for which you want to remove
deduplication.
This command removes all configurations and the deduplication database for
the specified file system.
Note: This operation cannot be undone, and re-running deduplication on your
file system may take a significant amount of time.
Verifying deduplication
To obtain status information for a specified deduplication-enabled file system
or all deduplication-enabled file systems
◆ Enter the following command:
Storage> dedup status [fs_name]
where fs_name is the specified deduplication-enabled file system for which you
want to obtain current status information.
If you issue the command without fs_name, you get status information for all
deduplication-enabled file systems.
If you issue the command with fs_name, you get the detailed status information
for the specified file system, along with any error messages or warnings.
The following table describes the output from the Storage> dedup status
command:
Filesystem
Displays the directory where the file system is mounted.
Savings
Displays the savings as a percentage. The value can mean
different things during the course of deduplication.
When the deduplication is in a COMPLETED state, or when the
deduplication process is computing the expected deduplication,
the value in this column shows the actual sharing in the file system.
However, when the expected deduplication calculation is complete,
this column value shows the expected deduplication. The expected
deduplication calculation is based on user data only; therefore, at
the end of deduplication, the saving percentage may vary from
the expected deduplication percentage. This is because the actual
file system deduplication percentage not only takes into
consideration the user data, but also file system and deduplication
metadata. This difference may be pronounced if the user data is
very small. For a failed deduplication, the value is undefined.
Status
Displays one of the following status values:
■ RUNNING
■ COMPLETED
■ STOPPED
■ FAILED
■ FAILOVER
■ NONE - indicates that deduplication has not been previously
  run on this file system.
Node
Indicates the node name where the deduplication job is either
running or has completed for a file system.
Type
The following are the types of deduplication jobs:
■ MANUAL - the deduplication job is started by using either the
  Storage> dedup start command or the Storage> dedup
  dryrun command.
■ SCHEDULED - the deduplication job is started by the
  deduplication scheduler.
Details
Displays the status of the file system deduplication activity.
The deduplication process writes its status in the status log. The
relevant status log is displayed in this column. For a long-running
deduplication process, the status log may also show the actual
file system sharing as a progress indicator. This actual file system
sharing percentage along with the expected saving percentage in
the Savings column gives a good estimate of the progress. When
displaying deduplication status for a specific file system, any errors
or warnings for the deduplication run are also shown. The Details
column gives a detailed idea of what to look for in case of any
issues.
Chapter 22
Compressing files
This chapter includes the following topics:
■ About compressing files
■ Use cases for compressing files
■ Best practices for using compression
■ Compression tasks
About compressing files
Compressing files reduces the space used, while retaining the accessibility of the
files and being transparent to applications. Compressed files look and behave
almost exactly like uncompressed files: the compressed files have the same name,
and can be read and written as with uncompressed files. Reads cause data to be
uncompressed in memory only; the on-disk copy of the file remains compressed.
In contrast, after a write, the new data is uncompressed on disk.
Only user data is compressible. You cannot compress Veritas File System (VxFS)
metadata.
After you compress a file, the inode number does not change, and file descriptors
opened before the compressions are still valid after the compression.
Compression is a property of a file. Thus, if you compress all files in a directory, for
example, any files that you later copy into that directory do not automatically get
compressed. You can compress the new files at any time by compressing the files
in the directory again.
You compress files with the Storage> compress command.
See “Compression tasks” on page 314.
See the storage_compress(1) manual page.
To compress files, you must have VxFS file systems with disk layout Version 8 or
later.
Note: When you back up compressed files to tape, the backup program stores the
data in an uncompressed format. The files are uncompressed in memory and
subsequently written to the tape. This results in increased CPU and memory usage
when you back up compressed files.
About the compressed file format
A compressed file is a file with compressed extents. A compress call compresses
all extents of a file. However, writes to the file cause the affected extents to get
uncompressed; the result can be files with both compressed and uncompressed
extents.
About the file compression attributes
When you compress a file with the Storage> compress command, compress
attaches the following information to the inode:
â– 
Compression algorithm
â– 
Compression strength, which is a number from 1 to 9
â– 
Compression block size
This information is referred to as the file compression attributes. The purpose of
the attributes is to collect the parameters used to create the compressed file. The
information can then be read by a backup program.
The file compression attributes guarantee that a particular compressed file can only
use one type and strength of compression. Recompressing a file using different
attributes fails. To change the file compression attributes, you must explicitly
uncompress first, and then recompress with the new options, even in the case
where all extents are already uncompressed.
The file compression attributes do not indicate if all extents are compressed. Some
extents might be incompressible, and other extents or even all extents might be
uncompressed due to writes, but the file compression attributes remain. Only an
explicit file uncompression can remove the attributes.
About the file compression block size
The file compression algorithm compresses data in the specified block size, which
defaults to 1MB. Each compression block has its own extent descriptor in the inode.
If the file or the last extent is smaller than the compression block size, then that
smaller size gets compressed. The maximum block size is 1MB.
Extents with data that cannot be compressed are still marked as compressed
extents. Even though such extents cannot be compressed, marking these extents
as compressed allows successive compression runs to skip these extents to save
time. Shared extents cannot be compressed and do not get marked as compressed.
Since the file compression algorithm looks at fixed-size blocks, the algorithm finds
these incompressible extents in units of the file compression block size.
Use cases for compressing files
The following list contains common use case categories:
■ If files are old and not accessed frequently. For example:
  ■ Compress database archive logs which are older than 8 days.
  ■ Compress jpeg files which are not accessed in 30 days.
Best practices for using compression
Best practices for using compression:
■ Schedule compression during non-peak hours.
Compression tasks
Table 22-1
Compression tasks
How to
Task
How to compress a file or all files in a
directory
See “Compressing files” on page 315.
How to scheduled compression jobs
See “Scheduling compression jobs”
on page 315.
How to list compressed files
See “Listing compressed files” on page 317.
How to show the scheduled compression job See “Scheduling compression jobs”
on page 315.
How to uncompress a file or all files in a
directory
See “Uncompressing files” on page 317.
314
Compressing files
Compression tasks
Table 22-1
315
Compression tasks (continued)
How to
Task
How to modify the scheduled compression
See “Modifying the scheduled compression”
on page 318.
How to remove the specified schdule.
See “Removing the specified schedule”
on page 319.
How to stop the schedule for a file system.
See “Stopping the schedule for a file system”
on page 320.
How to remove the pattern-related rule for a See “Removing the pattern-related rule for a
file system
file system” on page 320.
How to remove the modification age
(age-based) related rule for a file system
See “Removing the modified age related rule
for a file system” on page 320.
Compressing files
You can compress a file or compress all files in a directory.
To compress a file
◆ Compress a file:
Storage> compress file fs_name file_or_dir resource_level algorithm
where fs_name is the name of the file system.
where file_or_dir is the name of the file or directory.
where resource_level is either low, medium, or high.
where algorithm is the file compression algorithm strength [1-9]. For example,
you specify strength gzip-3 compression as "3".
See “About the file compression attributes” on page 313.
To compress all files in a directory
◆ Compress all files in a directory:
Storage> compress file fs_name file_or_dir resource_level algorithm
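For example, a sketch that compresses a hypothetical directory /vx/fs1/data on
file system fs1 at a low resource level with gzip-3 strength:
Storage> compress file fs1 /vx/fs1/data low 3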
Scheduling compression jobs
Scheduling compression jobs lets you set up pattern-based and age-based
compression.
To schedule compression
1
Create a scheduled compression:
Storage> compress schedule create new_schedule duration min \
[hour] [day_of_month] [month] [day_of_week] [node]
where new_schedule is the name of the schedule.
where duration is the duration specified in hours (1 or more).
where min is the minutes.
where hour is the hours.
where day_of_month is the day of the month.
where month is the month.
where day_of_week is the day of the week.
where node is the name of the node or you can use "any".
For example:
Storage> compress schedule create schedule1 3 0 1 * * 6
This creates a schedule called "schedule1" that starts compression at 1:00
am every Friday and runs for only 3 hours.
2
Start the schedule for a given file system:
Storage> compress schedule start fs_name schedule_name
resource_level algorithm
where fs_name is the name of the file system.
where schedule_name is the name of the schedule.
where resource_level is either low, medium, or high.
where algorithm is the file compression algorithm strength [1-9]. For example,
you specify strength gzip-3 compression as "3".
3
Show the scheduled compression:
Storage> compress schedule show new_schedule
4
(Optional) Create a pattern for the file system.
Storage> compress pattern create fs_name pattern
where pattern is the extensions of the file names separated by ",". For example,
*.arc,*.dbf,*.tmp.
5
(Optional) Create a modification age rule for the file system.
Storage> compress modage create fs_name mod_age
where mod_age is the modification age (age-based) specified units are in days.
6
If you performed step 4 or 5, you can list the schedule details for the file system:
Storage> compress schedule list fs_name
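For example, a sketch of steps 4 through 6 for a hypothetical file system fs1 with
a 30-day modification age:
Storage> compress pattern create fs1 *.arc,*.dbf,*.tmp
Storage> compress modage create fs1 30
Storage> compress schedule list fs1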
Listing compressed files
To list compressed files
◆ List compressed files:
Storage> compress list fs_name file_or_dir
where fs_name is the name of the file system.
where file_or_dir is the name of the file or directory.
Showing the scheduled compression job
To show the scheduled compression job
◆ Show the scheduled compression job:
Storage> compress schedule show new_schedule
where new_schedule is the name of the schedule.
Uncompressing files
To uncompress a file
◆ Uncompress a file:
Storage> uncompress file fs_name file_or_dir resource_level
where fs_name is the name of the file system.
where file_or_dir is the name of the file or directory.
where resource_level is either low, medium, or high.
To uncompress all files in a directory
◆ Uncompress all files in a directory:
Storage> uncompress file fs_name file_or_dir resource_level
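For example, continuing the hypothetical fs1 example:
Storage> uncompress file fs1 /vx/fs1/data low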
Modifying the scheduled compression
To change the scheduled compression
1
Stop the schedule for the file system:
Storage> compress schedule stop fs_name
where fs_name is the name of the file system.
For example:
Storage> compress schedule stop tpcc_data1
2
Remove the specified schedule:
Storage> compress schedule remove new_schedule
3
Create a scheduled compression:
Storage> compress schedule create new_schedule duration min \
[hour] [day_of_month] [month] [day_of_week] [node]
where new_schedule is the name of the schedule.
where duration is the duration specified in hours (1 or more).
where min is the minutes.
where hour is the hours.
where day_of_month is the day of the month.
where month is the month.
where day_of_week is the day of the week.
where node is the name of the node or you can use "any".
For example:
Storage> compress schedule create schedule2 3 0 2 * * 6
This creates a schedule called "schedule2" that starts compression at 2:00
am every Friday and runs for only 3 hours.
4
Start the schedule for a given file system:
Storage> compress schedule start fs_name schedule_name
resource_level algorithm
where fs_name is the name of the file system.
where schedule_name is the name of the schedule.
where resource_level is either low, medium, or high.
where algorithm is the file compression algorithm strength [1-9]. For example,
you specify strength gzip-3 compression as "3".
Removing the specified schedule
To remove the specified schedule
◆ Enter the following:
Storage> compress schedule remove new_schedule
where new_schedule is the name of the schedule.
Stopping the schedule for a file system
To stop the schedule for a file system
◆ Enter the following:
Storage> compress schedule stop fs_name
where fs_name is the name of the file system.
Removing the pattern-related rule for a file system
To remove the pattern-related rule for a named file system
◆ Enter the following:
Storage> compress pattern remove fs_name
where fs_name is the name of the file system.
Removing the modified age related rule for a file system
To remove the modified age related rule for a file system
◆ Enter the following:
Storage> compress modage remove fs_name
where fs_name is the name of the file system.
Chapter 23
Configuring SmartTier
This chapter includes the following topics:
■ About Veritas Access SmartTier
■ How Veritas Access uses SmartTier
■ Adding tiers to a file system
■ Adding or removing a column from a secondary tier of a file system
■ Configuring a mirror to a tier of a file system
■ Listing all of the files on the specified tier
■ Displaying a list of SmartTier file systems
■ About tiering policies
■ About configuring the policy of each tiered file system
■ Configuring the policy of each tiered file system
■ Best practices for setting relocation policies
■ Relocating a file or directory of a tiered file system
■ Displaying the tier location of a specified file
■ About configuring schedules for all tiered file systems
■ Configuring schedules for tiered file systems
■ Displaying the files that may be moved or pruned by running a policy
■ Allowing metadata information on the file system to be written on the secondary
  tier
■ Restricting metadata information to the primary tier only
■ Removing a tier from a file system
About Veritas Access SmartTier
The Veritas Access SmartTier feature makes it possible to allocate two tiers of
storage to a file system.
The following features are part of the Veritas Access SmartTier solution:
■ Relocate files between primary and secondary tiers automatically as files age
  and become less business critical.
■ Prune files on secondary tiers automatically as files age and are no longer
  needed.
■ Promote files from a secondary storage tier to a primary storage tier based on
  I/O temperature.
■ Retain original file access paths to eliminate operational disruption, for
  applications, backup procedures, and other custom scripts.
■ Let you manually move folders, files, and other data between storage tiers.
■ Enforce the policies that automatically scan the file system and relocate files
  that match the appropriate tiering policy.
In Veritas Access, there are two predefined tiers for storage:
■ Current active tier 1 (primary) storage.
■ Tier 2 (secondary) storage for aged or older data.
Note: The SmartTier functionality is available only for a Cluster File System and
not for a scale-out file system.
To configure Veritas Access SmartTier, add tier 2 (secondary) storage to the
configuration. Specify where the archival storage resides (storage pool) and the
total size.
Files can be moved from the active storage after they have aged for a specified
number of days, depending on the policy selected. The number of days for files to
age (not accessed) before relocation can be changed at any time.
Note: An aged file is a file that exists without being accessed.
Figure 23-1 depicts the features of Veritas Access and how it maintains application
transparency.
Figure 23-1    SmartTier features
The figure shows a single file system namespace (for example, /one-file-system
containing sales, financial, and development directories with current, forecast, and
history data) whose files are placed transparently on either a mirrored primary
storage tier or a RAID-5 secondary storage tier.
For those familiar with Veritas Volume Manager (VxVM), every Veritas Access file system is a multi-volume file system (one file system resides on two volumes). The SmartTier tiers are predefined to simplify the interface. When an administrator adds storage tiering, a second volume is added to the volume set, and the existing file system spans all of the volumes in the file system.
How Veritas Access uses SmartTier
Veritas Access provides two types of tiers:
■ Primary tier
■ Secondary tier
Each newly created file system has only one primary tier initially. This tier cannot
be removed.
For example, the following operations are applied to the primary tier:
Storage> fs addmirror
Storage> fs growto
Storage> fs shrinkto
The Storage> tier commands manage file system tiers.
All Storage> tier commands take a file system name as an argument and perform
operations on the combined construct of that file system.
The Veritas Access file system default is to have a single storage tier. An additional
storage tier can be added to enable storage tiering. A file system can only support
a maximum of two storage tiers.
Storage> tier commands can be used to perform the following:
■ Adding, removing, or modifying the secondary tier
■ Setting policies
■ Scheduling policies
■ Locating the tier locations of files
■ Listing the files that are located on the primary or the secondary tier
■ Moving files from the secondary tier to the primary tier
■ Allowing metadata information on the file system to be written on the secondary tier
■ Restricting metadata information to the primary tier only
Adding tiers to a file system
You can add the following types of tiers to file systems:
■ simple
■ mirrored
■ striped
■ mirrored-stripe
■ striped-mirror
Note: If a rollback exists for the file system, adding a tier can cause inconsistencies
in the rollback hierarchy. The recommended method is to create the tier first and
then create the rollback.
To add a second tier to a file system
◆ To add a tier to a file system where the layout is "simple" (concatenated), enter the following:
Storage> tier add simple fs_name size pool1[,disk1,...]
To add a mirrored tier to a file system
◆ To add a mirrored tier to a file system, enter the following:
Storage> tier add mirrored fs_name size nmirrors pool1[,disk1,...] [protection=disk|pool]
To add a striped tier to a file system
◆ To add a striped tier to a file system, enter the following:
Storage> tier add striped fs_name size ncolumns pool1[,disk1,...] [stripeunit=kilobytes]
To add a mirrored-stripe tier to a file system
◆ To add a mirrored-stripe tier to a file system, enter the following:
Storage> tier add mirrored-stripe fs_name size nmirrors ncolumns pool1[,disk1,...] [protection=disk|pool] [stripeunit=kilobytes]
To add a striped-mirror tier to a file system
◆ To add a striped-mirror tier to a file system, enter the following:
Storage> tier add striped-mirror fs_name size nmirrors ncolumns pool1[,disk1,...] [protection=disk|pool] [stripeunit=kilobytes]
fs_name
Specifies the name of the file system to which the tier is added. If the specified file system does not exist, an error message is displayed.
size
Specifies the size of the tier to be added to the file system (for example, 10m, 10M, 25g, 100G).
ncolumns
Specifies the number of columns to add to the striped tiered file system.
nmirrors
Specifies the number of mirrors to be added to the tier for the specified file system.
pool1[,disk1,...]
Specifies the pool(s) or disk(s) that are used for the specified tiered file system. If the specified pool or disk does not exist, an error message is displayed. You can specify more than one pool or disk by separating the pool or the disk name with a comma, but do not include a space between the comma and the name. The disk needs to be part of the pool or an error message is displayed.
protection
If no protection level is specified, disk is the default protection level. The protection level of the second tier is independent of the protection level of the first tier. Available options are:
■ disk - If disk is entered for the protection field, then mirrors are created on separate disks. The disks may or may not be in the same pool.
■ pool - If pool is entered for the protection field, then mirrors are created in separate pools. If not enough space is available, then the file system is not created.
stripeunit=kilobytes
Specifies a stripe width of 128K, 256K, 512K, 1M, or 2M. The default stripe width is 512K.
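For example, assuming a file system named fs1 and a storage pool named pool1 (both names are illustrative), a 10-GB mirrored secondary tier with two mirrors placed on separate disks could be added as follows:
Storage> tier add mirrored fs1 10g 2 pool1 protection=disk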
Adding or removing a column from a secondary tier of a file system
You can add a column to a secondary tier of a file system.
To add a column to a secondary tier of a file system
◆ To add a column to a secondary tier of a file system, enter the following:
Storage> tier addcolumn fs_name ncolumns pool_or_disk_name
fs_name
Specifies the file system for which you want to add a column to the secondary tier.
ncolumns
Specifies the number of columns that you want to add to the secondary tier of the file system.
Note: In the case of striped file systems, the number of disks that are specified should be equal to the number of columns (ncolumns).
Note: In the case of mirrored-stripe and striped-mirror file systems, the number of disks that are specified should be equal to (ncolumns * number_of_mirrors_in_fs).
pool_or_disk_name
Specifies the pool or the disk name for the tiered file system.
To remove a column from a secondary tier of a file system
◆ To remove a column from a secondary tier of a file system, enter the following:
Storage> tier rmcolumn fs_name
where fs_name is the name of the tiered file system from whose secondary tier you want to remove the column.
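For example, assuming a tiered file system named fs1 with a striped secondary tier and a pool named pool1 (illustrative names), one column could be added and later removed as follows:
Storage> tier addcolumn fs1 1 pool1
Storage> tier rmcolumn fs1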
Configuring a mirror to a tier of a file system
To add a mirror to a tier of a file system
◆ To add a mirror to a tier of a file system, enter the following:
Storage> tier addmirror fs_name pool1[,disk1,...] [protection=disk|pool]
fs_name
Specifies the file system to which a mirror is added. If the specified file system does not exist, an error message is displayed.
pool1[,disk1,...]
Specifies the pool(s) or disk(s) that are used as a mirror for the specified tiered file system. You can specify more than one pool or disk by separating the name with a comma, but do not include a space between the comma and the name. The disk needs to be part of the pool or an error message is displayed.
protection
If no protection level is specified, disk is the default protection level. Available options are:
■ disk - If disk is entered for the protection field, then mirrors are created on separate disks. The disks may or may not be in the same pool.
■ pool - If pool is entered for the protection field, then mirrors are created in separate pools. If not enough space is available, then the file system is not created.
To remove a mirror from a tier of a file system
◆ To remove a mirror from a tier of a file system, enter the following:
Storage> tier rmmirror fs_name
where fs_name specifies the name of the tiered file system from which you want to remove a mirror.
This command provides another level of detail for the remove mirror operation. You can use the command to specify which mirror you want to remove by specifying the pool name or disk name.
The disk must be part of a specified pool.
To remove a mirror from a tier spanning a specified pool or disk
◆ To remove a mirror from a tier that spans a specified pool or disk, enter the following:
Storage> tier rmmirror fs_name [pool_or_disk_name]
fs_name
Specifies the name of the file system from which to remove a mirror. If the specified file system does not exist, an error message is displayed.
pool_or_disk_name
Specifies the pool or disk across which the mirror of the tiered file system spans.
The syntax for the Storage> tier rmmirror command is the same for both pool and disk. If you try to remove a mirror using Storage> tier rmmirror fs1 abc, Veritas Access first checks for a pool with the name abc and, if one exists, removes the mirror spanning that pool. If there is no pool with the name abc, then Veritas Access removes the mirror that is on the abc disk. If there is no disk with the name abc, then an error message is displayed.
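For example, assuming a tiered file system named fs1 and a pool named pool2 (illustrative names), a mirror could be added to the tier and later removed from that specific pool as follows:
Storage> tier addmirror fs1 pool2
Storage> tier rmmirror fs1 pool2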
Listing all of the files on the specified tier
You can list all of the files that reside on either the primary tier or the secondary
tier.
Note: If the tier contains a large number of files, it may take some time before the
output of this command is displayed.
To list all of the files on the specified tier
◆ To list all of the files on the specified tier, enter the following:
Storage> tier listfiles fs_name {primary|secondary}
where fs_name indicates the name of the tiered file system from which you want to list the files. You can specify to list files from either the primary or the secondary tier.
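For example, to list the files that reside on the secondary tier of a file system named fs1 (an illustrative name):
Storage> tier listfiles fs1 secondary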
Displaying a list of SmartTier file systems
You can display a list of SmartTier file systems using the Storage> fs list
command.
About tiering policies
Each tier can be assigned a policy.
The tiering policies include:
■ Specify on which tier (primary or secondary) the new files get created.
■ Relocate files from the primary tier to the secondary tier based on any number of days of inactivity of a file.
■ Relocate files from the secondary tier to the primary tier based on the access temperature of the file.
■ Prune files on the secondary tier based on the number of days of inactivity of a file.
Prune policies are related to the secondary tier. Prune policies are not related to the cloud tier.
About configuring the policy of each tiered file system
You can configure the policy of each tiered file system.
Table 23-1 Tier policy commands

tier policy list
Displays the policy for each tiered file system. You can have one policy for each tiered file system.
See “Configuring the policy of each tiered file system” on page 332.

tier policy modify
Modifies the policy of a tiered file system. New files are created on the primary tier. If a file has not been accessed for more than seven days, the file is moved from the primary tier to the secondary tier. If the access temperature of a file on the secondary tier is more than five, the file is moved from the secondary tier to the primary tier. The access temperature is calculated over a three-day period.
See “Configuring the policy of each tiered file system” on page 332.

tier policy prune
Specifies the prune policy of a tiered file system. Once files have aged on the secondary tier, the prune policy can be set up to delete those aged files automatically.
The sub-commands under this command are:
■ tier policy prune list
■ tier policy prune modify
■ tier policy prune remove
See “Configuring the policy of each tiered file system” on page 332.

tier policy run
Runs the policy of a tiered file system.
See “Configuring the policy of each tiered file system” on page 332.

tier policy remove
Removes the policy of a tiered file system.
See “Configuring the policy of each tiered file system” on page 332.
Configuring the policy of each tiered file system
To display the policy of each tiered file system
◆ To display the policy of each tiered file system, enter the following:
Storage> tier policy list
Each tiered file system can be assigned a policy. A policy that is assigned to a file system has three parts:
file creation
Specifies on which tier the new files are created.
inactive files
Indicates when a file has to be moved from the primary tier to the secondary tier. For example, if the days option of the tier is set to 10, and a file has not been accessed for more than 10 days, then the file is moved from the primary tier of the file system to the secondary tier.
access temperature
Measures the number of I/O requests to the file during the designated period. In other words, it is the number of read or write requests that are made to a file over a specified number of 24-hour periods, divided by the number of periods. If the access temperature of a file exceeds minacctemp (where the access temperature is calculated over the previously specified period), the file is moved from the secondary tier to the primary tier.
To modify the policy of a tiered file system
◆ To modify the policy of a tiered file system, enter the following:
Storage> tier policy modify fs_name {primary|secondary} days minacctemp period
fs_name
The name of the tiered file system for which you want to modify the policy.
tier
Causes the new files to be created on the primary or the secondary tier. You must enter either primary or secondary.
days
The number of days of inactivity after which files are moved from the primary tier to the secondary tier.
minacctemp
The minimum access temperature value for moving files from the secondary tier to the primary tier.
period
The number of past days used for calculating the access temperature.
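For example, assuming a tiered file system named fs1 (an illustrative name), the following policy creates new files on the primary tier, moves files that have not been accessed for 10 days to the secondary tier, and moves files back to the primary tier when their access temperature over the last 3 days exceeds 100:
Storage> tier policy modify fs1 primary 10 100 3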
To display the prune policy of a tiered file system
◆ To display the prune policy of a tiered file system, enter the following:
Storage> tier policy prune list
By default, the prune policy status of a tiered file system is disabled. The delete_after value indicates the number of days after which the files can be deleted.
To modify the prune policy of a tiered file system
◆ To modify the prune policy of a tiered file system, enter the following:
Storage> tier policy prune modify fs_name delete_after
fs_name
Name of the tiered file system for which you want to modify the prune policy.
delete_after
Number of days after which the inactive files are deleted.
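For example, to delete files on the secondary tier of a file system named fs1 (an illustrative name) after they have been inactive for 60 days:
Storage> tier policy prune modify fs1 60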
To remove the prune policy of a tiered file system
◆ To remove the prune policy of a tiered file system, enter the following:
Storage> tier policy prune remove fs_name
where fs_name is the name of the tiered file system from which you want to remove the prune policy.
To run the policy of a tiered file system
◆ To run the policy of a tiered file system, enter the following:
Storage> tier policy run fs_name
where fs_name indicates the name of the tiered file system for which you want to run a policy.
To remove the policy of a tiered file system
◆ To remove the policy of a tiered file system, enter the following:
Storage> tier policy remove fs_name
where fs_name indicates the name of the tiered file system from which you want to remove a policy.
Running the policy of a tiered file system is similar to scheduling a job to run your policies, except that the run is initiated manually. The Storage> tier policy run command moves older files from the primary tier to the secondary tier, or prunes inactive files on the secondary tier, according to the policy setting.
Best practices for setting relocation policies
Consider the following relocation policy, which has two clauses for relocating files:
■ Clause 1: If the files on the primary tier are not accessed for 10 days, relocate the files to the secondary tier.
■ Clause 2: If the access temperature of the files on the secondary tier is more than 100 in the last 15 days, then relocate the files to the primary tier.
Storage> tier policy list
FS                        Create on  Days  MinAccess Temp  PERIOD
========================= =========  ====  ==============  ======
non_pgr                   primary    10    100             15
Setting such policies where the "PERIOD" is greater than the "Days" may result in files moving back and forth between the tiers each time the Storage> tier policy run command is run. For example, suppose the file a.txt is used heavily during days 1-5 and accumulates 3000 I/O requests. After the fifth day, the file is not used for 10 days, and then the Storage> tier policy run command is issued. The file a.txt now has an access temperature of 3000/15, which is equal to 200. Because the file has not been used in the last ten days, it is moved to the secondary tier. If the Storage> tier policy run command is run again, the file moves back to the primary tier, because its access temperature of 200 is more than the minimum access temperature of 100.
A best practice is to keep the period for calculating the minimum access temperature lower than the number of days for checking the access age.
Relocating a file or directory of a tiered file system
The Storage> tier relocate command relocates a file or directory from a
secondary tier to a primary tier. This command does not relocate the NDS (Named
Data Stream) files that also include extended attributes to the primary tier.
Note: Relocation is not possible if the primary tier of the file system is full. No error
message displays.
To relocate a file or directory
◆ To relocate a file or directory, enter the following:
Storage> tier relocate fs_name dirPath
fs_name
The name of the tiered file system in which you want to relocate a file or directory. The relocation of the file or directory is done from the secondary tier to the primary tier.
dirPath
Enter the relative path of the directory (dirPath) that you want to relocate, or enter the relative path of the file (file_path) that you want to relocate.
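For example, assuming a tiered file system named fs1 with a directory dir1 residing on its secondary tier (illustrative names), the directory could be relocated to the primary tier as follows:
Storage> tier relocate fs1 /dir1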
Displaying the tier location of a specified file
To display the tier location of a specified file
◆ To display the tier location of a specified file, enter the following:
Storage> tier mapfile fs_name file_path
fs_name
Specifies the name of the file system on which the specified file resides. If the specified file system does not exist, an error message is displayed.
file_path
Specifies the path of the file whose tier location you want to display. The path is relative to the file system.
For example, to show the tier location of a.txt, which is in the root directory of the fs1 file system, enter the following:
Storage> tier mapfile fs1 /a.txt
About configuring schedules for all tiered file systems
The tier schedule commands display, modify, and remove the schedules of tiered file systems.
Table 23-2 Tier schedule commands

tier schedule modify
Modifies the schedule of a tiered file system. You can have one schedule for each tiered file system. You cannot create a schedule for a non-existent or a non-tiered file system.
See “Configuring schedules for tiered file systems” on page 337.

tier schedule list
Displays the schedules for all tiered file systems.
See “Configuring schedules for tiered file systems” on page 337.

tier schedule remove
Removes the schedule of a tiered file system.
See “Configuring schedules for tiered file systems” on page 337.
Configuring schedules for tiered file systems
To modify the schedule of a tiered file system
◆ To modify the schedule of a tiered file system, enter the following:
Storage> tier schedule modify fs_name minute hour day_of_the_month month day_of_the_week [node_name]
Note: If a previous schedule operation is still running, a new schedule is not created until the previous schedule operation is completed.
fs_name
Specifies the file system where the schedule of the tiered file system resides. If the specified file system does not exist, an error message is displayed.
minute
This parameter may contain either an asterisk (*), which implies "every minute," or a numeric value between 0-59. You can enter */(0-59), a range such as 23-43, or only the *.
hour
This parameter may contain either an asterisk (*), which implies "run every hour," or a numeric value between 0-23. You can enter */(0-23), a range such as 12-21, or only the *.
day_of_the_month
This parameter may contain either an asterisk (*), which implies "run every day of the month," or a numeric value between 1-31. You can enter */(1-31), a range such as 3-22, or only the *.
month
This parameter may contain either an asterisk (*), which implies "run every month," or a numeric value between 1-12. You can enter */(1-12), a range such as 1-5, or only the *. You can also enter the first three letters of any month (you must use lowercase letters).
day_of_the_week
This parameter may contain either an asterisk (*), which implies "run every day of the week," or a numeric value between 0-6. The number 0 is interpreted as Sunday. You can also enter the first three letters of the day of the week (you must use lowercase letters).
node_name
Specifies the node on which the schedule of the tiered file system is run. The default node name is the master node. When creating a schedule for a tiered file system, if you do not specify a node name, the schedule is run on the master node. If you specify a node name, the schedule is run on the specified node.
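For example, assuming a tiered file system named fs1 (an illustrative name), the following schedule runs the tiering policy at 2:00 a.m. every Sunday on the master node:
Storage> tier schedule modify fs1 0 2 * * 0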
To display schedules for all tiered file systems
◆ To display schedules for all tiered file systems, enter the following:
Storage> tier schedule list [fs_name]
where fs_name indicates the name of the tiered file system for which you want to display the schedule.
To remove the schedule of a tiered file system
◆ To remove the schedule of a tiered file system, enter the following:
Storage> tier schedule remove fs_name
where fs_name is the name of the tiered file system from which you want to remove a schedule.
Displaying the files that may be moved or pruned by running a policy
Before a policy runs, you can display a list of the files that the policy may move or
prune. This feature is useful as a "what if" type of analysis. The command does not
physically move any file blocks.
To display a list of files that may be moved or pruned by running a policy
◆ To display a list of files that may be moved or pruned by running a policy, enter the following:
Storage> tier query fs_name
where fs_name is the name of the tiered file system for which you want to display the list.
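For example, to preview which files a policy run would move or prune on a file system named fs1 (an illustrative name), without moving any file blocks:
Storage> tier query fs1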
Allowing metadata information on the file system to be written on the secondary tier
The Storage> tier allowmetadata yes command allows the metadata information
on the specified file system to be written on the secondary tier as well. By default,
the secondary tier is not configured for storing metadata information on the file
system. Tiers configured with this option show metaOK in the column SECONDARY
TIER of the Storage> fs list command output.
To allow metadata information on the file system to be written on the secondary tier
◆ To allow metadata information on the file system to be written on the secondary tier, enter the following:
Storage> tier allowmetadata yes fs_name
where fs_name is the name of the file system where metadata information can be written on the secondary tier.
For example:
Storage> tier allowmetadata yes fs1
ACCESS fs SUCCESS V-288-0 Configured the secondary tier for storing
metadata information.
Restricting metadata information to the primary tier only
The Storage> tier allowmetadata no command restricts the metadata information to the primary tier only. If the primary tier becomes full, write operations to the secondary tier are not served, because the metadata updates that those writes require are restricted to the primary tier.
To restrict metadata information to the primary tier only
◆ To restrict metadata information to the primary tier only, enter the following:
Storage> tier allowmetadata no fs_name
where fs_name is the name of the file system where the metadata information is restricted to the primary tier only.
Removing a tier from a file system
The Storage> tier remove command removes a tier from the file system and
releases the storage that is used by the file system back to the storage pool. All
the files on the secondary tier are relocated to the primary tier. The file system must
be online when using the Storage> tier remove command.
Note: If the storage tier to be removed contains any data residing on it, then the
tier cannot be removed from the file system.
Note: Ensure that you remove the policy first by running the Storage> tier policy
remove command prior to running the Storage> tier remove command.
See “Configuring the policy of each tiered file system” on page 332.
To remove a tier from a file system
◆ To remove a tier from a file system, enter the following:
Storage> tier remove fs_name
where fs_name specifies the name of the tiered file system whose secondary tier you want to remove.
Chapter 24
Configuring SmartIO
This chapter includes the following topics:
■ About SmartIO for solid-state drives
■ About configuring SmartIO
■ About SmartIO read caching for applications running on Veritas Access file systems
■ About SmartIO writeback caching for applications running on Veritas Access file systems
About SmartIO for solid-state drives
Solid-state drives (SSDs) are devices that do not have spinning disks. Today's
solid-state technologies, such as DRAM and NAND flash, provide faster data access,
are more efficient, and have a smaller footprint than traditional spinning disks. The
data center uses solid-state technologies in many form factors: in-server, all flash
arrays, all flash appliances, and mixed with traditional HDD arrays. Each form factor
offers a different value proposition. SSDs also have many connectivity types: PCIe,
FC, SATA, and SAS.
Due to the current cost per gigabyte of SSD devices, the best value of SSDs is not
as high capacity storage devices. The benefit of adopting SSDs is to improve
performance and reduce the cost per I/O per second (IOPS). Data efficiency and
placement is critical to maximizing the returns on any data center's investment in
solid state.
The SmartIO feature of Veritas Access enables data efficiency on your SSDs through
I/O caching. Using SmartIO to improve efficiency, you can optimize the cost per
IOPS. SmartIO does not require in-depth knowledge of the hardware technologies
underneath. SmartIO uses advanced, customizable heuristics to determine what
data to cache and how that data gets removed from the cache. The heuristics take
advantage of Veritas Access' knowledge of the characteristics of the workload.
SmartIO uses a cache area on the target device or devices. The cache area is the
storage space that SmartIO uses to store the cached data and the metadata about
the cached data. To start using SmartIO, you can create a cache area with a single
command, while the application is online.
When the application issues an I/O request, SmartIO checks to see if the I/O can
be serviced from the cache. As applications access data from the underlying volumes
or file systems, certain data is moved to the cache based on the internal heuristics.
Subsequent I/Os are processed from the cache.
SmartIO supports read and write caching for the VxFS file systems that are mounted
on VxVM volumes, in several caching modes and configurations.
See “About SmartIO read caching for applications running on Veritas Access file
systems” on page 342.
See “About SmartIO writeback caching for applications running on Veritas Access
file systems ” on page 343.
About configuring SmartIO
The SmartIO commands control the SmartIO caching functionality of the Veritas
Access software.
About SmartIO read caching for applications running on Veritas Access file systems
Veritas Access supports read caching on solid-state drives (SSDs) for applications
running on Veritas Access file systems. In this scenario, application reads are
satisfied from the cache whenever possible. As the application accesses the file
system, the file system loads data from the disk into the cache. Application writes
go to the disk in the usual way. With each write, the file system synchronizes the
cache to ensure that applications never see stale data. If a cache device fails, a
file that is cached in read mode is completely present on the disk. Therefore, the
cache failure does not affect the application I/Os for the file and the application I/Os
continue without interruption.
By default, the cache area is enabled for caching. All file systems on the system
are cached unless you explicitly disable caching for that file system. You do not
need to explicitly enable caching on a file system.
About SmartIO writeback caching for applications running on Veritas Access file systems
Veritas Access supports writeback caching on solid-state drives (SSDs) for
applications running on Veritas Access file systems. In this scenario, application
reads and writes are satisfied from the cache whenever possible.
SmartIO provides write caching in the writeback mode. In writeback mode, an
application write returns success after the data is written to the SmartIO cache,
which is usually on an SSD. At a later time, SmartIO flushes the cache, which writes
the dirty data to the disk. Writeback caching expects to improve the latencies of
synchronous user data writes. Write order fidelity is not guaranteed while flushing
the dirty data to the disk.
Writeback caching is a superset of read caching. When writeback caching is enabled,
read caching is implicitly enabled. Reads are satisfied from the cache if possible,
and the file system transparently loads file data into the cache. Both read and
writeback caching may be enabled for the same file at the same time.
The writeback caching mode gives good performance for writes, but also means
that the disk copy may not always be up to date. If a cache device fails, a file that
is cached in writeback mode may not be completely present on the disk. SmartIO
has a mechanism to flush the data from the cache device when the device comes
back online. Veritas Access provides additional protection from data loss with cache
reflection. Cache reflection is enabled by default.
Writeback caching requires a cluster with exactly two nodes. Writeback caching
cannot be enabled if the cluster has more than two nodes or if the cluster has a
single node.
When writeback caching is enabled, SmartIO mirrors the writeback data at the file
system level to the other node's SSD cache. This behavior, called cache reflection,
prevents loss of writeback data if a node fails. If a node fails, the other node flushes
the mirrored dirty data of the lost node as part of reconfiguration. Cache reflection
ensures that writeback data is not lost even if a node fails with pending dirty data.
After writeback caching is enabled on the mount point, the qualified synchronous
writes in that file system are cached. SmartIO determines if a write qualifies for
writeback caching, using criteria such as the following:
■ The write request must be PAGESIZE aligned (multiple of 4 KB).
■ The write request is not greater than 2 MB.
■ The file on which the writes are happening is not mapped.
■ Writeback caching is not explicitly disabled by the administrator.
■ Writeback caching is not qualified if the cluster has more than two nodes.
You can also customize which data is cached by adding advisory information to assist the SmartIO feature in making those determinations.
Chapter 25
Configuring replication
This chapter includes the following topics:
■ About Veritas Access file-level replication
■ How Veritas Access replication works
■ About Veritas Access sync replication
■ How Veritas Access sync replication works
■ Starting Veritas Access replication
■ Setting up communication between the source and the destination clusters
■ Setting up the file systems to replicate
■ Setting up files to exclude from a replication unit
■ Scheduling the replication
■ Defining what to replicate
■ About the maximum number of parallel replication jobs
■ Managing a replication job
■ Replicating compressed data
■ Displaying replication job information and status
■ Synchronizing a replication job
■ Behavior of the file systems on the replication destination target
■ Accessing file systems configured as replication destinations
■ Creating a recovery point objective (RPO) report
■ Replication job failover and failback
About Veritas Access file-level replication
The Veritas Access Replication solution provides high performance, scalable data
replication and is ideal for use as a content distribution solution, and for use to
create hot standby copies of important data sets.
Veritas Access Replication lets you asynchronously replicate a file system from
one node in a source cluster to another node in a destination cluster at regularly
timed intervals. This allows for content sharing, replication, and distribution.
The Veritas Access Replication functionality allows episodic replication with a
minimum timed interval update of 15 minutes and no set maximum. Unlike many
replication solutions, Veritas Access Replication also allows the destination file
system to be online for reads while replication is active.
Major features of Veritas Access Replication include:
■ Online access (read-only) to replicated data.
■ Immediate read/write access to destination replicated data in the unlikely event that the source file system goes offline for a sustained period of time.
■ Load balancing across replication links.
■ Transport failover of the replication service from one node to another.
■ Unlimited simultaneous replication operations.
The Veritas Access Replication feature is designed to copy file systems only between
Veritas Access clusters.
Note: The Veritas Access Replication feature does not support user modifications
to the target file system if replication is configured.
Figure 25-1 describes the workflow for configuring replication between two Veritas
Access clusters.
Figure 25-1
Replication workflow
How Veritas Access replication works
Veritas Access Replication is an incremental file-level replication service that runs
on top of the Cluster File System that is used by Veritas Access which is, in turn,
based on the Veritas File System (VxFS). Veritas Access Replication uses two file
system specific features: File Change Log (FCL) and Storage Checkpoint services,
to retrieve file changes between replication periods.
For a given period, the FCL records every change made to the file system. By
scanning the FCL, Veritas Access Replication quickly identifies the file(s) that have
changed and generates the modified file list. This avoids the expensive file system
scanning that is normally associated with file-based replication, and which typically
results in sub-optimal performance.
Next, Veritas Access Replication uses VxFS Storage Checkpoint's metadata
comparison feature to retrieve the modified extent list of each changed file. It does
not need to access the file data.
The Veritas Access Replication transport layer works in conjunction with, and interfaces to, the well-known rsync remote file synchronization tool. Using this existing
network transportation program makes the network configuration much easier in
the enterprise domain: the Secure Socket Shell (SSH) port (22) required by rsync
is opened by default on almost all enterprise firewalls. rsync is also a reliable solution
for a low bandwidth or unreliable link environment.
Note: Veritas Access uses the rsync protocol to provide transportation of Veritas
Access Replication encapsulated files. The use of rsync is not exposed in Veritas
Access, and cannot be administered outside of the Veritas Access Replication
feature set.
About Veritas Access sync replication
The Veritas Access sync replication solution provides high performance, robustness,
ease of use, and synchronous replication capability which is designed to contribute
to an effective disaster recovery plan.
Veritas Access sync replication lets you synchronously replicate volumes from one
node in the source cluster to another node in the destination cluster. The
synchronous replication enables you to maintain a consistent copy of application
data at one remote location. It replicates the application writes on the volumes at
the source location to a remote location across any distance.
If a disaster occurs at the source location, you can use the copy of the application
data at the remote location and restart the application at the remote location. The
host at the source location on which the application is running is known as the
primary host and the host at the target location is known as the secondary host.
The volumes on the primary host must be synchronized initially with the volumes
on the secondary host.
Major features of Veritas Access sync replication include:
■ Performs replication of volumes in synchronous mode, ensuring data integrity and consistency.
■ Maintains write-order fidelity, which applies writes on the secondary host in the same order that they were issued on the primary host.
■ Enables easy recovery of the application at the remote site.
■ Provides both a command-line interface (CLISH) and a graphical user interface (GUI) for online management of the synchronous replication.
See the sync(1) manual page for more information.
How Veritas Access sync replication works
The Veritas Access synchronous replication sends writes to the secondary in the
order in which they were received on the primary. The secondary receives writes
from the primary and writes to local volumes.
While replication is active, you should not use the application directly on the data
volumes on the secondary. The application on the secondary is used only if a
disaster occurs on the primary. If the primary fails, the application that was running
on the primary can be brought up on the secondary, and the primary can use the
data volumes on the secondary.
Starting Veritas Access replication
This section lists the specific commands that are needed to run Veritas Access
Replication on your clusters.
Ensure the following before starting replication:
■ Before you set up your clusters for replication, you must first identify which is the source cluster and which is the destination cluster. All of the commands are performed on the source cluster first.
■ Make sure both the source cluster and the destination cluster have the same version of Veritas Access.
■ To use Veritas Access Replication, you must first create an online file system on the Veritas Access source cluster and an online file system on the Veritas Access destination cluster.
■ Assign a virtual IP (VIP) address to both the source and the destination clusters. The Veritas Access Replication service requires VIP addresses not already in use for the two clusters to communicate.
To start Veritas Access Replication on the source cluster
1 To bind a virtual IP address for the replication service on the source cluster, enter the following:
Replication> config bind ip_addr [device] [netmask]
ip_addr
Virtual IP address for the replication service on the source cluster.
device
The public network interface name that you want the replication IP address to use.
netmask
Netmask for the replication IP address.
2 To start the replication service, enter the following on the source node:
Replication> service start [nodename]
nodename
The name of the node in the local cluster where you want to start the replication service.
3 To check the status of the replication service, enter the following:
Replication> service status
4 To confirm that the IP address is up and running, enter the following:
Replication> config show ip
Note: Alternately, you can use the network> ip addr show command to confirm that the IP address is up and running.
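For example, assuming 192.168.10.10 is an unused virtual IP address on the source cluster and pubeth0 is its public network interface (both values are illustrative), the replication service might be brought up as follows:
Replication> config bind 192.168.10.10 pubeth0 255.255.255.0
Replication> service start
Replication> service status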
To start Veritas Access Replication on the destination cluster
1 To bind a virtual IP address for the replication service on the destination cluster, enter the following:
Replication> config bind ip_addr [device] [netmask]
ip_addr
Virtual IP address for the replication service on the destination cluster.
device
The public network interface name that you want the replication IP address to use.
netmask
Netmask for the replication IP address.
2 To start the replication service, enter the following on the destination node:
Replication> service start [nodename]
nodename
The name of the node in the local cluster where you want to start the replication service.
3 To check the status of the replication service, enter the following:
Replication> service status
4 To confirm that the IP address is up and running, enter the following:
Replication> config show ip
You next need to set up communication between the source and the destination
clusters.
Setting up communication between the source and the destination clusters
You need to set up communication between your source and your destination
clusters.
Make sure that you already created an online file system on the Veritas Access
source cluster and an online file system on the Veritas Access destination cluster.
Veritas Access Replication authentication strategy is based on RSA-key
authentication, and both the source and the destination clusters have to export their
replication public keys. The source cluster imports the destination cluster's public
key and the destination cluster imports the source cluster's public key.
After you have determined which two Veritas Access clusters to use, you need to
authenticate them.
The Replication> config commands must be executed in a specific order.
■ Use the Replication> config del_keys command only after the Replication> config deauth command, or it fails.
■ You can only run the Replication> config unbind command (to unbind the virtual IP) after you have run the Replication> service stop command.
■ You need to run the Replication> config bind command (to bind the virtual IP) before you can run the Replication> service start command.
■ You need to run the Replication> config export_keys and Replication> config import_keys commands to export and import the keys of both the source and the destination clusters.
■ You can only run the Replication> config auth command after both the source and the destination clusters have imported each other's keys.
■ You need to run the Replication> config auth command to create a link from every cluster to any remaining cluster that is used for replication, irrespective of its role as a source or a destination cluster.
After the source and the destination clusters have successfully imported each other's
public keys, you need to run the Replication> config auth command on the
source cluster to complete the authentication between the two clusters. This
command checks the two-way communication between the source and the
destination cluster, and authenticates the clusters allowing the Veritas Access
Replication service to begin.
Note: The Replication> config auth command must be executed from the
source cluster.
This section provides a walk-through for the creation and export/import of these
encrypted keys for both the source and the destination cluster.
Note: Without the correct authentication of the source and the destination encryption
keys, Veritas Access Replication does not function correctly.
To export the source cluster's key to the destination cluster
1
To export the source cluster's key to the destination cluster, enter the following:
Replication> config export_keys [URL]
URL
The location you want to copy the public keys to.
If you do not want to enter a URL, you can copy the output from the
Replication> config export_keys command into the Replication> config
import_keys command at the destination cluster.
By default, the output is displayed to your computer screen.
The SCP and FTP protocols are supported.
2
To import the source cluster's key to the destination cluster, enter the following:
Replication> config import_keys [URL/keyfile]
URL
The location you want to copy the public keys from.
keyfile
The file name of the key that is generated by the export.
If you did not enter a URL during the Replication> config export_keys
command, you can cut and paste the output and enter it into the Replication>
config import_keys command.
3
To verify that the key has been imported correctly, enter the following:
Replication> config show
To export the destination cluster's key to the source cluster
1
To export the destination cluster's key to the source cluster, enter the following:
Replication> config export_keys [URL]
URL
The location you want to copy the public keys to.
The SCP and FTP protocols are supported.
If you do not want to enter a URL, you can cut and paste the output from the
Replication> config export_keys command to the Replication> config
import_keys command. By default, the output is displayed to your computer
screen.
2
To import the destination cluster's key to the source cluster, enter the following:
Replication> config import_keys [URL/keyfile]
URL
Enter the URL of the location you want to copy the public keys from.
keyfile
Enter the file name of the key that is generated by the export.
If you did not enter a URL during the Replication> config export_keys
command, you can cut and paste the output and enter it into the Replication>
config import_keys command.
3
To verify that the key has been imported correctly, enter the following:
Replication> config show
To authenticate the source and the destination clusters for replication
1 This command should be executed on the source cluster as well as on the destination cluster. To authenticate the public keys on the source and the destination clusters, enter the following:
Replication> config auth conIP link_name
conIP
Enter the destination cluster console IP address.
link_name
Both the source cluster and the destination cluster need to be assigned a unique identifier (name). This identifier is used to identify the link that is established between the source and the destination clusters. You can use the link name instead of the virtual IP addresses of the source and the destination clusters when using the other replication commands. For example: Pune_Shanghai.
2 To confirm the authentication, enter the following:
Replication> config show
Note: These steps must be executed on the destination side cluster to authenticate
the public keys on the source and the destination cluster.
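For example, assuming the destination cluster console IP address is 10.10.10.20 (an illustrative address) and the link is named Pune_Shanghai, enter:
Replication> config auth 10.10.10.20 Pune_Shanghai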
Once you have configured the clusters and links, you need to set up the file systems
you want to replicate.
Setting up the file systems to replicate
You need to set up the file systems you want to replicate using the Replication>
repunit commands. The Replication> repunit commands let you define the
type of data that you replicate from the source cluster to the destination cluster. All
files and folders belonging to a replication unit are replicated together from the
source cluster to the destination cluster.
Note: The maximum number of replication units supported in Veritas Access
Replication is 128.
Make sure that you already set up communication between your source and the
destination clusters.
See “Setting up communication between the source and the destination clusters”
on page 351.
A replication unit is defined as an ordered set of entries, where each entry is one
of the following:
■ A single file system
■ A single subdirectory
■ A single file
Note: The replication source has to be one of the entry types shown. It cannot be
a snapshot or a Storage Checkpoint (ckpt).
Veritas Access Replication requires that the source and the destination replication
units of a job definition have the same type of ordered entries, that is, every entry
pair (one entry from the source and one entry from the destination replication unit)
must be of a similar type.
Both can be files, or both can be directories, as shown in the following example:
Replication unit Name     Replication unit Entries
=====================     ========================
ru1                       fs1,fs2/dir1,fs2/f1
ru2                       fs4,fs6/dir2,fs5/f2
The entry is identified by the file system name, optionally followed by a slash '/',
followed by the path of the directory or the file inside the file system. Member entries
are ordered inside a replication unit and such ordering information is used to
determine the replication entity pair mapping from the source replication unit to the
destination replication unit.
Note: Make sure that the paths in the destination replication unit exist in the
destination cluster.
Note: The commands in this section apply only to the source replication unit.
To create a replication unit
1 From the source cluster, to create a replication unit, enter the following:
Replication> repunit create repunit_name repunit_entry[,repunit_entry,...]
repunit_name
The name of the replication unit you want to create.
repunit_entry
The file system, file, folder, or directory.
Note: Destination replication units should be created only at the source cluster using the Replication> repunit create command.
2 To confirm the creation of the replication unit, enter the following:
Replication> repunit show verbose
You can use the Replication> repunit add_entry, Replication> repunit
modify_entry, Replication> repunit remove_entry, and Replication> repunit
destroy commands to manage your replication units.
Note: The Replication> repunit modify_entry, Replication> repunit
remove_entry, and Replication> repunit destroy operations are not allowed
for the replication units that are included in any job definitions.
Setting up files to exclude from a replication unit
Once you have set up the files systems you want to replicate, you can define a set
of directories or files to exclude from a replication unit. This step is optional. The
exclunit entry has higher priority than the repunit entry. If any file name matches
the exclunit entry, the file is not replicated to the target.
To work with excluding units:
■ Use the Replication> exclunit create command to name the excluding unit and configure the directories and files you want to exclude from a replication. The excluding unit you create can be used in multiple replication jobs. A single excluding unit can span multiple directories.
■ Use the Replication> job exclude command to add the excluding unit to a replication job. You cannot add an excluding unit to a job that is active. You must disable the job first.
■ You can use the Replication> exclunit add_entry, Replication> exclunit modify_entry, and Replication> exclunit remove_entry commands to make changes to an excluding unit, provided the excluding unit you want to modify is not included in any job definitions.
■ Use the Replication> job show command to show which excluding units are configured for a job. Use the Replication> exclunit show command to show the names and contents of all excluding units that are defined for the cluster.
■ Use the Replication> exclunit destroy command to permanently delete the excluding unit. You can only destroy an excluding unit if the excluding unit you want to destroy is not included in any job definitions.
If a replication is defined for a directory, an excluding unit should be a subset of
that directory. The excluding unit cannot be the same directory as the replication
and it cannot be a parent directory of the replication. For example, if a replication
is configured for fs1/dir1/dir2, a valid exclusion could be dir1/dir2/file or
dir1/dir2/dir3, but not /dir1 (the parent directory for the replication).
By default, Veritas Access excludes some common directories and files from all replication units. These directories and files include:
■ lost+found
■ .placement_policy.xml
■ quotas
■ quotas.grp
■ quotas.64
■ quotas.grp.64
In addition, you can use the Replication> exclunit commands to specify
additional directories and files to exclude.
The directories and files you specify for an excluding unit are applied based on the
overall definition of the replication. For example, a replication job that contains an fs1 replication unit and a dir3 excluding unit replicates all the files in fs1, except for the files in fs1/dir3.
To create an excluding unit
1 To create an excluding unit, enter the following:
Replication> exclunit create exclunit_name exclunit_entry[,exclunit_entry,...]
exclunit_name
Enter the name of the excluding unit.
exclunit_entry
Enter the comma-separated list of directories and files you want to exclude from a replication.
2 To confirm the creation of the excluding unit, enter the following:
Replication> exclunit show verbose
You can use the Replication> exclunit add_entry, Replication> exclunit
modify_entry, Replication> exclunit remove_entry, and Replication>
exclunit destroy commands to manage your excluding units.
Note: The Replication> exclunit add_entry, Replication> exclunit
modify_entry, Replication> exclunit remove_entry, and Replication>
exclunit destroy operations are not allowed for excluding units that are included
in any job definitions.
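For example, assuming a replication job named job1 whose replication unit covers a file system that contains tmp and scratch directories (all names are illustrative), those directories could be excluded as follows:
Replication> exclunit create tmp_excl tmp,scratch
Replication> job exclude job1 tmp_excl
Remember that the job (job1 in this sketch) must be disabled before the excluding unit can be added to it.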
Scheduling the replication
You use the Replication> schedule commands to create a schedule for replicating
files from the source to the destination cluster.
Veritas Access Replication supports periodic replications, where the data gets
replicated from the source to the destination cluster at regular intervals as defined
by the schedule. Veritas Access Replication uses the following parameters to
schedule the replication jobs: minute, hour, day-of-the-month, month, and
day-of-the-week.
Make sure that you already set up the file systems you want to replicate.
See “Setting up the file systems to replicate” on page 355.
To create a replication schedule
â—†
To create a replication schedule, enter the following:
Replication> schedule create schedule_name minute
[hour] [day_of_the_month] [month] [day_of_the_week]
schedule_name
Specify the name of the schedule to be created.
minute
Enter a numeric value between 0-59, or an asterisk (*), which
represents every minute. This variable is not optional.
hour
Enter a numeric value between 0-23, or an asterisk (*), which
represents every hour.
day_of_the_month
Schedule the day of the month you want to run the replication.
Enter a numeric value between 1-31, or an asterisk (*), which
represents every day of the month.
month
Schedule the month you want to run the replication. Enter a
numeric value between 1-12, or an asterisk (*), which
represents every month. You can also use the names of the
month. Enter the first three letters of the month (not case
sensitive).
day_of_the_week
Schedule the day of the week you want to run the replication.
Enter a numeric value between 0-6, or an asterisk (*), which
represents every day of the week. Sunday is interpreted as
0. You can also enter the first three letters of the week (you
must use lower case letters).
You can enter an interval (two numbers separated by a hyphen) for the minute,
hour, day-of-the-month, month, and day-of-the-week parameters. For example, if you
want to run the schedule between 1:00 a.m. and 4:00 a.m., you can enter a value
of 1-4 for the hour variable. The range is inclusive.
The parameters also accept a set of numbers separated by commas. For
example, 1,3,5,7 or 1-4,5-10.
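For example, to create a schedule (the name sched1 is illustrative) that runs at the top of every hour between 1:00 a.m. and 4:00 a.m. every day, you might enter:
Replication> schedule create sched1 0 1-4 * * *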
To display the list of schedules
◆
To display the schedule you have set up for replication, enter the following:
Replication> schedule show
You can also use the Replication> schedule modify and Replication>
schedule delete commands to manage your replication schedules.
Note: The Replication> schedule modify and Replication> schedule delete
operations are not allowed for the schedules that are included in any job definition.
You next need to define what is replicated.
See “Defining what to replicate” on page 361.
Defining what to replicate
You use the Replication> job commands to set up a job definition. This defined
job determines what to replicate and when, using the settings from the previous
commands.
Make sure that you created a schedule for replicating files from the source to the
destination cluster.
See “Scheduling the replication” on page 359.
To set up the replication job
1
To create a replication job, enter the following:
Replication> job create job_name src_repunit tgt_repunit
link_name schedule_name
job_name
Specify a name for the replication job you want to create.
src_repunit
Specify the source replication unit. The replication unit
determines the exact item (such as a file system) that
you want to replicate.
tgt_repunit
Specify target replication units.
link_name
Specify the link name used when you ran the
Replication> config auth command between
the local cluster and the remote cluster. Both the source
cluster and the destination cluster need to be assigned
a unique identifier (name). This identifier is used to
identify the link that is established between the source
and the destination clusters. You can use the link name
instead of the virtual IP addresses of the source and
the destination clusters when using the other replication
commands.
schedule_name
Specify the name of the replication schedule you want
to apply to the replication job.
2
To add an excluding unit to the job, enter the following command. This step is
optional.
Replication> job exclude job_name exclunit_name
3
By default, the job is disabled. To enable the job, enter the following:
Replication> job enable job_name
4
To check if the job was enabled, enter the following:
Replication> job show [job_name]
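For example, assuming a source replication unit ru1, a target replication unit ru2, a link named link1, and a schedule named sched1 that were created with the earlier commands (all of these names are illustrative), the sequence might look like the following:
Replication> job create job1 ru1 ru2 link1 sched1
Replication> job enable job1
Replication> job show job1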
About the maximum number of parallel replication
jobs
The maximum number of replication jobs is 64, but there are stricter limits on the
number of replication jobs that can be running in parallel at the same time.
Replication uses a RAM-based file system for storing the transit messages. Each
GB of this RAM-based file system can accommodate up to eight parallel running
jobs. The default size of this file system depends upon the amount of physical
memory of the node on which replication is running. If the physical memory is less
than 5 GB, replication limits its maximum usage for storing messages to 1 GB of
memory, which means the user can run up to eight replication jobs in parallel at
the same time. If the physical memory is between 5 GB and 10 GB, replication limits
its maximum usage for storing messages to 2 GB of memory, which means you
can run up to 16 replication jobs in parallel. If the physical memory is greater than
10 GB, replication limits its maximum usage for storing messages to 4 GB of
memory, which means you can run up to 32 replication jobs in parallel at the same
time.
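For example (an illustrative calculation based on the limits described above), a node with 8 GB of physical memory falls in the 5 GB to 10 GB range, so replication limits its transit file system to 2 GB of memory, and 2 GB x 8 jobs per GB allows up to 16 replication jobs to run in parallel.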
Managing a replication job
You can manage a replication job using the Replication> job commands. The
commands are required only on the source system.
The Replication> job enable, Replication> job sync, Replication> job
disable, Replication> job abort, Replication> job pause, and Replication>
job resume commands change the status of an existing replication job.
You can use the Replication> job modify and Replication> job destroy
commands to modify or destroy a replication job definition.
The Replication> job enable command starts replication immediately and
initiates replication after every subsequent set frequency interval. When a replication
job is created, it is disabled by default, and you must enable the job to start
replication.
To enable a replication job
◆
To enable a replication job, type the following command:
Replication> job enable job_name
job_name
Specify the name of the replication job you want to enable.
At each frequency interval, a fresh file system Storage Checkpoint is taken and
replication is started against the new Storage Checkpoint. If a previous replication
run has not completed, a new Storage Checkpoint is not taken and the current run
is skipped.
Note: Running the Replication> job enable command on a previously aborted
replication job automatically restarts the job.
The Replication> job sync command lets you start a replication job, but then
stops the replication job after one iteration (full or incremental) is complete. You
can use this command to recover from the secondary site in the event that the
primary file system is completely destroyed. This command can also be used if you
want to run a replication job at a predefined time using a script or a cron job.
See “Synchronizing a replication job” on page 368.
The Replication> job disable command drops the replication job from the
schedule and waits for any already running iterations to complete. The Replication>
job disable command disables a job definition which is in one of these states:
ENABLED, PAUSED, or FAILED. This process can take some time if the network
is slow or if a large amount of data has changed since the last replication run.
To disable a replication job
◆
To disable a replication job, type the following command:
Replication> job disable job_name
job_name
Specify the name of the replication job you want to stop.
The Replication> job abort command forcefully cancels a replication job even
if it is in progress. Aborting a replication job may leave Storage Checkpoints mounted
on the source system and the target file system may be left in an intermediate state.
To abort a replication job
◆
To abort a replication job, type the following command:
Replication> job abort job_name
job_name
Specify the name of the replication job you want to abort.
The Replication> job pause command immediately stops the replication job.
You must use the Replication> job resume command to resume the replication
job from where it was paused. When replication is resumed, the replication job
replicates the set of files that were selected before the job was paused, and attempts to replicate
as much of the latest data as possible. This action allows the customer to have two
recovery point objectives (RPO). When the replication job is paused, the replication
frequency option is disabled. Once the replication job is resumed, the frequency
option resumes for subsequent iterations. The pause and the resume functions let
you manage the replication job based on workload requirements.
To pause and resume a replication job
1
To pause a replication job, type the following command:
Replication> job pause job_name
where job_name is the name of the replication job you want to pause.
2
To resume a replication job, type the following command:
Replication> job resume job_name
where job_name is the name of the replication job you want to resume.
Note: You cannot start or sync a paused job. You can abort a paused job. However,
if synchronization is performed on a paused job that has been aborted, the last
RPO for the paused job is not available.
The Replication> job modify command lets you modify the debugging setting or
the tunables on a replication job definition.
The addition or removal of a file system from the source replication unit or the
destination replication unit is not supported. To remove a specific file system from
the replication unit, you must destroy the replication job and re-create the replication
job with the new set of file systems in the replication unit. To add a specific file system
to an existing replication unit, you can either create a new replication job with a
new source replication unit and target replication unit, or destroy the replication job
and re-create it with the new set of file systems in the replication unit to reuse the
same job name.
The Replication> job modify debug command lets you enable or disable
debugging on a given job.
To modify debugging on a replication job
◆
To modify debugging on a replication job definition, enter the following
command:
Replication> job modify debug job_name on|off
job_name
Specify the replication job name you want to modify.
The Replication> job modify tunables command allows you to modify the job
configuration to use multiple network connections (sockets) for replicating data from
source to target. In configurations where WAN latency is high, it is recommended
to use multiple connections for significantly increased throughput. After the tunables
are set for a job, only one job is supported.
To modify tunables on a replication job
◆
To modify tunables on a replication job definition, enter the following command:
Replication> job modify tunables job_name netconn rw_count
job_name
Specify the replication job name you want to modify.
netconn
Specify the number of connections.
rw_count
Specify the number of threads.
An increased number of connections is effective when there is a relatively small number
of large files. For a large number of small files, full-sync performance may be slower
with an increased number of connections.
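For example, to configure an illustrative job named job1 to use 8 connections and 16 threads (values shown only for illustration; choose values that suit your WAN), you might enter:
Replication> job modify tunables job1 8 16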
The Replication> job destroy command destroys a job definition. This command
completely removes the specified job from the configuration, cleans up any saved
job-related statistics, and removes any Storage Checkpoints. The replication job
must be disabled before the job definition can be destroyed.
To destroy a replication job definition
◆
To destroy a job definition, enter the following command:
Replication> job destroy job_name
Where job_name is the name of the job definition you want to delete. Make
sure that the job is not enabled.
Using the Replication> job destroy command with the force option removes
the local job irrespective of the job state, and all replication units are
disassociated from the job. Cluster configurations, which are part of the job,
are not modified.
Note: When setting up replication, Veritas advises you not to make any
modifications or deletions on the target side of the file system. In the event that
some or all of the target data is modified or deleted, you must re-create the
replication job from the source cluster to resume replication services.
To re-create a replication job
1
To re-create a replication job, you must first delete the job definition. Enter the
following command on the source cluster:
Replication> job destroy job_name
Where job_name is the name of the job definition you want to delete. Make
sure that the job is not enabled.
2
Re-create the job definition:
Replication> job create job_name src_repunit tgt_repunit
link_name schedule_name
You can reuse the source replication unit, target replication unit, link, and
schedule names.
Replicating compressed data
Replication can replicate any file that is compressed at the source with the vxcompress
utility to the target, while maintaining the same compression
characteristics. The compression characteristics include the algorithm, the strength,
and the block size. The data is read in the compressed format from the source,
sent over the network, and written to the target system in the same format. This
form of compression reduces the amount of storage that is required on the target
system.
Note: Compressed files that are created using archive utilities such as tar or zip
are treated as normal data files and are not compressed during replication.
Displaying replication job information and status
The Replication> job show and Replication> job status commands display
job definition information, which allows you to confirm any changes that are made
to your replication job and view current job status.
The Replication> job show command displays a single job definition, or all of the
job definitions for a destination cluster.
To display the job definitions
◆
To display the job definitions, enter the following command:
Replication> job show [job_name]
job_name
Enter the name of the job you want to display. If you want to list
all of the job definitions, enter the command without a job name.
The Replication> job status command displays the status of one or all of the
jobs that are copied during replication and the time the replication occurred.
To display the status of a replication job
◆
To display the status of a replication job or all the jobs, enter the following
command:
Replication> job status job_name
job_name
Enter the name of the job you want to display status for.
If a job is not specified, the status of all the jobs is displayed.
If the Job State displays Trying_to_enable, then the job enable command
is in progress. Check the job status again after a few minutes.
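For example, for an illustrative job named job1, you might enter:
Replication> job show job1
Replication> job status job1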
Synchronizing a replication job
To synchronize an enabled replication job
◆
To synchronize an enabled replication job, enter the following:
Replication> job sync job_name
job_name
Specify the name of the replication job you want to synchronize.
Behavior of the file systems on the replication
destination target
Destination file systems are mounted as read-write. Read-only access is allowed,
but you are not expected to modify the destination file system content. While
replication occurs, destination file systems may not be in a consistent state. To
provide consistent images of the destination file systems at different stages of the
replication, the replication service creates and manages Storage Checkpoints of
each destination file system.
The replication service creates a new destination Storage Checkpoint:
■ Before the first session (before a full-sync)
■ After every successful replication session (after every incremental sync)
Storage Checkpoints are automatically mounted under the .checkpoint directory
inside the target file system, for example:
/vx/target_mount/.checkpoint/ckpt_name
where target_mount is the name of the target file system and ckpt_name is the
name of the Storage Checkpoint.
You can use the Storage> snapshot list command to view these Storage
Checkpoints and you can use Veritas Access commands to export any of these
Storage Checkpoints for read-only purposes. The replication Storage Checkpoint
names are prefixed with vxfsrepl_ and also contain the Storage Checkpoint
creation time.
Accessing file systems configured as replication
destinations
Destination Storage Checkpoints are automatically mounted and therefore cannot
be brought online or taken offline using the Storage> snapshot commands. The
destination Storage Checkpoints can only be accessed through the .checkpoint
directory. This accessibility also applies to any user created Storage Checkpoints
on the replication destination file system.
Creating a recovery point objective (RPO) report
A recovery point is the last stable version of a set of data that can be used to recover
from a site failure. A recovery point objective (RPO) is a goal set by an organization
to define how much data an organization is willing to lose in the event of an outage.
RPO is measured in terms of minutes (or hours) of data lost. For example, a
company might have an RPO goal of two hours, which means no more than two
hours of data should be lost because of a site failure.
Veritas Access replication tracks the times that are associated with replication jobs
and enables you to generate an RPO report to help you manage your organization's
RPO objectives. The RPO report shows the current RPO times for a replication job.
The report also shows the average RPO times over the last 24 hours, the last week,
and the last month.
To create an RPO report
◆
To create an RPO report, enter the following:
Replication> rpo show job_name [duration]
job_name
Enter the name of the replication job that is associated with the
RPO report.
duration
To show the average RPO over a specific number of hours, enter
a numeric value to indicate the duration in hours. For example,
enter 5 to show the average RPO values over the last five hours.
The range of hours you can specify is 1 to 712.
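For example, to show the average RPO over the last 24 hours for an illustrative job named job1, you might enter:
Replication> rpo show job1 24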
Replication job failover and failback
Typically, the source cluster drives a replication session. However, in some
situations, it may be useful for the destination cluster to drive the replication session.
Veritas Access supports a failover and a failback feature for replication jobs. This
feature enables control of replication jobs to be temporarily relocated from the
source cluster to the destination (target) cluster.
Job failover and failback is useful for:
■ Planned failover
  In cases where the source cluster is taken down for routine maintenance or for
  moving applications to another cluster, a planned failover procedure is available
  for moving replication jobs from the source cluster to the destination cluster.
■ Disaster recovery
  In cases where the source cluster fails unexpectedly, an unplanned failover
  procedure is available for moving replication jobs to the destination cluster.
Note: In the event of a planned or unplanned failover from the source cluster to the
destination cluster, there should be at least one successful sync attempt. The
successful sync ensures that a consistent point in time image is present on the
destination cluster that can be used for the failover.
With job failover and failback, you use the Replication> job failover command
to move control from the source cluster to the destination cluster. You use the
Replication> job failback command to restore control to the source cluster. The link_name
is the link of one of the destination clusters. The link_name argument can be empty
when the source cluster is not available, in which case the job failover can be
executed from one of the destination clusters.
Essentially, job failover takes job and replication unit definitions from the replication
database on the source cluster and copies them to the replication database on the
destination cluster.
Warning: Job failover assumes that all replication job names and replication unit
names are unique across all Veritas Access clusters on your network. Before you
use the replication failover feature, make sure that these names are unique.
After a job failover or failback, you must manually start or enable the replication job
to start pre-configured schedules. Link throttle information should be reconfigured
after the job failover or failback.
Job failover does not automatically move the NFS or the CIFS share information
that is associated with the failed-over replication units from the source cluster to the
destination cluster. You must configure the share information manually on the destination cluster.
Table 25-1    Job failover and failback commands

Command        Definition
job failover   Transfer control of a replication job from the source cluster to the destination cluster.
job failback   Return control of a replication job from the destination cluster to the source cluster.
Process summary
The steps you take for job failover and failback vary depending on the type of failover
or failback you perform. Failover and failback types include:
■ Planned failover
■ Unplanned failover
■ Failback after a planned failover
■ Failback after an unplanned failover
Each process is summarized in the following sections. In most situations, you use the
planned failover and planned failback processes.
Overview of the planned failover process
For planned failovers, most of the failover steps are executed from the source
cluster.
■ From the source cluster:
  ■ Stop all applications that access the replicated files. This step is
    recommended, but not required.
  ■ Use the Replication> job sync job_name command to execute the job
    and make sure files on the source cluster and destination cluster are
    synchronized.
  ■ Use the Replication> job disable job_name command to disable the
    job.
  ■ Use the Replication> job failover force=yes/no job_name
    current_cluster_link command to move control of the job from the source
    cluster to the destination cluster.
■ From the destination cluster:
  ■ Use the Replication> job enable job_name command to enable the job
    or run a sync on the destination cluster.
  ■ Use the Replication> job sync job_name command to ensure that the
    replication job is in a well-defined state and incremental replication can be
    resumed.
Once the job is failed over, job control remains on the destination cluster until a
planned failback is activated.
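For example, for an illustrative job named job1 with a destination cluster link named link2, a planned failover might consist of the following commands (the first three run on the source cluster, the last two on the destination cluster):
Replication> job sync job1
Replication> job disable job1
Replication> job failover force=no job1 link2
Replication> job enable job1
Replication> job sync job1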
Overview of the planned failback process
After a job failover has been accomplished and the source cluster is ready to take
back control of the replication task, you can use the job failback feature to release
control from the destination cluster and return it to the source cluster.
■ From the destination cluster:
  ■ Stop all applications that access the replicated files. This step is
    recommended, but not required.
  ■ Use the Replication> job sync job_name command to execute the job
    and make sure files on the source cluster and destination cluster are
    synchronized.
  ■ Use the Replication> job disable job_name command to disable the
    job.
■ From the source cluster:
  ■ Use the Replication> job failback force=yes/no job_name
    current_cluster_link command to move control of the job from the
    destination cluster back to the original source cluster.
  ■ Use the Replication> job enable job_name command to enable the job
    or run a sync on the source cluster.
  ■ Use the Replication> job sync job_name command to ensure that the
    replication job is in a well-defined state and incremental replication can be
    resumed.
Overview of the unplanned failover process
In some cases (for example, unexpected equipment failure), you may need to
execute an unplanned failover for replication jobs. The unplanned failover process
differs from the planned failover process.
This section shows an overview of the steps you take to perform an unplanned
failover.
For unplanned failovers, all the commands are executed from the destination cluster.
■ Make sure that you are logged into the destination cluster.
■ Use the Replication> job disable job_name command to disable the job
  from the destination cluster.
■ Use the Replication> job failover force=yes/no job_name command to
  fail over the job.
Overview of the unplanned failback process
After an unplanned failover, when the source cluster comes up, you can use the
following unplanned failback process to return control to the original source cluster:
■ Make sure that you are logged into the source cluster.
■ Use the Replication> job failover force=yes/no job_name
  current_cluster_link command to configure the current source cluster as a
  valid target to the new source cluster. This command should be executed from
  the old source cluster.
■ Use the Replication> job sync job_name command from the new source
  cluster to synchronize file system data with the newly added destination cluster.
■ Use the Replication> job failback force=yes/no job_name
  current_cluster_link command to move control of the replication job from
  the destination cluster back to the source cluster.
■ Use the Replication> job sync job_name command to ensure that the
  replication job is in a well-defined state and incremental replication can be
  resumed.
Note: An administrator can use the Replication> job destroy force command
to clean up local job configuration. Configuration of the other clusters, which are
part of the job, will not be modified, and any replication units will be disassociated
from the job. The Replication> job destroy force and Replication> repunit
destroy force commands should be used in the event of an unrecoverable
configuration or replication direction mismatch.
Chapter 26
Using snapshots
This chapter includes the following topics:
■ About snapshots
■ Creating snapshots
■ Displaying snapshots
■ Managing disk space used by snapshots
■ Bringing snapshots online or taking snapshots offline
■ Restoring a snapshot
■ About snapshot schedules
■ Configuring snapshot schedules
■ Managing automated snapshots
About snapshots
A snapshot is a virtual image of the entire file system. You can create snapshots
of a parent file system on demand. Physically, it contains only data that corresponds
to the changes that are made in the parent, and so consumes significantly less
space than a detachable full mirror.
Note: You cannot create a snapshot of a scale-out file system.
See “About scale-out file systems” on page 208.
Snapshots are used to recover from data corruption. If files, or an entire file system,
are deleted or become corrupted, you can replace them from the latest uncorrupted
snapshot. You can mount a snapshot and export it as if it were a complete file
system. Users can then recover their own deleted or corrupted files. You can limit
the space snapshots consume by setting a quota on them. If the total space that
snapshots consume exceeds the quota, Veritas Access rejects attempts to create
additional ones.
You can create a snapshot by either using the snapshot create command or by
creating a schedule to create the snapshot at a specified time.
Creating snapshots
The snapshot create command quickly creates a persistent image of a file system
at an exact point in time. Snapshots minimize the use of disk space by using a
Storage Checkpoint within the same free space available to the file system. After
you create a snapshot of a mounted file system, you can also continue to create,
remove, and update files on the file system without affecting the logical image of
the snapshot. A snapshot preserves not only the name space (directory hierarchy)
of the file system, but also the user data as it existed at the moment the file system
image was captured.
You can use snapshots in many ways. For example, you can use them to:
■ Create a stable image of the file system that can be backed up to tape.
■ Provide a mounted, on-disk backup of the file system so that end users can
  restore their own files in the event of accidental deletion. This is especially useful
  in a home directory, engineering, or email environment.
■ Create an on-disk backup of the file system that can be used in addition to a
  traditional tape-based backup to provide faster backup and restore capabilities.
To create a snapshot
◆
To create a snapshot, enter the following:
Storage> snapshot create snapshot_name fs_name [removable]
snapshot_name
Specifies the name for the snapshot.
Note: The following are reserved words for snapshot name:
flags, ctime, and mtime.
fs_name
Specifies the name for the file system.
removable
Valid values are:
■ yes
■ no
If the removable attribute is yes, the snapshot is removed
automatically if the file system runs out of space.
The default value is removable=no.
For example:
Storage> snapshot create snapshot1 fs1
100% [#] Create snapshot
Displaying snapshots
You can display all snapshots, or the snapshots taken of a specific file system or
specific schedule of a file system. The output displays the snapshot name and the
properties of the snapshots such as creation time and size.
To display snapshots
◆
To display snapshots, enter the following:
Storage> snapshot list [fs_name] [schedule_name]
fs_name
Displays all of the snapshots of the specified file system. If you do not specify a file system, snapshots
of all of the file systems are displayed.
schedule_name Displays the schedule name. If you do not specify a schedule name, then snapshots created under
fs_name are displayed.
Storage> snapshot list
Snapshot                               ctime                 FS   mtime                 Status   Removable  Preserved  Size
====================================   ====================  ===  ====================  =======  =========  =========  ======
snap2                                  2009.Jul.27.02:40:43  fs1  2009.Jul.27.02:40:57  offline  no         No         190.0M
sc1_24_Jul_2009_21_34_01_IST           2009.Jul.24.21:34:03  fs1  2009.Jul.24.21:34:03  offline  yes        No         900.0M
sc1_24_Jul_2009_19_34_02_IST           2009.Jul.24.19:34:04  fs1  2009.Jul.24.19:34:04  offline  yes        No         7.0G
presnap_sc1_24_Jul_2009_18_34_02_IST   2009.Jul.24.18:34:04  fs1  2009.Jul.24.18:34:04  offline  yes        Yes        125M
sc1_24_Jul_2009_17_34_02_IST           2009.Jul.24.17:34:04  fs1  2009.Jul.24.17:34:04  offline  yes        No         0K
Snapshot
Displays the name of the created snapshots.
FS
Displays the file system that corresponds to each created snapshot.
Status
Displays whether or not the snapshot is mounted (that is, online or offline).
ctime
Displays the time the snapshot was created.
mtime
Displays the time the snapshot was modified.
Removable
Determines if the snapshot should be automatically removed in case the underlying file system runs
out of space. You entered either yes or no in the snapshot create snapshot_name fs_name
[removable] command.
Preserved
Determines if the snapshot is preserved when all of the automated snapshots are destroyed.
Size
Displays the size of the snapshot.
Managing disk space used by snapshots
To manage the disk space used by snapshots, you can set a snapshot quota or
capacity limit for the file system. When all of the snapshots for the file system exceed
the capacity limit, snapshot creation is disabled for the file system.
You can also remove unnecessary snapshots to conserve disk space.
To enable snapshot quotas
1
To display snapshot quotas, enter the following:
Storage> snapshot quota list
FS   Quota  Capacity Limit
==   =====  ==============
fs1  on     1G
fs2  off    0
fs3  off    0
2
To enable a snapshot quota, enter the following:
Storage> snapshot quota on fs_name [capacity_limit]
fs_name
Specifies the name of the file system.
capacity_limit
Specifies the number of blocks used by all the snapshots for
the file system. Enter a number followed by K, M, G, or T (for
kilo, mega, giga, or terabyte). The default value is 0.
For example, to enable the snapshot quota, enter the following:
Storage> snapshot quota on fs1 1024K
Storage> snapshot quota list
FS   Quota  Capacity Limit
==   =====  ==============
fs1  ON     1024K
3
If necessary, you can disable snapshot quotas. You can retain the value of the
capacity limit. To disable a snapshot quota, enter the following:
Storage> snapshot quota off [fs_name] [remove_limit]
fs_name
Specifies the name of the file system.
remove_limit
Specifies whether to remove the capacity limit when you
disable the quota. The default value is true, which means
that the quota capacity limit is removed. The value of false
indicates that the quota is disabled but the value of the
capacity limit remains unchanged for the file system.
For example, to disable the snapshot quota, enter the following:
Storage> snapshot quota off fs1
To destroy a snapshot
◆
To destroy a snapshot, enter the following:
Storage> snapshot destroy snapshot_name fs_name
snapshot_name
Specifies the name of the snapshot to be destroyed.
fs_name
Specifies the name of the file system from which the snapshot
was taken. Snapshots with the same name could exist for more
than one file system. In this case, you must specify the file system
name.
For example:
Storage> snapshot destroy snapshot1 fs1
100% [#] Destroy snapshot
Bringing snapshots online or taking snapshots
offline
If you want to mount a snapshot through NFS or export a CIFS snapshot, you must
bring the snapshot online. You can then create a CIFS or an NFS share using the
snapshot name as the path. For example: /vx/fs1/:snap1. The snapshot can only
be mounted through NFS or exported through CIFS if it is online.
To bring a snapshot online
◆
To bring a snapshot online:
Storage> snapshot online snapshot_name fs_name
snapshot_name
Specifies the name of the snapshot.
fs_name
Specifies the name of the file system from which the snapshot
was taken. Snapshots with the same name could exist for
more than one file system. In this case, you must specify the
file system name.
For example, to bring a snapshot online, enter the following:
Storage> snapshot online snapshot1 fs1
100% [#] Online snapshot
To take a snapshot offline
◆
To take a snapshot offline:
Storage> snapshot offline snapshot_name fs_name
snapshot_name
Specifies the name of the snapshot.
fs_name
Specifies the name of the file system from which the snapshot
was taken. Snapshots with the same name could exist for
more than one file system. In this case, you must specify the
file system name.
For example, to take a snapshot offline, enter the following:
Storage> snapshot offline snapshot1 fs1
100% [#] Offline snapshot
Restoring a snapshot
This operation restores the file system to the state that is stored in the specified
snapshot. When you restore the file system to a particular snapshot, snapshots
taken after that point in time are no longer relevant. The restore operation also
deletes these snapshots.
The restore snapshot operation prompts you for confirmation. Be sure that you want
to restore the snapshot before responding yes.
To restore a snapshot
◆
To restore a snapshot, enter the following:
Storage> snapshot restore snapshot_name fs_name
snapshot_name
Specifies the name of the snapshot to be restored.
fs_name
Specifies the name of the file system to be restored.
For example:
Storage> snapshot restore snapshot0 fs0
ACCESS snapshot WARNING V-288-0 Snapshot created after snapshot0
will be deleted
ACCESS snapshot WARNING V-288-0 Are you sure you want to restore
file system fs0 with snapshot snapshot0? (yes/no)
yes
ACCESS snapshot SUCCESS V-288-0 File System fs0 restored
successfully by snapshot snapshot0.
About snapshot schedules
The Storage> snapshot schedule commands let you automatically create or
remove snapshots for a file system at a specified time. The schedule indicates the
time for the snapshot operation as values for minutes, hour, day-of-the-month,
month, and day-of-the-week. The schedule stores these values in the crontab along
with the name of the file system.
For example, snapshot schedule create schedule1 fs1 30 2 * * *
automatically creates a snapshot every day at 2:30 AM, and does not create
snapshots every two and a half hours. If you wanted to create a snapshot every
two and a half hours with at most 50 snapshots per schedule name, then run
snapshot schedule create schedule1 fs1 50 */30 */2 * * *, where the
value */2 implies that the schedule runs every two hours. You can also specify a
step value for the other parameters, such as day-of-the-month, month, and day-of-the-week,
and you can use a range along with a step value. Specifying a step value in
addition to a range or numeric value indicates how many values the crontab skips for a
given parameter.
Automated snapshots are named with the schedule name and a time stamp
corresponding to their time of creation. For example, if a snapshot is created using
the name schedule1 on February 27, 2016 at 11:00 AM, the name is:
schedule1_Feb_27_2016_11_00_01_IST.
Note: If the master node is being rebooted, any snapshot schedules that are due to
run during the reboot are missed.
Configuring snapshot schedules
You can use snapshot schedules to automate creation of snapshots at regular
intervals. The snapshot limit defines how many snapshots to keep for each schedule.
In some instances, snapshots may skip scheduled runs.
This may happen because of the following:
■ When a scheduled snapshot is set to trigger, the snapshot needs to gain a lock
  to begin the operation. If any command is issued from the CLI or is running
  through schedules, and if the command holds a lock, the triggered snapshot
  schedule is not able to obtain the lock, and the scheduled snapshot fails.
■ When a scheduled snapshot is set to trigger, the snapshot checks if there is
  any instance of a snapshot creation process running. If there is a snapshot
  creation process running, the scheduled snapshot aborts, and a snapshot is not
  created.
To create a snapshot schedule
◆
To create a snapshot schedule, enter the following:
Storage> snapshot schedule create schedule_name fs_name max_snapshot_limit
minute [hour] [day_of_the_month] [month] [day_of_the_week]
For example, to create a schedule for an automated snapshot creation of a
given file system at 3:00 am every day, enter the following:
Storage> snapshot schedule create schedule1 fs1 100 0 3 * * *
When an automated snapshot is created, the entire date value is appended,
including the time zone.
schedule_name
Specifies the name of the schedule corresponding to the automatically
created snapshot.
The schedule_name cannot contain an underscore ('_') as part of its value.
For example, sch_1 is not allowed.
fs_name
Specifies the name of the file system. The file system name should be a
string.
max_snapshot_limit
Specifies the number of snapshots that can be created for a given file system
and schedule name. The value is a numeric value between 1-366.
When the number of snapshots reaches the limit, then the oldest snapshot
is destroyed. If you decrease the limit for an existing schedule, then multiple
snapshots may be destroyed (oldest first) until the number of snapshots is
less than the maximum snapshot limit value.
Note: If you need to save daily snapshots for up to one year, the
max_snapshot_limit is 366.
minute
This parameter may contain either a step value such as '*/15', which implies every
15 minutes, or a numeric value between 0-59.
Note: If you are using the '*/xx' format, the smallest value for 'xx' is 15.
You can enter */(15-59) or a range such as 23-43. A plain asterisk (*) is not
allowed.
hour
This parameter may contain either an asterisk, (*), which implies "run every
hour," or a number value between 0-23.
You can enter */(0-23), a range such as 12-21, or just the *.
day_of_the_month
This parameter may contain either an asterisk, (*), which implies "run every
day of the month," or a number value between 1-31.
You can enter */(1-31), a range such as 3-22, or just the *.
month
This parameter may contain either an asterisk, (*), which implies "run every
month," or a number value between 1-12.
You can enter */(1-12), a range such as 1-5, or just the *. You can also enter
the first three letters of any month (must use lowercase letters).
day_of_the_week
This parameter may contain either an asterisk (*), which implies "run every
day of the week," or a numeric value between 0-6. Crontab interprets 0 as
Sunday. You can also enter the first three letters of the day of the week (you must use
lowercase letters).
For example, the following command creates a schedule schedule1 for automated
snapshot creation of the fs1 file system every 3 hours each day, and maintains
only 30 snapshots:
Storage> snapshot schedule create schedule1 fs1 30 0 */3 * * *
To modify a snapshot schedule
◆
To modify a snapshot schedule, enter the following:
Storage> snapshot schedule modify schedule_name fs_name max_snapshot_limit
minute [hour] [day_of_the_month] [month] [day_of_the_week]
For example, to modify the existing schedule so that a snapshot is created at
2:00 am on the first day of the week, enter the following:
Storage> snapshot schedule modify schedule1 fs1 * 2 * * 1
To display a snapshot schedule
◆
To display all of the schedules for automated snapshots, enter the following:
Storage> snapshot schedule show [fs_name] [schedule_name]
fs_name
Displays all of the schedules of the specified file system. If no file
system is specified, schedules of all of the file systems are
displayed.
schedule_name
Displays the schedule name. If no schedule name is specified,
then all of the schedules created under fs_name are displayed.
For example, to display all of the schedules for creating or removing snapshots
to an existing file system, enter the following:
Storage> snapshot schedule show fs3
FS   Schedule Name  Max Snapshot  Minute  Hour  Day  Month  WeekDay
===  =============  ============  ======  ====  ===  =====  =======
fs3  sched1         30            */20    *     *    *      *
fs3  sched2         20            */45    *     *    *      *
For example, to list the automated snapshot schedules for all file systems,
enter the following:
Storage> snapshot schedule show
FS   Schedule Name  Max Snapshot  Minute  Hour  Day  Month  WeekDay
===  =============  ============  ======  ====  ===  =====  =======
fs6  sc1            10            */50    *     *    *      *
fs1  sc1            10            */25    *     *    *      *
Managing automated snapshots
You can remove all of the automated snapshots created by a schedule, specify that
certain snapshots be preserved, or delete a schedule for a file system.
To remove all snapshots
◆
To automatically remove all of the snapshots created under a given schedule
and file system name (excluding the preserved and online snapshots), enter
the following:
Storage> snapshot schedule destroyall schedule_name fs_name
The destroyall command only destroys snapshots that are offline. If some
of the snapshots in the schedule are online, the command exits at the first
online snapshot.
Note: The Storage> snapshot schedule destroyall command may take
a long time to complete depending on how many snapshots are present that
were created using schedules.
Preserved snapshots are never destroyed automatically or as part of the
destroyall command.
Example 1: If you try to destroy all automated snapshots when two of the
automated snapshots are still mounted, Veritas Access returns an error. No
snapshots under the given schedule and file system are destroyed.
Storage> snapshot schedule destroyall schedule1 fs1
ACCESS snapshot ERROR V-288-1074 Cannot destroy snapshot(s)
schedule1_7_Dec_2009_17_58_02_UTC schedule1_7_Dec_2009_16_58_02_UTC
in online state.
Example 2: If you try to destroy all automated snapshots (which are in an offline
state), the operation completes successfully.
Storage> snapshot schedule destroyall schedule2 fs1
100% [#] Destroy automated snapshots
To preserve snapshots
◆
To preserve the specified snapshots corresponding to an existing schedule
and specific file system name, enter the following:
Storage> snapshot schedule preserve schedule_name fs_name snapshot_name
snapshot_name is a comma-separated list of snapshots.
For example, to preserve a snapshot created according to schedule1 for the
file system fs1, enter the following:
Storage> snapshot schedule preserve schedule1 fs1
schedule1_Feb_27_16_42_IST
To delete a snapshot schedule
◆
To delete a snapshot schedule, enter the following:
Storage> snapshot schedule delete fs_name [schedule_name]
For example:
Storage> snapshot schedule delete fs1
Chapter 27
Using instant rollbacks
This chapter includes the following topics:
■ About instant rollbacks
About instant rollbacks
Instant rollbacks are volume-level snapshots. All rollback commands take a file
system name as an argument and perform operations on the underlying volume of
that file system.
Note: If you plan to add a tier to the file system, add the tier first and then create
the rollback. If you add the tier after a rollback exists, the rollback hierarchy would
have inconsistencies because the rollback is not aware of the tier.
Both space-optimized and full-sized rollbacks are supported by Veritas Access.
Space-optimized rollbacks use a storage cache, and do not need a complete copy
of the original volume's storage space. However, space-optimized rollbacks are not
suitable for write-intensive volumes, because the copy-on-write mechanism may
degrade the performance of the volume. Full-sized rollbacks use more storage, but
that has little impact on write performance after synchronization is completed.
Both space-optimized rollbacks and full-sized rollbacks can be used instantly after
operations such as create, restore, or refresh.
Note: When instant rollbacks exist for a volume, you cannot disable the FastResync
option for a file system.
When creating instant rollbacks for volumes bigger than 1T, there may be error
messages such as the following:
ACCESS instant_snapshot ERROR V-288-1487 Volume prepare for full-fs1-1
failed.
An error message may occur because the default amount of memory allocated for
a Data Change Object (DCO) may not be large enough for such big volumes. You
can use the vxtune command to change the value. The default value is 6M, which
is the memory required for a 1T volume.
To change it to 15M, use the following command:
vxtune volpagemod_max_memsz `expr 15 \* 1024 \* 1024`
Chapter 28
Configuring Veritas
Access with the
NetBackup client
This chapter includes the following topics:
■ About Veritas Access as a NetBackup client
■ Prerequisites for configuring the NetBackup client
■ About the NetBackup Snapshot Client
■ About NetBackup snapshot methods
■ About NetBackup instant recovery
■ Enabling or disabling the NetBackup SAN client
■ Workflow for configuring Veritas Access for NetBackup
■ Registering a NetBackup master server, an EMM server, or adding an optional
  media server
■ Displaying the excluded files from backup
■ Displaying the included and excluded files for backups
■ Adding or deleting patterns to the list of files in backups
■ Configuring or resetting the virtual IP address used by NetBackup
■ Configuring the virtual name of NetBackup
■ Displaying the status of NetBackup services
■ Configuring backup operations using NetBackup or other third-party backup
  applications
■ Performing a backup or restore of a Veritas Access file system over a NetBackup
  SAN client
■ Performing a backup or restore of a snapshot
■ Installing or uninstalling the NetBackup client
■ Configuring Veritas Access for NetBackup cloud storage
About Veritas Access as a NetBackup client
Veritas Access is integrated with Veritas NetBackup so that a NetBackup
administrator can back up your Veritas Access file systems to NetBackup master
or media servers and retain the data as per your company policy. Once data is
backed up, a storage administrator can delete unwanted data from Veritas Access.
The NetBackup master and media servers that run on separate computers from
Veritas Access are licensed separately from Veritas Access.
You configure NetBackup domain information using any one of the following Veritas
Access interfaces:
■ CLISH
  The Veritas Access CLISH has a dedicated Backup> menu. From the Backup>
  menu, register the NetBackup client with the NetBackup domain. Information is
  saved in the bp.conf file on Veritas Access.
■ GUI
  Settings > NetBackup Configuration
  See the online Help for how to configure NetBackup using the GUI.
■ RESTful APIs
  See the Veritas Access RESTful API Guide.
Consolidating storage reduces the administrative overhead of backing up and
restoring many separate file systems. Critical file data can be backed up and restored
through the NetBackup client on Veritas Access.
Figure 28-1    Configuration of Veritas Access with NetBackup
Prerequisites for configuring the NetBackup client
Before configuring the NetBackup client for Veritas Access, you must have
completed the following:
■ You must have a NetBackup master server external to your Veritas Access
  cluster. The NetBackup administrator configures the NetBackup master server.
  See the NetBackup product documentation for more information.
■ Add the valid licenses on the NetBackup master server.
■ Make sure that the Veritas Access server and the NetBackup server can resolve
  the host name.
About the NetBackup Snapshot Client
A snapshot is a point-in-time, read-only, disk-based copy of a client volume. After
the snapshot is created, NetBackup backs up data from the snapshot, not directly
from the client’s primary or original volume. Users and client operations can access
the primary data without interruption while data on the snapshot volume is backed
up. The contents of the snapshot volume are cataloged as if the backup was
produced directly from the primary volume. After the backup is complete, the
snapshot-based backup image on storage media is indistinguishable from a
traditional, non-snapshot backup.
About NetBackup snapshot methods
NetBackup can create different types of snapshots. Each snapshot type that you
configure in NetBackup is called a snapshot method. Snapshot methods enable
NetBackup to create snapshots within the storage stack (such as the file system,
volume manager, or disk array) where the data resides. If the data resides in a
logical volume, NetBackup can use a volume snapshot method to create the
snapshot. If the data resides in a file system, NetBackup can use a file system
method, depending on the client operating system and the file system type.
You select the snapshot method in the backup policy as explained in the Veritas
NetBackup Snapshot Client Administrator's Guide.
Note: When using Veritas Access with NetBackup, select the VxFS_Checkpoint
snapshot method.
About NetBackup instant recovery
This feature makes backups available for quick recovery from disk. Instant recovery
combines snapshot technology—the image is created with minimal interruption of
user access to data—with the ability to do rapid snapshot-based restores. The
snapshot is retained on disk as a full backup image.
Enabling or disabling the NetBackup SAN client
You can enable or disable the NetBackup SAN client on Veritas Access. The
NetBackup SAN client should only be enabled on Veritas Access if the required
licenses are installed on the NetBackup Master Server. If you do not have the
required license for the NetBackup SAN client, then you must disable the SAN client
on Veritas Access. Otherwise, the Veritas Access backup service fails to start.
To enable or disable the NetBackup SAN client
◆
To enable or disable the NetBackup SAN client, enter the following:
Backup> netbackup sanclient enable | disable
enable
Enables the NetBackup SAN client.
disable
Disables the NetBackup SAN client.
Backup> netbackup sanclient enable
Success.
Workflow for configuring Veritas Access for
NetBackup
To back up your data with NetBackup, you must register the installed and configured
NetBackup master server with Veritas Access.
To configure NetBackup for Veritas Access, perform the following tasks in the order
shown:
Make sure that the prerequisites are met.
See “Prerequisites for configuring the NetBackup client” on page 393.

Register the NetBackup master server.
See “Registering a NetBackup master server, an EMM server, or adding an optional media server” on page 396.

Display the current status of the NetBackup client.
See “Displaying the status of NetBackup services” on page 401.

Reset the values for the NetBackup master server or the NetBackup EMM server.
See “Registering a NetBackup master server, an EMM server, or adding an optional media server” on page 396.

Display the current status of the NetBackup client.
See “Displaying the status of NetBackup services” on page 401.

Reset the NetBackup virtual name.
See “Configuring the virtual name of NetBackup” on page 400.

Register the NetBackup master server with the NetBackup client.
See “Registering a NetBackup master server, an EMM server, or adding an optional media server” on page 396.

Configure the virtual name that the NetBackup master server uses for the NetBackup client.
See “Configuring the virtual name of NetBackup” on page 400.

Display the current status of the NetBackup client.
See “Displaying the status of NetBackup services” on page 401.

Verify that Veritas Access is configured with the NetBackup client.
See “Displaying the status of NetBackup services” on page 401.

Configure the /etc/hosts file to ping the NetBackup master or media server.
Specify the files to back up or restore.
See “Performing a backup or restore of a Veritas Access file system over a NetBackup SAN client” on page 404.

Specify the snapshot to back up or restore.
See “Performing a backup or restore of a snapshot” on page 405.

Uninstall or install the NetBackup client.
See “Installing or uninstalling the NetBackup client” on page 405.

Configure Veritas Access for NetBackup cloud storage.
See “Configuring Veritas Access for NetBackup cloud storage” on page 409.
Registering a NetBackup master server, an EMM
server, or adding an optional media server
You register the NetBackup master server or the EMM server so that it can
communicate with Veritas Access. If necessary, you can reset the values of the
NetBackup master server or the EMM server to their default configurations. You
can optionally add a media server.
The NetBackup EMM server is generally the master server. The NetBackup master
server can be the NetBackup media server, but it is not mandatory that the
NetBackup master server be the NetBackup media server. In production
environments, the NetBackup media server is separate from the NetBackup master
server.
See the backup_netbackup(1) man page for detailed examples.
To register the NetBackup master server or the NetBackup EMM server with
Veritas Access
1
Register the NetBackup master server with Veritas Access.
For a NetBackup master server:
Backup> netbackup master-server set server
2
Register the NetBackup EMM server with Veritas Access.
For a NetBackup EMM server:
Backup> netbackup emm-server set server
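For example, assuming a NetBackup master server that also acts as the EMM server and is reachable as nbumaster.example.com (an illustrative host name), you might enter:
Backup> netbackup master-server set nbumaster.example.com
Backup> netbackup emm-server set nbumaster.example.com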
To reset the value for the NetBackup master server or the NetBackup EMM
server
1
Reset the value for the NetBackup master server.
For a NetBackup master server:
Backup> netbackup master-server reset
2
Reset the value for the NetBackup EMM server.
For a NetBackup EMM server:
Backup> netbackup emm-server reset
To add an optional NetBackup media server
◆
Add an optional NetBackup media server.
If the NetBackup master server is also acting as a NetBackup media server,
then add the NetBackup media server using the NetBackup master server
hostname.
For example:
Backup> netbackup media-server add FQDN of master server
To delete an already configured NetBackup media server
◆
Delete an already configured NetBackup media server.
Backup> netbackup media-server delete server
Displaying the excluded files from backup
To display the entries in the excluded list from backup
◆
Display the entries in the excluded list from backup.
Backup> netbackup exclude_list show [policy] [schedule]
policy
Lists the excluded entries by specifying a NetBackup policy.
schedule
If a NetBackup policy schedule is specified, then the excluded list
entries for the specified NetBackup policy and NetBackup policy
schedule are displayed.
Backup> netbackup exclude_list show
Pattern         Policy        Schedule
-------         ------        --------
hosts           NBU_access12  sched
iscsid.conf     policy        -
iscsid.conf     policy        -
/vx/fs100/as*   policy2       -
/vx/fs100/*mp3  -             -
/vx/fs200/bs*   -             -
The hyphens in the command output indicate that no values have been entered.
Displaying the included and excluded files for
backups
You can specify a policy pattern that lets you specify which files to include or exclude
from NetBackup backups. For example, you can specify that only .gif files are
backed up, and .iso files are excluded. You can then display those files.
See the backup_netbackup(1) man page for detailed examples.
To display files included or excluded for backups
◆ Display the files that are included or excluded for backups.
For included files:
Backup> netbackup include_list show [policy] [schedule]
For excluded files:
Backup> netbackup exclude_list show [policy] [schedule]
Adding or deleting patterns to the list of files in
backups
You can add or delete specified patterns to or from the files that you want to include
or exclude from NetBackup backups. For example, you can create a backup policy
with different patterns such that only .gif files are backed up and .iso files are
excluded.
See the backup_netbackup(1) man page for detailed examples.
To add or delete the given pattern to the list of files included for backup
◆ Add or delete the specified pattern to or from the files that are included for backup.
For adding specified patterns to included files:
Backup> netbackup include_list add pattern [policy] [schedule]
For deleting specified patterns from included files:
Backup> netbackup include_list delete pattern [policy] [schedule]
To add or delete a given pattern to the list of files excluded from backup
◆ Add or delete a given pattern to or from the list of files that are excluded from backup.
For adding a given pattern to excluded files:
Backup> netbackup exclude_list add pattern [policy] [schedule]
For deleting the given pattern from excluded files:
Backup> netbackup exclude_list delete pattern [policy] [schedule]
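For example, to set up a hypothetical policy named policy1 so that .gif files are included and .iso files are excluded (the policy name and the patterns shown here are illustrative only):

Backup> netbackup include_list add *.gif policy1
Backup> netbackup exclude_list add *.iso policy1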
Configuring or resetting the virtual IP address
used by NetBackup
You can configure or reset the virtual IP address of NetBackup. This address is a
highly-available virtual IP address in the cluster.
Note: Configure the virtual IP address using the Backup> virtual-ip set
command so that it is different from all of the virtual IP addresses, including the
console server IP address and the physical IP addresses that are used to install
Veritas Access. Use the Network> ip addr show command to display the currently
assigned virtual IP addresses on Veritas Access.
See the backup_virtual-ip(1) man page for detailed examples.
To configure or reset the virtual IP address used by NetBackup
1. Configure the virtual IP address of NetBackup on Veritas Access.
   Backup> virtual-ip set ipaddr [device]
2. Reset the virtual IP address of NetBackup on Veritas Access.
   Backup> virtual-ip reset
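For example, using the illustrative virtual IP address and network device that appear in the Backup> show output later in this chapter (substitute an unused public IP address and device from your own cluster):

Backup> virtual-ip set 10.10.10.10/24 pubeth1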
See “Configuring the virtual name of NetBackup” on page 400.
Configuring the virtual name of NetBackup
You can either configure the virtual name for the NetBackup master server, or you
can reset the value to its default or unconfigured state.
See the backup_virtual-name(1) man page for detailed examples.
To set or reset the NetBackup virtual name
◆ Set or reset the NetBackup virtual name.
For setting the virtual name:
Backup> virtual-name set name
For resetting the virtual name:
Backup> virtual-name reset
Make sure that name can be resolved through DNS, and its IP address can
be resolved back to name through the DNS reverse lookup. Also, make sure
that name resolves to an IP address that is configured by using the Backup>
virtual-ip command.
See “Configuring or resetting the virtual IP address used by NetBackup”
on page 400.
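For example, using the illustrative virtual name that appears in the Backup> show output later in this chapter (substitute a name that meets the DNS requirements described above):

Backup> virtual-name set nbuclient.veritas.com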
Displaying the status of NetBackup services
Use the backup commands to display the status of the NetBackup services.
See the following man pages for detailed examples:
■ backup_show(1)
■ backup_status(1)
■ backup_start(1)
■ backup_stop(1)
To display NetBackup services
◆ Display the current NetBackup services.
Backup> show
Example:
Backup> show
Virtual Name: nbuclient.veritas.com
Virtual IP: 10.10.10.10/24
NetBackup Master Server: nbumaster.veritas.com
NetBackup EMM Server: nbumaster.veritas.com
NetBackup Media Server(s): nbumaster.veritas.com
Backup Device: pubeth1
NetBackup Client Version: 8.0
NetBackup global log level: not configured
NetBackup database log level: not configured
Enable robust logging: not configured
Enable critical process logging: not configured
If the settings were configured while the backup and the restore services were
online, they may not be in use by Veritas Access. To display all of the configured
settings, first run the Backup> stop command, then run the Backup> start
command.
To display the status of backup services
◆ Display the status of backup services.
Backup> status
Example:
Backup> status
Virtual IP state:            up
Backup service online node:  node_01
NetBackup client state:      online
NetBackup SAN client state: online
No backup or restore jobs running.
If the NetBackup server is started and online, then Backup> status displays
any on-going backup or restore jobs.
To start or stop backup services
1. Start the backup services.
   Backup> start [nodename]
You can also change the status of a virtual IP address to online after it has
been configured using the Backup> virtual-ip command. This command
applies to any currently active node in the cluster that handles backup and
restore jobs.
See “Configuring or resetting the virtual IP address used by NetBackup”
on page 400.
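For example, to bring the backup services online on a specific node, using the illustrative node name shown in the Backup> status output earlier in this chapter:

Backup> start node_01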
2. Stop the backup services.
   Backup> stop
You can also change the status of a virtual IP address to offline after it has
been configured using the Backup> virtual-ip command.
See “Configuring or resetting the virtual IP address used by NetBackup”
on page 400.
The Backup> stop command has no effect if any backup jobs that involve Veritas Access file systems are running.
Configuring backup operations using NetBackup
or other third-party backup applications
You can back up Veritas Access using the NetBackup client capability, or using backup applications from other third-party companies that use the standard NFS mount to back up over the network.
For information on NetBackup, refer to the NetBackup product documentation set.
The Backup commands configure the local NetBackup installation of Veritas Access
to use an external NetBackup master server, Enterprise Media Manager (EMM)
server, or media server. When NetBackup is installed on Veritas Access, it acts as
a NetBackup client to perform IP-based backups of Veritas Access file systems.
Note: A new public IP address, not an IP address that is currently used, is required
for configuring the NetBackup client. Use the Backup> virtual-ip and Backup>
virtual-name commands to configure the NetBackup client.
Performing a backup or restore of a Veritas
Access file system over a NetBackup SAN client
You can perform a backup or restore of a Veritas Access file system over a
NetBackup SAN client. A NetBackup SAN client is a NetBackup client for which
Fibre Transport services are activated.
Backup and restore operations are done on the NetBackup master server by a
NetBackup administrator using the NetBackup Administration Console. If the
NetBackup master server can connect to the NetBackup client on Veritas Access,
the NetBackup master server starts the backup or restore operations.
Before performing a backup or restoration of a Veritas Access file system over a
NetBackup SAN client, do the following:
■ Verify that the virtual IP address is online.
■ Verify that the NetBackup client state is online.
■ Configure the Fibre Transport media server.
See the Veritas NetBackup SAN client and Fibre Transport Guide for more
information on configuring the NetBackup Fibre Transport media server.
See the backup(1) man page for detailed examples.
To perform a backup of a file system over a NetBackup SAN client
1. Check the status of the NetBackup client.
   Backup> status
2. Enable the SAN client from the CLI.
   Backup> netbackup sanclient enable
3. Verify that the SAN client has been enabled from the CLI.
   Backup> status
4. Using the NetBackup Administration Console, start a backup operation.
To perform a restore of a file system over a NetBackup SAN client
1. Check the status of the NetBackup client.
   Backup> status
2. Using the NetBackup Administration Console, start a restore operation.
3. Check the status of the NetBackup client.
   Backup> status
Performing a backup or restore of a snapshot
Using the NetBackup Administration Console, a NetBackup administrator can
perform a backup or restore of a snapshot.
Veritas Access as a NetBackup client supports the VxFS_Checkpoint snapshot
method. See the Veritas NetBackup Snapshot Client Administrator's Guide for more
information on configuring snapshot policies.
To perform a backup of a snapshot
◆ Using the NetBackup Administration Console, start a snapshot backup.
The snapshot triggered by the NetBackup job can be seen from the CLI.
Storage> snapshot list
To perform a restore of a snapshot
1. Using the NetBackup Administration Console, navigate to Backup, Archive, and Restore.
2. Click the Restore Files tab.
3. Click the Restore option.
4. Specify the directory location for performing the restore operation.
   /vx/name_of_file_system/Restores
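For example, for a hypothetical file system named fs1, the restore location would be:

/vx/fs1/Restores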
Installing or uninstalling the NetBackup client
The NetBackup master server version must be the same as or later than the NetBackup client version. To upgrade the NetBackup client, uninstall the currently installed version of the NetBackup client and then install the specified version of the NetBackup client. The uninstall command runs on all the nodes of the Veritas Access cluster.
Veritas Access supports two major versions of the NetBackup client, version 7.7
and 8.0. By default, Veritas Access comes with the 8.0 version of the NetBackup
client.
See the backup(1) man page for detailed examples.
To install the NetBackup client
1. Display the currently installed version of the NetBackup client.
   Backup> show
2. Install the specified NetBackup client.
   Backup> install version [URL]
   You must specify the version of the NetBackup client that you want to install. If you do not specify a URL, the Backup> install command already has the location information it needs on the file system. Specify the major release version (8.0 or 7.7) as the second parameter. You can specify a NetBackup minor release or patch (for example, 7.7.1 for the 7.7 major release) as the third parameter.
   If the base NetBackup client version is 7.7:
   Backup> install 7.7 scp://[email protected]:/home/NetBackup_7.7_CLIENTS2.tar.gz
   If the base NetBackup client version is 8.0:
   Backup> install 8.0 scp://[email protected]:/home/NetBackup_8.0_CLIENTS2.tar.gz
   If there are minor releases or patches of the NetBackup client:
   Backup> install 7.7 scp://[email protected]:/home/NetBackup_7.7.1_CLIENTS2.tar.gz
   Backup> install 7.7 scp://[email protected]:/home/NetBackup_7.7.2_CLIENTS2.tar.gz
   For example, consider that the NetBackup client binaries are placed on the host 192.168.2.10:
   Backup> install 7.7 scp://[email protected]:/home/NetBackup_7.7.3_CLIENTS2.tar.gz
   Where 192.168.2.10 is the host IP address on which the NetBackup client packages are placed, and NetBackup_7.7.3_CLIENTS2.tar.gz is the NetBackup client package.
3. Double check that the Red Hat compatible NetBackup client is available in this package. Log on as the support system user and specify the password when prompted.
4. Verify that the specified NetBackup client is installed.
   Backup> show
To uninstall the NetBackup client
1. Display the currently installed version of the NetBackup client.
   Backup> show
2. Uninstall the existing version of the NetBackup client.
   Backup> uninstall
3. Display the current running version of the NetBackup client.
   Backup> show
4. Verify that the NetBackup client is not installed.
   Backup> show
Configuring Veritas Access for NetBackup cloud
storage
To configure Veritas Access for NetBackup cloud storage
1. Log on to the NetBackup Console and select Configure Cloud Storage Server.
2. From the Cloud Storage Server Configuration wizard, select Veritas Access (S3).
3. Add the required information such as Service host, Service endpoint, HTTP/HTTPS port, Access key ID, and Secret access key, and follow the rest of the wizard prompts.
Section 11
Reference
■ Appendix A. Veritas Access documentation
■ Appendix B. Veritas Access tuning
Appendix A
Veritas Access documentation
This appendix includes the following topics:
■ Using the Veritas Access product documentation
■ About accessing the online man pages
Using the Veritas Access product documentation
The latest version of the Veritas Access product documentation is available on the
Veritas Services and Operations Readiness Tools (SORT) website.
https://sort.veritas.com/documents
You need to specify the product and the platform and apply other filters for finding
the appropriate document.
Make sure that you are using the current version of documentation. The document
version appears on page 2 of each guide. The publication date appears on the title
page of each document. The documents are updated periodically for errors or
corrections.
The following documents are available on the SORT site:
■ Veritas Access Administrator's Guide
■ Veritas Access Cloud Storage Tiering Solutions Guide
■ Veritas Access Command Reference Guide
■ Veritas Access Getting Started Guide
■ Veritas Access Installation Guide
■ Veritas Access NetBackup Solutions Guide
■ Veritas Access Quick Start Guide
■ Veritas Access Release Notes
■ Veritas Access RESTful API Guide
■ Veritas Access Third-Party License Agreements
■ Veritas Access Troubleshooting Guide
■ Veritas Access Enterprise Vault Solutions Guide
About accessing the online man pages
You access the online man pages by typing man name_of_command at the command
line.
The example shows the result of entering the Network> man ldap command.
Network> man ldap
NAME
    ldap - configure LDAP client for authentication
SYNOPSIS
    ldap enable
    ldap disable
    ldap show [users|groups|netgroups]
    ldap set {server|port|basedn|binddn|ssl|rootbinddn|users-basedn|
        groups-basedn|netgroups-basedn|password-hash} value
    ldap get {server|port|basedn|binddn|ssl|rootbinddn|
        users-basedn|groups-basedn|netgroups-basedn|password-hash}
You can also type a question mark (?) at the prompt for a list of all the commands that are available for the command mode that you are in. For example, if you are in the admin mode and type a question mark (?), you see a list of the available commands for the admin mode.
ACCESS> admin ?
Entering admin mode...
ACCESS.Admin>
exit         --return to the previous menus
logout       --logout of the current CLI session
man          --display on-line reference manuals
passwd       --change the administrator password
show         --show the administrator details
supportuser  --enable or disable the support user
user         --add or delete an administrator
To exit the command mode, enter the following: exit.
For example:
ACCESS.Admin> exit
ACCESS>
To exit the system console, enter the following: logout.
For example:
ACCESS> logout
Appendix B
Veritas Access tuning
This appendix includes the following topics:
■ File system mount-time memory usage
File system mount-time memory usage
Mounting a file system on a computer system allocates system memory that is not
freed until the file system is unmounted. The amount of memory allocated at mount
time is directly proportional to the size of the file system being mounted. The amount
of memory that is allocated at mount-time is therefore important information to help
determine the system memory requirements for a Veritas Access environment. The
mount-time memory requirement is different if you expect to mount a total of 1 PB
of storage or 2 PBs of storage. The number of files currently in the file system does
not affect the amount of memory allocated at mount-time. The amount of memory
allocated at mount-time is also inversely proportional to the file system block size.
The information required to determine the amount of memory allocated at mount
time is the total size of all the file systems that are mounted on the same computer
system at the same time and the block size of each file system.
The amount of memory allocated at mount time can therefore be estimated by obtaining the total size of all the file systems that are mounted on a system, grouped by file system block size: four totals in all, one for each file system block size of 1 KB, 2 KB, 4 KB, and 8 KB.
Table B-1    File system mount-time memory usage

File system block size    Total size of mounted file systems    Memory allocation at mount time
1 KB                      ‘a’ TBs                               ‘w’ MBs allocated per TB
2 KB                      ‘b’ TBs                               ‘x’ MBs allocated per TB
4 KB                      ‘c’ TBs                               ‘y’ MBs allocated per TB
8 KB                      ‘d’ TBs                               ‘z’ MBs allocated per TB
The mount-time memory requirement is therefore:
((a*w) + (b*x) + (c*y) + (d*z))
A file system using a 1 KB block size (the smallest file system block size) allocates approximately eight times more memory at mount time than a file system of the same size using an 8 KB block size (the largest file system block size). For this reason, the Veritas Access file system defaults to a block size of 8 KB if a block size is not specified when creating a file system.
Some customers create small file systems using a 1 KB file system block size and subsequently grow the file system size significantly. Because the file system block size cannot be changed after the file system is created, this can result in very large file systems that use a 1 KB block size, and therefore in an unexpectedly large allocation of system memory at mount time.
A Clustered File System (CFS) primary mount requires slightly more memory
allocated at mount-time than a CFS secondary. The performance team recommends
that the memory utilization of a CFS primary be used as the guideline for calculating
the file system mount-time memory requirement.
Table B-2    Memory footprint of 16 file systems with 32 TB size each - CFS primary mount

CFS primary mount; 32 TB each file system; memory used (MB) by file system block size

File systems    1 KB    2 KB    4 KB    8 KB
1               329     164     82      41
2               659     328     165     82
3               988     491     248     125
4               1326    657     337     166
5               1649    821     414     210
6               1977    985     498     249
7               2306    1150    581     291
8               2635    1329    665     333
9               2964    1483    747     375
10              3293    1646    829     418
11              3624    1810    913     459
12              3953    1975    995     534
13              4281    2140    1077    546
14              4614    2307    1161    589
15              4942    2471    1243    629
16              5272    2636    1325    671
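As an illustration of how to use these figures, the single-file-system row of Table B-2 (one 32 TB file system consumes roughly 329 MB at a 1 KB block size and 41 MB at an 8 KB block size) works out to approximately 10.3 MB per TB and 1.3 MB per TB respectively. A hypothetical cluster node that mounts 100 TB of 1 KB block-size file systems and 400 TB of 8 KB block-size file systems could therefore be expected to allocate on the order of:

((100 * 10.3) + (400 * 1.3)) = approximately 1550 MB at mount time

The 100 TB and 400 TB totals are illustrative only; substitute your own per-block-size totals in the formula given earlier in this section.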
Table B-3    Memory footprint of 16 file systems with 32 TB size each - CFS secondary mount

CFS secondary mount; 32 TB each file system; memory used (MB) by file system block size

File systems    1 KB    2 KB    4 KB    8 KB
1               187     93      47      21
2               372     186     94      48
3               558     279     139     71
4               742     371     186     94
5               929     465     233     117
6               1113    557     280     140
7               1300    650     326     164
8               1485    743     373     187
9               1670    837     419     213
10              1854    928     465     237
11              2040    1020    512     259
12              2224    1114    558     286
13              2410    1208    606     306
14              2596    1301    652     330
15              2780    1393    701     353
16              2966    1485    747     376
Figure B-1 provides the guideline for the system memory utilization at mount time.
Figure B-1    Mount-time memory consumption for 32 TB file systems
Index
A
About
cloud gateway 222
compressed file format 313
compressing files 312
concurrent access 242
configuring SmartIO 342
FastResync 216
file compression attributes 313
file compression block size 313
integration with OpenStack Cinder 274
policies for scale-out file systems 229
SNMP notifications 201
about
Active Directory (AD) 118
bonding Ethernet interfaces 34
buckets and objects 187
changing share properties 266
configuring CIFS for AD domain mode 121
configuring disks 74
configuring routing tables 48
configuring storage pools 65
configuring the policy of each tiered file
system 330
configuring Veritas Access for CIFS 115
configuring Veritas Access to use jumbo
frames 42
creating and maintaining file systems 207
data deduplication 293
Ethernet interfaces 39
event notifications 193
FTP 170
FTP local user set 178
FTP set 172
I/O fencing 77
iSCSI 82
job failover and failback 370
leaving AD domain 124
managing CIFS shares 260
Multi-protocol support for NFS with S3 189
NFS file sharing 247
about (continued)
scale-out file systems 208
setting trusted domains 128
shares 241
snapshot schedules 383
snapshots 375
storage provisioning and management 62
storing account information 141
striping file systems 213
the IP addresses for the Ethernet interfaces 39
Veritas Access file-level replication 346
Veritas Access sync replication 348
About Configuring
network 34
about maximum IOPS 219
accessing
man pages 414
replication destinations 369
the Veritas Access CLI 22
Veritas Access product documentation 413
Active Directory
setting the trusted domains for 140
Active Directory (AD)
about 118
joining Veritas Access to 120
verifying Veritas Access has joined
successfully 121
AD domain mode
changing domain settings 125
configuring CIFS 121
security settings 125
setting domain 122
setting domain user 122
setting security 122
starting CIFS server 122
AD interface
using 126
AD trusted domains
disabling 140
add local user
FTP 177
adding
a column to a tiered file system 326
a severity level to an email group 195
a syslog server 198
an email address to a group 195
an email group 195
CIFS share 266
disks 74
external NetBackup master server to work with
Veritas Access 396
filter to a group 195
IP address to a cluster 40
Master, System Administrator, and Storage
Administrator users 29
mirror to a tier of a file system 327
mirrored tier to a file system 324
mirrored-striped tier to a file system 324
NetBackup Enterprise Media Manager (EMM)
server 396
NetBackup media server 396
second tier to a file system 324
SNMP management server 201
striped tier to a file system 324
striped-mirror tier to a file system 324
users 29
VLAN interfaces 43
adding a mapping
between CIFS and NFS users 149
adding and configuring
Veritas Access to the Kerberos realm 112
adding or deleting patterns
included in the list of files for backups 399
aio_fork option
setting 158
allowing
metadata information to be written on the
secondary tier 339
specified users and groups access to the CIFS
share 268
Authenticate
NFS clients 111
authenticating
NFS clients using Kerberos 111
authentication
configuring the LDAP client using the CLI 139
B
back up
Veritas Access file system over a SAN client 404
backup
displaying excluded files 398
backup or restore
NetBackup snapshot 405
backup services
displaying the status of 401
starting 401
stopping 401
Best practices
using compression 314
best practices
creating file systems 211
for using the Veritas Access deduplication
feature 296
setting relocation policies 334
bind distinguished name
setting for LDAP server 53
bonding
Ethernet interfaces 35
bonding Ethernet interfaces
about 34
buckets and objects
about 187
C
changing
an IP address to online
on any running node 40
domain settings for AD domain mode 125
local CIFS user password 159
security settings 118
security settings after CIFS server is stopped 118
share properties about 266
checking
I/O fencing status 78
on the status of the NFS server 108
CIFS
allowing specified users and groups access to
the CIFS share 268
configuring schema extensions 135
denying specified users and groups access to
the CIFS share 269
export options 262
mapuser commands 148
standalone mode 116
using multi-domain controller support 124
CIFS aio_fork option
setting 158
CIFS and NFS protocols
share directories 243
CIFS clustering modes
about 115
CIFS data migration
enabling 161
CIFS home directories
quotas 69
CIFS operating modes
about 115
CIFS server
changing security settings after stopped 118
configuring with the LDAP backend 139
starting 145
trusted domains that are allowed access 128
CIFS server status
standalone mode 117
CIFS service
standalone mode 117
CIFS share
adding 266
deleting 270
exporting as a directory 260
exporting the same file system/directory as a
different CIFS share 261
making shadow copy aware 272
modifying 271
CIFS share and home directory
migrating from ctdb to normal clustering
mode 157
CIFS shares and home directories
migrating from ctdb clustering modes 156
migrating from normal to ctdb clustering
mode 157
CIFS snapshot
exporting 270
CIFS/NFS sharing
mapping user names 147
clearing
DNS domain names 37
DNS name servers 37
LDAP configured settings 53
CLI
configure and manage storage 70
client configurations
displaying 56
LDAP server 56
Cloud gateway
about 222
cloud tier
obtaining data usage statistics 236
Cluster
Excluding PCI IDs 47
cluster
adding an IP address to 40
changing an IP address to online for any running
node 40
displaying all the IP addresses for 40
clustering modes
ctdb 155
clusters
FSS 62
columns
adding or removing from a tiered file system 326
command history
displaying 26
Command-Line Interface (CLI)
getting help on how to use 24
communicating
source and destination clusters 351
Compressed file format
about 313
Compressing files
about 312
use cases 314
Concurrent access
about 242
concurrent access
object access service 246
Configure and manage storage
CLI 70
Configuring
Object Store server 183
configuring
AD schema with CIFS-schema extensions 135
backups using NetBackup or other third-party
backup applications 403
CIFS for standalone mode 116
CIFS server with the LDAP backend 139
IP routing 48
iSCSI devices 84
iSCSI discovery 86
iSCSI initiator 83
iSCSI initiator name 83
iSCSI targets 92
job resynchronization 368
NetBackup virtual IP address 400
NetBackup virtual name 400
configuring (continued)
NFS client for ID mapping 111
NIC devices 44
NSS lookup order 58
the policy of each tiered file system 330
Veritas Access for CIFS 115
VLAN interfaces 43
Windows Active Directory as an IDMAP
backend 134
configuring CIFS share
secondary storage for an Enterprise Vault
store 261
configuring disks
about 74
configuring Ethernet interfaces
about 39
configuring NetBackup
prerequisites 393
configuring routing tables
about 48
Configuring SmartIO
about 342
configuring storage pools
about 65
configuring the cloud as a tier feature
scale-out file systems 224
Configuring the Object Store server
use case 183
Configuring Veritas Access
ID mapping for NFS version 4 110
Configuring Veritas Access for NetBackup
workflow 395
configuring Veritas Access to use jumbo frames
about 42
Considerations
creating a file system 211
coordinator disks
replacing 78
creating
local CIFS group 160
local CIFS user 159
OpenStack Manila file share 288
OpenStack Manila share snapshot 291
OpenStack Manila share type 287
RPO report 369
share backend on the OpenStack controller
node 286
snapshot schedules 384
snapshots 376
creating (continued)
storage pools 65
Creating a file system
considerations 211
creating and maintaining file systems
about 207
creating and scheduling a policy
scale-out file system 232
creating directory
FTP 171
creating file systems
best practices 211
ctdb clustering mode
about 155
directory-level share support 260
switching the clustering mode 156
current Ethernet interfaces and states
displaying 40
current users
displaying list 29
customizing server options
FTP 175
D
data deduplication
about 293
deduplication
use-cases 294
deduplication scenario
end-to-end 297
deduplication workflow
overview 295
default
passwords
resetting Master, System Administrator, and
Storage Administrator users 29
defining
what to replicate 361
delete local user
FTP 177
deleting
already configured SNMP management
server 201
CIFS share 270
configured mail server 195
configured NetBackup media server 396
email address from a specified group 195
email groups 195
filter from a specified group 195
deleting (continued)
home directories 154
home directory of given user 154
local CIFS group 160
local CIFS user 159
NFS options 253
route entries from routing tables of nodes in
cluster 48
severity from a specified group 195
snapshot schedules 387
syslog server 198
users 29
VLAN interfaces 43
denying
specified users and groups access to the CIFS
share 269
description of Veritas Access Replication 347
description of Veritas Access sync replication 348
destroying
I/O fencing 78
snapshots 379
storage pools 65
directories
displaying exported 252
unexporting the share 253
directory-level share support
ctdb clustering mode 260
disabling
AD trusted domains 140
creation of home directories 154
DNS settings 37
I/O fencing 78
LDAP clients
configurations 56
NetBackup SAN client 394
NIS clients 57
NTLM 127
quota limits used by snapshots 379
disk
formatting 75
removing 76
Disk quotas
CIFS 68
file systems 68
usage 68
disks
adding 74
removing 74
displaying
all the IP addresses for cluster 40
command history 26
current Ethernet interfaces and states 40
current list of SNMP management servers 201
DNS settings 37
events on the console 200
excluded files from backup 398
existing email groups or details 195
exported directories 252
file systems that can be exported 248
files moved or pruned by running a policy 338
home directory usage information 154
included files for backups 398
information for all disk devices for the nodes in a
cluster 63
LDAP client configurations 56
LDAP configured settings 53
list of current users 29
list of SmartTier systems 329
list of syslog servers 198
local CIFS group 160
local CIFS user 159
NetBackup configurations 401
NFS statistics 109
NIS-related settings 57
NSS configuration 58
policy of each tiered file system 332
routing tables of the nodes in the cluster 48
schedules for tiered file systems 337
share properties 267
snapshot quotes 379
snapshot schedules 386
snapshots 377
snapshots that can be exported 248
status of backup services 401
tier location of a specified file 336
time interval or number of duplicate events for
notifications 204
values of the configured SNMP notifications 201
values of the configured syslog server 198
VLAN interfaces 43
displaying a mapping
between CIFS and NFS users 149
displaying WWN information 71
DNS
domain names
clearing 37
DNS (continued)
name servers
clearing 37
specifying 37
settings
disabling 37
displaying 37
enabling 37
domain
setting 145
setting user name 145
domain controller
setting 145
domain name
for the DNS server
setting 37
E
email address
adding to a group 195
deleting from a specified group 195
email groups
adding 195
deleting 195
displaying existing and details 195
enabling
CIFS data migration 161
DNS settings 37
I/O fencing 78
LDAP client configurations 56
NetBackup SAN client 394
NIS settings 57
NTLM 127
quota limits used by snapshots 379
Enterprise Vault store
configuring CIFS share as secondary storage 261
Ethernet interfaces
bonding 35
event notifications
about 193
displaying time interval for 204
event reporting
setting events for 204
events
displaying on the console 200
excluded files
displaying 398
excluding directories and files
setting up 357
Excluding PCI IDs
cluster 47
export options
CIFS 262
exporting
an NFS share 248
CIFS snapshot 270
directory as a CIFS share 260
events in syslog format to a given URL 200
NFS snapshot 258
same file system/directory as a different CIFS
share 261
SNMP MIB file to a given URL 201
exporting for Kerberos authentication
NFS share 255
F
failover 303
failover and failback
about 370
FastResync
about 216
File compression attributes
about 313
File compression block size
about 313
File system mount-time
memory usage 416
file systems
SmartTier
displaying 329
that can be exported
displayed 248
filter
about 194
adding to a group 195
deleting from a specified group 195
forcefully
importing new LUNs for new or existing pools 73
formatting
a disk 75
FSS
functionality 62
FTP
about 170
add local user 177
creating directory 171
customizing server options 175
delete local user 177
FTP (continued)
local user password 177
local user set download bandwidth 179
local user set home directory 179
local user set maximum connections 179
local user set maximum disk usage 179
local user set maximum files 179
local user set upload bandwidth 179
logupload 176
server start 171
server status 171
server stop 171
session show 176
session showdetail 176
session terminate 176
show local users 177
FTP local user set
about 178
FTP set
about 172
G
group membership
managing 159
H
hiding
system files when adding a CIFS normal
share 267
history command
using 26
home directories
setting up 152
home directory file systems
setting 150
home directory of given user
deleting 154
home directory usage information
displaying 154
hostname or IP address
setting for LDAP server 53
how to use
Command-Line Interface (CLI) 24
I
I/O fencing
about 77
checking status 78
I/O fencing (continued)
destroying 78
disabling 78
enabling 78
ID mapping for NFS version 4
configuring Veritas Access 110
importing
new LUNs forcefully for new or existing pools 73
included files
for backups 398
increasing
LUN storage capacity 73
initiating host discovery of LUNs 72
installing
NetBackup client 405
instant recovery
NetBackup 394
instant rollbacks
about 389
integration of Veritas Access
with OpenStack 273
Integration with OpenStack Cinder
about 274
integration with Veritas Access
NetBackup 392
IP addresses
adding to a cluster 40
displaying for the cluster 40
modifying 40
removing from the cluster 40
IP addresses for the Ethernet interfaces
about 39
IP routing
configuring 48
iSCSI
about 82
iSCSI devices
configuring 84
iSCSI discovery
configuring 86
iSCSI initiator
configuring 83
iSCSI initiator name
configuring 83
iSCSI targets
configuring 92
J
job failover and failback
about 370
job resynchronization
configuring 368
joining
Veritas Access to Active Directory (AD) 120
K
Kerberos authentication
authenticating NFS clients 111
Kerberos realm
adding and configuring Veritas Access for 112
Kerberos share
mounting from the NFS client 256
Kernel-based
NFS server 105
L
LDAP
before configuring 52
LDAP client
configuring for authentication using the CLI 139
LDAP password hash algorithm
setting password for 53
LDAP server
clearing configured settings 53
disabling client configurations 56
displaying client configurations 56
displaying configured settings 53
enabling client configurations 56
setting over SSL 53
setting port number 53
setting the base distinguished name 53
setting the bind distinguished name 53
setting the hostname or IP address 53
setting the password hash algorithm 53
setting the root bind DN 53
setting the users, groups, and netgroups base
DN 53
leaving
AD domain 124
list of SmartTier file systems
displaying 329
listing
all of the files on the specified tier 329
free space for storage pools 65
storage pools 65
local CIFS groups
creating 160
deleting 160
displaying 160
local CIFS user
creating 159
deleting 159
displaying 159
local CIFS user password
changing 159
local user and groups
managing 159
local user password
FTP 177
local user set download bandwidth
FTP 179
local user set home directory
FTP 179
local user set maximum connections
FTP 179
local user set maximum disk usage
FTP 179
local user set maximum files
FTP 179
local user set upload bandwidth
FTP 179
logupload
FTP 176
LUN storage capacity
increasing 73
LUNs
initiating host discovery 72
M
mail server
deleting the configured mail server 195
obtaining details for 195
setting the details of external 195
man pages
how to access 414
managing
CIFS shares 260
group membership 159
local users and groups 159
managing NFS shares
using netgroups 253
mapping
of UNIX users from LDAP to Windows users 150
mapuser commands
about 148
Master, System Administrator, and Storage
Administrator users
adding 29
maximum number
parallel replication jobs 363
Memory usage
file system mount-time 416
metadata information
allowing to be written on the secondary tier 339
restricting to the primary tier only 339
migrating
CIFS share and home directory from ctdb to
normal clustering mode 157
CIFS shares and home directories 156
CIFS shares and home directories from normal
to ctdb clustering mode 157
mirrored tier
adding to a file system 324
mirrored-striped tier
adding to a file system 324
modifying
an IP address 40
CIFS share 271
policy of a tiered file system 332
schedule of a tiered file system 337
snapshot schedules 386
tunables for iSCSI 97
more command
using 27
mounting
NFS share from the NFS client 256
mounting snapshots 381
moving disks
from one storage pool to another 74
moving files between tiers
scale-out file system 226
Moving on-premises storage to cloud storage for NFS
shares
workflow 238
Multi-protocol support for NFS with S3
limitations 189
N
navigating CLI
Veritas Access 23
NetBackup
configuring 403
NetBackup (continued)
configuring NetBackup virtual IP address 400
configuring virtual name 400
displaying configurations 401
instant recovery 394
integration with Veritas Access 392
Snapshot Client 393
snapshot methods 393
NetBackup EMM server. See NetBackup Enterprise
Media Manager (EMM) server
NetBackup Enterprise Media Manager (EMM) server
adding to work with Veritas Access 396
NetBackup master server
configuring to work with Veritas Access 396
NetBackup media server
adding 396
deleting 396
NetBackup SAN client
disabling 394
enabling 394
NetBackup snapshot
backup or restore 405
network interfaces
swapping 45
NFS client
configuring for IP mapping 111
NFS clients
authenticating 111
NFS file sharing
about 247
NFS options
deleting 253
NFS server
about 104
checking on the status 108
kernel-based 105
starting 108
stopping 108
NFS Servers
switching 105
NFS share
exporting 248
exporting for Kerberos authentication 255
NFS shares
managing using netgroups 253
NFS snapshot
exporting 258
NFS statistics
displaying 109
NFS statistics (continued)
resetting 109
NFS-Ganesha server
use 105
NFS-Ganesha version 3 and version 4
recommended tuning parameters 106
NIC devices
configuring 44
NIS
clients
disabling 57
enabling 57
domain name
setting on all the nodes of cluster 57
related settings
displaying 57
server name
setting on all the nodes of cluster 57
node
in a cluster
displaying information for all disk devices 63
node or storage connection failures
when using Oracle Direct NFS 165
NSS
displaying configuration 58
lookup order
configuring 58
NTLM
disabling 127
enabling 127
O
object server 182
Object Store server
configuring 183
objectstore buckets 189
obtaining
details of the configured email server 195
OpenStack
about the integration with OpenStack 273
OpenStack Manila
integration with Veritas Access 283
OpenStack Manila file share
creating 288
OpenStack Manila share snapshot
creating 291
OpenStack Manila share type
creating 287
Oracle Direct NFS
node or storage connection failures 165
overview
deduplication workflow 295
P
password
changing a user's password 29
patterns
adding or deleting from the list of files included
in backups 399
physical and logical data on a file system 294
planned failback
process overview 372
planned failover
process overview 371
policies
about 330
displaying files moved or pruned by running 338
displaying for each tiered file system 332
modifying for a tiered file system 332
relocating from a tiered file system 335
removing from a tiered file system 332
running for a tiered file system 332
Policies for scale-out file systems
about 229
prerequisites
configuring NetBackup 393
preserving
snapshot schedules 387
Private network
configure 34
privileges
about 31
Public network
configure 34
Q
quota limits
enabling or disabling snapshot 379
quotas
CIFS home directories 69
R
read caching 342
recommended tuning parameters
NFS-Ganesha version 3 and version 4 106
registering
NetBackup master server or NetBackup EMM
server 396
Relationship between
physical and logical data on a file system 294
relocating
policy of a tiered file system 335
removing
a column from a tiered file system 326
a disk 76
disks 74
IP address from the cluster 40
mirror from a tier spanning a specified disk 327
mirror from a tier spanning a specified pool 327
mirror from a tiered file system 327
policy of a tiered file system 332
schedule of a tiered file system 337
snapshot schedules 387
tier from a file system 340
removing a mapping
between CIFS and NFS users 149
renaming
storage pools 65
replacing
coordinator disks 78
replicating file systems
setting up 355
replication destination file system behavior
about 368
replication destinations
accessing 369
replication job
displaying status 367
enabling compression 367
managing 363
show job 367
replication jobs
maximum number of parallel 363
replication unit
setting up files to exclude 357
resetting
default passwords
Master, System Administrator, and Storage
Administrator users 29
NetBackup master server or NetBackup EMM
server 396
NFS statistics 109
virtual IP address of NetBackup 400
restricting
metadata information to the primary tier only 339
roles
about 31
route entries
deleting from routing tables 48
routing tables
of the nodes in the cluster
displaying 48
RPO report
creating 369
running
policy of a tiered file system 332
S
SAN client
back up 404
scale-out file system
creating and scheduling a policy 232
move the files 226
scale-out file systems
about 208
configuring the cloud as a tier 224
data usage in the cloud tier 236
scale-out fsck
about 218
schedule
displaying for tiered file systems 337
modifying for a tiered file system 337
removing from a tiered file system 337
scheduling
replication 359
second tier
adding to a file system 324
security
standalone mode 117
security settings
AD domain mode 125
changing 118
server start
FTP 171
server status
FTP 171
server stop
FTP 171
session show
FTP 176
session showdetail
FTP 176
session terminate
FTP 176
setting
AD domain mode 122
aio_fork option 158
base distinguished name for the LDAP server 53
bind distinguished name for LDAP server 53
details of the external mail server 195
domain 145
domain controller 145
domain name for the DNS server 37
domain user name 145
events for event reporting 204
filter of the syslog server 198
home directory file systems 150
IDMAP backend to ad for access to CIFS 133
IDMAP backend to hash for accessing CIFS 132
IDMAP backend to ldap for trusted domain access
to CIFS 131
IDMAP backend to rid for access to CIFS 129
LDAP password hash algorithm 53
LDAP server hostname or IP address 53
LDAP server over SSL 53
LDAP server port number 53
LDAP users, groups, and netgroups base DN 53
NIS domain name on all the nodes of cluster 57
prior to configuring LDAP 52
root bind DN for the LDAP server 53
severity of the syslog server 198
SNMP filter notifications 201
SNMP severity notifications 201
the NIS server name on all the nodes of
cluster 57
trusted domains 128
trusted domains for the Active Directory 140
setting domain user
AD domain mode 122
setting relocation policies
best practices 334
setting security
AD domain mode 122
setting up
deduplication 297
home directories 152
replicating file systems 355
setting up a replication unit
to exclude directories and files 357
severity levels
about 194
severity levels (continued)
adding to an email group 195
severity notifications
setting 201
shadow copy
making a CIFS share aware 272
share backend
creating on the OpenStack controller node 286
share directories
CIFS and NFS protocols 243
share properties
displaying 267
shares
about 241
show local users
FTP 177
showing
snapshot schedules 387
SmartIO
about 341
snapshot methods
NetBackup 393
snapshot schedules
about 383
creating 384
deleting 387
displaying 386
modifying 386
preserving 387
removing 387
showing 387
snapshots
about 375
creating 376
destroying 379
displaying 377
displaying quotas 379
enabling or disabling quota limits 379
mounting 381
that can be exported
displayed 248
unmounting 381
SNMP
filter notifications
setting 201
management server
adding 201
deleting configured 201
displaying current list of 201
SNMP (continued)
MIB file
exporting to a given URL 201
notifications
displaying the values of 201
server
setting severity notifications 201
SNMP notifications
about 201
solid-state drives (SSDs)
about 341
source and destination clusters
communicating 351
specific workload
creating a tuned file system 217
specified group
deleting a severity from 195
specifying
DNS name servers 37
SSL
setting the LDAP server for 53
standalone mode
CIFS server status 117
CIFS service 117
security 117
starting
backup services 401
CIFS server 145
NFS server 108
Veritas Access Replication 349
starting CIFS server
AD domain mode 122
statistics
data usage in the cloud tier 236
stopping
backup services 401
NFS server 108
storage pools
creating 65
destroying 65
listing 65
listing free space 65
moving disks from one to another 74
renaming 65
storage provisioning and management
about 62
storing
account information 141
user and group accounts in LDAP 143
storing (continued)
user and group accounts locally 143
striped tier
adding to a file system 324
striped-mirror tier
adding to a file system 324
striping file systems
about 213
swapping
network interfaces 45
Switch
NFS servers 105
switching
ctdb clustering mode 156
syslog format
exporting events to a given URL 200
syslog server
adding 198
deleting 198
displaying the list of 198
displaying the values of 198
setting the filter of 198
setting the severity of 198
system files
hiding when adding a CIFS normal share 267
T
tier
adding a tier to a file system 327
displaying location of a specified file 336
listing all of the specified files on 329
removing a mirror from 327
removing a mirror spanning a specified pool 327
removing from a file system 340
removing from a tier spanning a specified
disk 327
trusted domains
allowing access to CIFS when setting an IDMAP
backend to ad 133
allowing access to CIFS when setting an IDMAP
backend to hash 132
allowing access to CIFS when setting an IDMAP
backend to ldap 131
allowing access to CIFS when setting an IDMAP
backend to rid 129
specifying which are allowed access to the CIFS
server 128
tunables for iSCSI
modifying 97
tuned file system
creating for a specific workload 217
U
unexporting
share of exported directory 253
uninstalling
NetBackup client 405
UNIX users from LDAP to Windows users
automatic mapping of 150
unmounting snapshots 381
unplanned failback
process overview 373
unplanned failover
process overview 373
Use
NFS-Ganesha server 105
Use case
configuring the Object Store server 183
Use cases
compressing files 314
use-cases
deduplication 294
user and group accounts in LDAP
storing 143
user and group accounts locally
storing 143
user names
mapping for CIFS/NFS sharing 147
user roles and privileges
about 31
users
adding 29
changing passwords 29
deleting 29
using
AD interface 126
history command 26
more command 27
multi-domain controller support in CIFS 124
Using compression
best practices 314
Using NFS Server 104
V
verifying
Veritas Access has joined Active Directory
(AD) 121
Veritas Access
about 17
accessing the CLI 22
integration with OpenStack Manila 283
key features 17
product documentation 413
Veritas Access deduplication feature
best practices for using 296
Veritas Access file systems
read caching 342
writeback caching 343
Veritas Access file-level replication
about 346
Veritas Access Replication
compression 367
description of feature 347
scheduling 359
starting 349
Veritas Access SmartTier
about 322
Veritas Access sync replication
about 348
description of feature 348
Veritas Access to the Kerberos realm
adding and configuring 112
virtual IP address
configuring or changing for NetBackup 400
virtual name
configuring for NetBackup 400
VLAN
adding interfaces 43
configuring interfaces 43
deleting interfaces 43
displaying interfaces 43
W
what to replicate
defining 361
Windows Active Directory
configuring as an IDMAP backend 134
Workflow
configuring Veritas Access for NetBackup 395
moving on-premises storage to cloud storage for
NFS shares 238
workflow
object server 182
writeback caching 343
WWN information
displaying 71
