OES 11 SP2: NSS File System Administration Guide for Linux

www.novell.com/documentation
Novell Storage Services File System Administration Guide for Linux
Open Enterprise Server 11 SP2
January 2014
Legal Notices
Novell, Inc., makes no representations or warranties with respect to the contents or use of this documentation, and specifically
disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, Novell, Inc.,
reserves the right to revise this publication and to make changes to its content, at any time, without obligation to notify any
person or entity of such revisions or changes.
Further, Novell, Inc., makes no representations or warranties with respect to any software, and specifically disclaims any
express or implied warranties of merchantability or fitness for any particular purpose. Further, Novell, Inc., reserves the right
to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of
such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade
laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or
classification to export, re-export or import deliverables. You agree not to export or re-export to entities on the current U.S.
export exclusion lists or to any embargoed or terrorist countries as specified in the U.S. export laws. You agree to not use
deliverables for prohibited nuclear, missile, or chemical biological weaponry end uses. See the Novell International Trade
Services Web page (http://www.novell.com/info/exports/) for more information on exporting Novell software. Novell assumes
no responsibility for your failure to obtain any necessary export approvals.
Copyright © 2005–2014 Novell, Inc. All rights reserved. No part of this publication may be reproduced, photocopied, stored
on a retrieval system, or transmitted without the express written consent of the publisher.
Novell, Inc.
1800 South Novell Place
Provo, UT 84606
U.S.A.
www.novell.com
Online Documentation: To access the latest online documentation for this and other Novell products, see the Novell
Documentation Web page (http://www.novell.com/documentation/oes11/).
Novell Trademarks
For Novell trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/
tmlist.html).
Third-Party Materials
All third-party trademarks are the property of their respective owners.
Contents
About This Guide . . . . . . . . . . 19
1 Overview of NSS . . . . . . . . . . 23
1.1 Introduction to NSS . . . . . . . . . . 23
1.2 Benefits of NSS . . . . . . . . . . 23
1.3 Understanding NSS . . . . . . . . . . 24
    1.3.1 Storage Pools . . . . . . . . . . 24
    1.3.2 NSS Volumes . . . . . . . . . . 25
1.4 NSS Features and Capabilities . . . . . . . . . . 25
    1.4.1 Use Less Memory and Gain More Speed . . . . . . . . . . 26
    1.4.2 Improve Storage Availability . . . . . . . . . . 26
    1.4.3 Prevent Unauthorized Access . . . . . . . . . . 26
    1.4.4 Protect Data from Corruption or Loss . . . . . . . . . . 27
    1.4.5 Maximize Available Space . . . . . . . . . . 27
1.5 Comparison of NSS to Other File Systems . . . . . . . . . . 28
1.6 What’s Next . . . . . . . . . . 28
2 What’s New or Changed in NSS . . . . . . . . . . 29
2.1 What’s New or Changed in NSS (OES 11 SP2) . . . . . . . . . . 29
2.2 What’s New or Changed in NSS (OES 11 SP1) . . . . . . . . . . 30
2.3 What’s New or Changed in NSS (OES 11) . . . . . . . . . . 31
3 Installing and Configuring Novell Storage Services . . . . . . . . . . 33
3.1 Requirements for Installing NSS . . . . . . . . . . 33
    3.1.1 Device Requirements . . . . . . . . . . 33
    3.1.2 Requirements for NSS . . . . . . . . . . 34
    3.1.3 Requirements for Storage-Related iManager Plug-Ins . . . . . . . . . . 34
3.2 Installing and Configuring NSS . . . . . . . . . . 34
    3.2.1 Selecting the NSS Pattern During the OES 11 SP2 Install . . . . . . . . . . 34
    3.2.2 Installing NSS on an Existing OES 11 SP2 Server . . . . . . . . . . 36
    3.2.3 Enabling or Disabling NSS . . . . . . . . . . 37
3.3 Upgrading the Media Format for Hard Link Support . . . . . . . . . . 37
3.4 Enabling Users for Linux Utilities and Services . . . . . . . . . . 37
3.5 Updating NSS on OES 11 SP2 . . . . . . . . . . 38
    3.5.1 Parameter Settings . . . . . . . . . . 38
    3.5.2 Reboot Server or Restart jstcpd, adminusd, and volmnd . . . . . . . . . . 38
    3.5.3 Storage-Related Plug-Ins . . . . . . . . . . 39
3.6 Upgrading from OES 2 to OES 11 SP2 . . . . . . . . . . 39
    3.6.1 Parameter Settings . . . . . . . . . . 39
    3.6.2 Read Ahead Blocks Setting . . . . . . . . . . 39
4 Upgrading the NSS Media Format . . . . . . . . . . 41
4.1 Guidelines for Upgrading the Media Format of NSS Volumes . . . . . . . . . . 41
    4.1.1 Cross-Platform Support for the NSS Media Upgrade . . . . . . . . . . 41
    4.1.2 Which NSS Volumes to Upgrade . . . . . . . . . . 42
    4.1.3 Before Upgrading the Media Format . . . . . . . . . . 42
    4.1.4 After Upgrading the Media Format . . . . . . . . . . 43
    4.1.5 If You Do Not Upgrade the Media Format . . . . . . . . . . 43
4.2 Upgrading the Media Format for Existing NSS Volumes . . . . . . . . . . 43
    4.2.1 Upgrading the Media Format . . . . . . . . . . 43
    4.2.2 Enabling Hard Links After the Media Upgrade . . . . . . . . . . 44
4.3 Upgrading the Media Format Automatically for New NSS Volumes . . . . . . . . . . 45
5 Planning NSS Storage Solutions . . . . . . . . . . 47
5.1 Guidelines for NSS Storage . . . . . . . . . . 47
    5.1.1 Devices . . . . . . . . . . 47
    5.1.2 Software RAID Devices . . . . . . . . . . 48
    5.1.3 Device Partitions . . . . . . . . . . 48
    5.1.4 NSS Pools and Volumes . . . . . . . . . . 48
    5.1.5 NSS Encrypted Volumes . . . . . . . . . . 49
    5.1.6 Storage Features . . . . . . . . . . 49
5.2 Compatibility and Interoperability Issues for NSS . . . . . . . . . . 50
5.3 Creating NSS Storage Objects in eDirectory . . . . . . . . . . 50
5.4 Naming NSS Storage Objects . . . . . . . . . . 50
    5.4.1 Case Insensitive Names . . . . . . . . . . 50
    5.4.2 Number of Characters Allowed . . . . . . . . . . 51
    5.4.3 Conventions for Valid Names of NSS Storage Objects . . . . . . . . . . 51
    5.4.4 Other Naming Guidelines . . . . . . . . . . 52
5.5 Access Control for NSS . . . . . . . . . . 53
    5.5.1 Administrator User and Root User Roles for NSS . . . . . . . . . . 53
    5.5.2 NSS File System Users . . . . . . . . . . 55
    5.5.3 Novell Trustee Model . . . . . . . . . . 56
    5.5.4 POSIX Permissions . . . . . . . . . . 56
    5.5.5 How NSS Uses Novell Linux User Management . . . . . . . . . . 57
5.6 File Access for Users . . . . . . . . . . 57
    5.6.1 NCP . . . . . . . . . . 58
    5.6.2 Novell AFP . . . . . . . . . . 58
    5.6.3 Novell CIFS . . . . . . . . . . 58
    5.6.4 Novell Domain Services for Windows . . . . . . . . . . 58
    5.6.5 Samba . . . . . . . . . . 59
    5.6.6 SSH (Secure Shell) . . . . . . . . . . 59
    5.6.7 Accessing Files with Linux Services, Utilities, and Protocols . . . . . . . . . . 59
5.7 Antivirus Support for NSS . . . . . . . . . . 60
5.8 Backup Support for NSS . . . . . . . . . . 60
5.9 NSS Support for Memory Mapped Files . . . . . . . . . . 60
6 Using NSS in a Virtualization Environment . . . . . . . . . . 61
6.1 Guidelines for Using NSS in a Xen Virtualization Environment . . . . . . . . . . 61
    6.1.1 Host Server Issues . . . . . . . . . . 61
    6.1.2 Virtual Machine Issues . . . . . . . . . . 63
    6.1.3 Guest Server Issues . . . . . . . . . . 64
6.2 Installing OES 11 SP2 on a Virtual Machine . . . . . . . . . . 65
6.3 Initializing New Virtual Disks on the Guest Server . . . . . . . . . . 65
6.4 What’s Next . . . . . . . . . . 67
7 Cross-Platform Issues for NSS . . . . . . . . . . 69
7.1 Cross-Platform Issues for NSS Pool Snapshots . . . . . . . . . . 69
7.2 Cross-Platform Issues for NSS Volumes . . . . . . . . . . 69
7.3 Cross-Platform Issues for NSS Features . . . . . . . . . . 70
    7.3.1 Multipath I/O to Devices . . . . . . . . . . 70
    7.3.2 Removable Media . . . . . . . . . . 70
    7.3.3 Transaction Tracking System . . . . . . . . . . 70
7.4 Cross-Platform Issues for File Access . . . . . . . . . . 71
7.5 Cross-Platform Issues for Management Tools . . . . . . . . . . 71
    7.5.1 Storage-Related Plug-Ins for Novell iManager 2.7 . . . . . . . . . . 71
    7.5.2 Interoperability of Protocols for the iManager Server and Target Server . . . . . . . . . . 71
    7.5.3 Management Capabilities for Software RAIDs . . . . . . . . . . 71
8 Cluster-Enabling Shared NSS Devices and Pools with Novell Cluster Services . . . . . . . . . . 73
8.1 Cluster-Enabling NSS Pools and Volumes . . . . . . . . . . 73
8.2 Guidelines for Cluster-Enabling NSS . . . . . . . . . . 73
9 Management Tools for NSS . . . . . . . . . . 77
9.1 Novell iManager and Storage-Related Plug-Ins . . . . . . . . . . 77
    9.1.1 Understanding Storage-Related Plug-Ins . . . . . . . . . . 78
    9.1.2 Prerequisites for Using the Storage-Related Plug-Ins . . . . . . . . . . 81
    9.1.3 Downloading and Installing Plug-In Updates . . . . . . . . . . 83
    9.1.4 Accessing Novell iManager . . . . . . . . . . 83
    9.1.5 Accessing Roles and Tasks in iManager . . . . . . . . . . 83
    9.1.6 Selecting a Server to Manage . . . . . . . . . . 84
    9.1.7 Storage Plug-In Quick Reference . . . . . . . . . . 84
    9.1.8 Files and Folders Plug-In Quick Reference . . . . . . . . . . 90
9.2 NSS Management Utility (NSSMU) Quick Reference . . . . . . . . . . 92
9.3 NSS Commands and Utilities . . . . . . . . . . 96
    9.3.1 Command Consoles . . . . . . . . . . 96
    9.3.2 NSS Commands . . . . . . . . . . 97
    9.3.3 NSS Utilities . . . . . . . . . . 97
9.4 Novell NetStorage . . . . . . . . . . 97
    9.4.1 Prerequisites . . . . . . . . . . 98
    9.4.2 Accessing NetStorage . . . . . . . . . . 98
    9.4.3 Configuring File System Trustees, Trustee Rights, and Attributes . . . . . . . . . . 98
    9.4.4 Purging and Salvaging Deleted Files . . . . . . . . . . 99
    9.4.5 Browsing Directories and Files . . . . . . . . . . 99
    9.4.6 Additional Information . . . . . . . . . . 99
9.5 Novell Remote Manager . . . . . . . . . . 99
    9.5.1 Prerequisites for Using Novell Remote Manager . . . . . . . . . . 99
    9.5.2 Novell Remote Manager . . . . . . . . . . 100
    9.5.3 Accessing Novell Remote Manager . . . . . . . . . . 101
    9.5.4 Starting, Stopping, or Restarting Novell Remote Manager . . . . . . . . . . 101
9.6 Novell Client . . . . . . . . . . 101
    9.6.1 Novell Client . . . . . . . . . . 102
    9.6.2 Novell Client for Windows XP/2003 and Vista . . . . . . . . . . 102
9.7 ConsoleOne . . . . . . . . . . 102
9.8 Virtual File Services, APIs, and Scripts . . . . . . . . . . 103
9.9 Novell Linux Volume Manager (NLVM) . . . . . . . . . . 103
10 Managing Devices . . . . . . . . . . 105
10.1 Understanding Devices . . . . . . . . . . 105
    10.1.1 Device Size . . . . . . . . . . 105
    10.1.2 Device Types . . . . . . . . . . 106
    10.1.3 Device Details . . . . . . . . . . 108
10.2 Viewing a List of Devices on a Server . . . . . . . . . . 109
10.3 Viewing Details for a Device . . . . . . . . . . 110
10.4 Scanning for Devices . . . . . . . . . . 111
    10.4.1 Scanning for Devices using iManager . . . . . . . . . . 112
10.5 Initializing a Disk . . . . . . . . . . 112
10.6 Sharing Devices . . . . . . . . . . 113
    10.6.1 Understanding Sharing . . . . . . . . . . 114
    10.6.2 Planning for Device Sharing . . . . . . . . . . 114
    10.6.3 Configuring the Device’s Share State . . . . . . . . . . 114
10.7 Viewing Partitions on a Device . . . . . . . . . . 115
10.8 Viewing Pools on a Device . . . . . . . . . . 116
10.9 Enabling Write-Through Cache Management on SCSI Devices and RAID Controllers . . . . . . . . . . 117
10.10 What’s Next . . . . . . . . . . 118
11 Migrating NSS Devices to OES 11 SP2 . . . . . . . . . . 119
11.1 Guidelines for Moving Devices from NetWare 6.5 SP8 to OES 11 SP2 . . . . . . . . . . 119
    11.1.1 Media Format . . . . . . . . . . 119
    11.1.2 Pool Snapshots . . . . . . . . . . 120
    11.1.3 Cross-Platform Issues . . . . . . . . . . 120
11.2 Moving Non-Clustered Devices From NetWare 6.5 SP8 Servers to OES 11 SP2 . . . . . . . . . . 120
    11.2.1 Prerequisites . . . . . . . . . . 121
    11.2.2 Setting Up File Access For Users on the OES 11 SP2 Server . . . . . . . . . . 121
    11.2.3 Decommissioning Each NSS Pool and Its Volumes on the Original Server . . . . . . . . . . 122
    11.2.4 Recommissioning Each NSS Pool and Its Volumes on the Destination Server . . . . . . . . . . 123
    11.2.5 Using Scripts to Decommission and Recommission NSS Volumes . . . . . . . . . . 124
11.3 Moving Non-Clustered Devices From NetWare 6.5 SP8 to OES 11 SP2 . . . . . . . . . . 125
    11.3.1 Prerequisites . . . . . . . . . . 125
    11.3.2 Setting Up File Access For Users on the OES 11 SP2 Server . . . . . . . . . . 126
    11.3.3 Decommissioning Each NSS Pool and Its Volumes on the Original Server . . . . . . . . . . 127
    11.3.4 Recommissioning Each NSS Pool and Its Volumes on the OES Server . . . . . . . . . . 127
11.4 Moving Clustered Devices with NSS Volumes OES 11 SP2 . . . . . . . . . . 128
12 Managing Partitions . . . . . . . . . . 129
12.1 Understanding Partitions . . . . . . . . . . 129
    12.1.1 NSS Partitions . . . . . . . . . . 129
    12.1.2 Understanding Types of Partitions . . . . . . . . . . 129
    12.1.3 Understanding Partition Details . . . . . . . . . . 130
12.2 Viewing a List of Partitions . . . . . . . . . . 131
    12.2.1 Viewing Partitions on the Server . . . . . . . . . . 131
    12.2.2 Viewing Partitions on a Device . . . . . . . . . . 131
    12.2.3 Viewing Partitions in a Software RAID Device . . . . . . . . . . 132
    12.2.4 Viewing Partitions in an NSS Pool . . . . . . . . . . 132
12.3 Viewing Details for a Partition . . . . . . . . . . 132
12.4 Deleting an NSS Partition . . . . . . . . . . 133
    12.4.1 Deleting Partitions in a Pool . . . . . . . . . . 133
    12.4.2 Deleting Partitions in an NSS Software RAID Device . . . . . . . . . . 133
12.5 Adding Other Types of File System Partitions . . . . . . . . . . 134
13 Managing NSS Software RAID Devices . . . . . . . . . . 137
13.1 Understanding Software RAID Devices . . . . . . . . . . 137
13.2 Planning for a Software RAID Device . . . . . . . . . . 139
    13.2.1 General Guidelines for Software RAID Devices . . . . . . . . . . 139
    13.2.2 Guidelines for Software RAID 1 Devices . . . . . . . . . . 140
    13.2.3 Drive Restrictions for NSS Software RAID 0, 1 and 5 Devices . . . . . . . . . . 140
    13.2.4 Choosing a Software RAID Solution . . . . . . . . . . 141
    13.2.5 Determining the Partition Size . . . . . . . . . . 142
    13.2.6 Determining the Number of Partitions . . . . . . . . . . 142
    13.2.7 Determining the Stripe Size for RAID 0 and RAID 5 . . . . . . . . . . 143
13.3 Viewing a List of Software RAID Devices on a Server . . . . . . . . . . 143
13.4 Viewing Details of a Software RAID Device . . . . . . . . . . 144
13.5 Creating Software RAID Devices with iManager . . . . . . . . . . 145
13.6 Creating Software RAID Devices with NSSMU . . . . . . . . . . 148
13.7 Mirroring an Existing Pool with NSSMU . . . . . . . . . . 149
13.8 Recovering a Mirror where All Elements Report ‘Not in Sync’ Using NSSMU . . . . . . . . . . 150
13.9 Creating a RAID 1 Mirror to Duplicate Data . . . . . . . . . . 150
13.10 Creating a Software RAID 0+1 with NSSMU . . . . . . . . . . 151
13.11 Creating a Software RAID 5+1 with NSSMU . . . . . . . . . . 151
13.12 Renaming a Software RAID Device . . . . . . . . . . 152
13.13 Increasing the Size of a Software RAID Device . . . . . . . . . . 152
13.14 Restriping a Software RAID . . . . . . . . . . 154
13.15 Replacing a Failed Segment in a Software RAID . . . . . . . . . . 155
13.16 Deleting a Software RAID Device . . . . . . . . . . 157
13.17 Viewing Pools on a Software RAID Device . . . . . . . . . . 158
13.18 Viewing Partitions on a Software RAID Device . . . . . . . . . . 158
13.19 Deleting Partitions on a Software RAID Device . . . . . . . . . . 159
13.20 Managing Software RAID Devices with NSSMU . . . . . . . . . . 159
14 Managing Multipath I/O to Devices . . . . . . . . . . 161
14.1 Understanding Multipath I/O . . . . . . . . . . 161
14.2 NSS Errors When Linux Multipath Is Not Configured . . . . . . . . . . 162
14.3 Configuring Multipath . . . . . . . . . . 162
15 Managing NSS Pools 163
15.1 Guidelines for Creating a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
15.2 Creating a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
15.3 Activating and Deactivating Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
15.4 Increasing the Size of a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
15.5 Renaming a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
15.6 Deleting a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
     15.6.1 Prerequisites for Deleting a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
     15.6.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
15.7 Viewing Pools on a Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
15.8 Viewing Pool Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
15.9 Viewing Partition Information for a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
15.10 Viewing Volume Information for a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
15.11 Viewing Device Information for a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
15.12 Moving a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
15.13 Preventing Pools from Activating on Multiple Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
     15.13.1 Understanding MSAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
     15.13.2 Enabling or Disabling MSAP for All NSS Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
     15.13.3 Enabling or Disabling MSAP for a Given NSS Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
     15.13.4 Rebuilding the MSAP Block for a Given NSS Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
     15.13.5 Determining If MSAP Is Enabled or Disabled on NSS Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
     15.13.6 Managing MSAP with XML or APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
     15.13.7 Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
15.14 Updating eDirectory Pool Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
15.15 Updating eDirectory for Shared Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
15.16 What’s Next . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
16 Verifying and Rebuilding NSS Pools and Volumes 183
16.1 When to Use Verify and Rebuild . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
     16.1.1 What Verifying Does Not Do . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
     16.1.2 What Rebuilding Does Not Do . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
     16.1.3 Before Verifying a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
     16.1.4 Before Rebuilding a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
16.2 Verifying and Rebuilding an NSS Pool and Its Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
     16.2.1 Mounting the Volume to Repair Journaled Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
     16.2.2 Ruling Out Hardware Causes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
     16.2.3 Verifying the Pool to Identify Metadata Inconsistencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
     16.2.4 Reviewing Log Files for Metadata Consistency Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
     16.2.5 Rebuilding NSS Pools to Repair Metadata Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
16.3 ReZIDing Volumes in an NSS Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
     16.3.1 What Is a ZID? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
     16.3.2 Understanding ReZID . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
     16.3.3 When to ReZID . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
     16.3.4 Viewing the Highest ZID for a Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
     16.3.5 ReZIDing Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
17 Managing NSS Pool Snapshots 193
17.1 Understanding Pool Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
17.1.1 How Snapshots Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
17.1.2 Benefits of Using Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
17.2 Guidelines for Using and Managing Pool Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
17.2.1 Differences Between Snapshots on Linux and NetWare . . . . . . . . . . . . . . . . . . . . . . . . . . 196
17.2.2 Guidelines for Creating a Pool Snapshot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
17.2.3 Guidelines for Creating Pool Snapshots of Clustered Pools . . . . . . . . . . . . . . . . . . . . . . . 197
17.2.4 Guidelines for Naming Pool Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
17.2.5 Guidelines for Choosing the Stored-On Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
17.2.6 Guidelines for Maintaining the Stored-On Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
17.2.7 Guidelines for Onlining Pool Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
17.2.8 Guidelines for Deleting Pool Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
17.3 Creating a New Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
17.3.1 Prerequisites for Creating a Pool Snapshot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
17.3.2 Using iManager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
17.3.3 Using NSSMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
17.4 Viewing a List of Snapshots for a Given Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
17.5 Viewing Pool Snapshot Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
17.6 Modifying the Stored-On Location for Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
17.7 Onlining or Offlining a Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
17.7.1 Using iManager to Online a Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
17.7.2 Using iManager to Offline a Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
17.7.3 Using NSSMU to Online or Offline a Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
17.8 Viewing and Managing an Online Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
17.9 Restoring Data from an Online Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
17.10 Deleting a Pool Snapshot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
17.10.1 Using iManager to Delete a Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
17.10.2 Using NSSMU to Delete a Pool Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
18 Managing NSS Volumes 209
18.1 Understanding Volume Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
     18.1.1 Volume Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
     18.1.2 Encryption Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
     18.1.3 Enhanced Hard Link Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
18.2 Guidelines for NSS Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
     18.2.1 Guidelines for Sizing Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
     18.2.2 Guidelines for Name Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
     18.2.3 Guidelines for NSS Volumes in a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
     18.2.4 Guidelines for NSS Volumes in a Mixed-Node Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
18.3 Creating Unencrypted NSS Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
18.4 Configuring Encrypted NSS Volumes with NSSMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
18.5 Updating eDirectory Volume Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
18.6 Viewing the Details of an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
18.7 Viewing Properties of an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
18.8 Modifying Attributes of an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
18.9 Modifying the NSS Volume Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
18.10 Configuring the Name Space for an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
18.11 Mounting NSS Volumes with Linux Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
18.12 Renaming an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
18.13 Renaming (Modifying) the Mount Point for an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
     18.13.1 Renaming the Mount Point for a New Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
     18.13.2 Enabling the Mount Point for the NSS Volume to Be Renamed . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
     18.13.3 Renaming the Mount Point for an Existing NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
18.14 Activating and Deactivating an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
18.15 Mounting and Dismounting an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
     18.15.1 Dismounting an NSS Volume from the NCP Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
     18.15.2 Mounting or Dismounting an NSS Volume with iManager . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
     18.15.3 Mounting an Encrypted NSS Volume with NSSMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
     18.15.4 Dismounting an Encrypted NSS Volume with NSSMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
18.16 Exporting and Importing NSS Volumes for NFS Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
     18.16.1 Understanding NFS Export and Mount Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
     18.16.2 Exporting NSS Volumes for NFSv3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
     18.16.3 Importing NSS Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
18.17 Deleting an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
18.18 Finding the Filename for a Given ZID . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
18.19 Verifying or Rebuilding NSS Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
18.20 Moving Volumes with DFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
18.21 Splitting Volumes with DFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
18.22 What’s Next . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
19 Managing Encrypted NSS Volumes 239
19.1 Understanding Encrypted Volume Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
     19.1.1 Encryption Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
     19.1.2 Encryption Password . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
     19.1.3 How Encrypted Volume Support Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
     19.1.4 Guidelines for Using Encrypted Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
19.2 Security Considerations for Encrypted Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
     19.2.1 Choosing a Strong Encryption Password . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
     19.2.2 Backing Up Data from an Encrypted Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
     19.2.3 Excluding the NSS Cache Memory from Core Dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
     19.2.4 Disabling Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
     19.2.5 Archiving Files from an Encrypted Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
     19.2.6 Using Direct I/O to an Encrypted Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
     19.2.7 Sharing Encrypted NSS Volumes in a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
19.3 Creating an Encrypted Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
19.4 Mounting an Encrypted NSS Volume with NSSMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
19.5 Mounting Encrypted NSS Volumes with NSS Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
19.6 Dismounting an Encrypted NSS Volume with NSSMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
19.7 Using Encrypted Volumes in a Server Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
19.8 Removing Encrypted Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
19.9 What’s Next . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
20 Securing Access to NSS Volumes, Directories, and Files 247
20.1 Configuring File System Trustees, Trustee Rights, Inherited Rights Filters, and Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
     20.1.1 Prerequisites for Configuring Trustees . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
     20.1.2 Viewing Properties of a File or Folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
     20.1.3 Configuring File or Folder Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
     20.1.4 Configuring Rights Properties (File System Trustees, Trustee Rights, and Inherited Rights Filter) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
     20.1.5 Viewing Effective Rights for a Trustee . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
     20.1.6 Managing Rights . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
20.2 Configuring the Security Equivalence Vector Update Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
     20.2.1 Understanding the SEV . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
     20.2.2 Enabling or Disabling the Background SEV Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
     20.2.3 Configuring the Background SEV Update Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
     20.2.4 Forcing a Background SEV Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
20.3 Using Data Shredding to Prevent Access to Purged Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
     20.3.1 Setting the Data Shredding Attribute When You Create a Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
     20.3.2 Setting the Data Shredding Attribute for an Existing Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
     20.3.3 Disabling Data Shredding for an Existing Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
20.4 Enabling or Disabling LAF Audit Log Messages for Trustee Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
     20.4.1 Understanding NSS Audit Log Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
     20.4.2 Enabling or Disabling LAF Audit Messages for Trustee Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
     20.4.3 Viewing LAF Audit Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
     20.4.4 Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
21 Managing Compression on NSS Volumes 263
21.1 Understanding Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
21.1.1 Compression and Decompression Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
21.1.2 Compression Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
21.1.3 Guidelines for Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
21.1.4 Factors Affecting Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
21.1.5 Factors Affecting Decompression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
21.1.6 Monitoring Compression Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
21.2 Configuring Compression for a Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
21.2.1 Understanding Server-Level Compression Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 268
21.2.2 Configuring Server-Level Compression Parameters with Commands. . . . . . . . . . . . . . . . 272
21.3 Configuring a Volume for Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
21.3.1 Enabling Compression for a New Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
21.3.2 Enabling Compression for an Existing Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
21.4 Suspending Compression for Volumes or Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
21.5 Disabling Compression for a Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
21.6 Restoring Data to an Uncompressed Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
21.7 Configuring Compression Preferences for Directories and Files . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
21.7.1 Using the Novell Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
21.7.2 Using ATTRIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
21.8 Using NSS Commands to Configure and Monitor Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
21.9 Repairing Compressed Volumes with the Compfix Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
21.10 Backing Up Compressed Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
22 Managing Space Quotas for Volumes, Directories, and Users 281
22.1 Understanding Space Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
22.2 Managing NSS Volume Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
22.3 Managing Directory Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
     22.3.1 Enabling or Disabling the Directory Quotas Attribute for an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
     22.3.2 Configuring Directory Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
     22.3.3 Removing a Directory Quota . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
     22.3.4 Removing All Directory Quotas for an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
22.4 Managing User Space Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
     22.4.1 Setting the User Space Quotas Attribute for an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
     22.4.2 Viewing User Space Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
     22.4.3 Configuring a User Space Quota . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
     22.4.4 Modifying a User Space Quota . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
     22.4.5 Deleting a User Space Quota . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
     22.4.6 Configuring User Space Quotas on Volumes After Upgrading or Migrating from OES 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
23 Salvaging and Purging Deleted Volumes, Directories, and Files 295
23.1 Understanding the NSS Salvage System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
     23.1.1 Volume Salvage versus File Salvage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
     23.1.2 Trustees and Rights Handling for Salvaged Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
     23.1.3 Understanding Purging Triggers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
23.2 Configuring the Purge Behavior for NSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
     23.2.1 Setting the Purge Delay for All Deleted Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
     23.2.2 Setting the Immediate Purge of Deleted Files for All NSS Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
     23.2.3 Setting the Low and High Salvage Watermarks for Automatically Purging Deleted Directories and Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
     23.2.4 Setting the Purge Immediate Attribute for a Directory or File . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
23.3 Enabling or Disabling the Salvage Attribute for an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
23.4 Viewing, Salvaging, or Purging Deleted NSS Volumes in a Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
23.5 Salvaging or Purging Deleted Files with iManager . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
     23.5.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
     23.5.2 Salvaging the Deleted Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
     23.5.3 Purging the Deleted Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
23.6 Salvaging or Purging Deleted Files with Other Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
     23.6.1 Using NetStorage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
     23.6.2 Using the Novell Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
24 Managing Hard Links 307
24.1 Understanding Hard Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
     24.1.1 Hard Links and the Primary Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
     24.1.2 Hard Links and Linux Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
     24.1.3 Hard Links and File Ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
     24.1.4 Hard Links and File System Trustees . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
     24.1.5 Hard Links and Directory Space Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
     24.1.6 Hard Links and User Space Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
     24.1.7 Hard Links and the Hard Links Attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
     24.1.8 Hard Links and File Salvage and Purge . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
     24.1.9 Hard Links and DFS Move Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
24.2 Upgrading the Media Format for Hard Links Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
     24.2.1 New Metadata Structure Supports Up to 65,535 Hard Links for a File . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
     24.2.2 Old Metadata Structure Supports Limited Hard Links for a File . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
24.3 Enabling or Disabling the Hard Links Attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
     24.3.1 Prerequisite . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
     24.3.2 Hard Links Attribute Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
     24.3.3 Viewing the Hard Link Attribute Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
24.4 Creating a Hard Link Using Ln on an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
24.5 Creating a Hard Link Using a zLink API . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
24.6 Creating a Hard Link for Testing Purposes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
     24.6.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
     24.6.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
24.7 Viewing Hard Links for a File . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
24.8 Deleting a Hard Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
24.9 Deleting a Primary Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
25 Managing Files and Folders on NSS Volumes 317
25.1 Creating a Folder on an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
     25.1.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
     25.1.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
25.2 Deleting a File or Folder on an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
     25.2.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
     25.2.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
25.3 Uploading Files to an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
     25.3.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
     25.3.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
25.4 Downloading Files from an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
     25.4.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
     25.4.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
25.5 Renaming a File on an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
     25.5.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
     25.5.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
25.6 Moving a File to a Different Folder on an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
     25.6.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
     25.6.2 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
25.7 Viewing or Modifying File or Folder Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
25.8 Viewing or Modifying File Ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
25.9 Viewing, Adding, Modifying, or Removing a Directory Quota . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
26 Managing Backup and Restore for Data and Trustee Information                             329
26.1 Using Novell Storage Management Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
26.2 Using the Event File List to Refine the Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
26.3 Using METAMIG to Save and Restore Trustee Information on NSS and Linux POSIX File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
26.4 Using Extended Attributes (xAttr) Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
26.4.1 Enabling NSS Support for Linux xAttr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
26.4.2 Disabling NSS Support for Linux xAttr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
26.4.3 Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
26.5 Backing Up Files Without Altering the Access Time . . . . . . . . . . . . . . . . . . . . . . . . . . 332
26.6 Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332

27 Tuning NSS Performance                                                                   333
27.1 Do I Need to Tune NSS? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
27.2 Tuning Cache Buffers for NSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
27.2.1 Understanding How NSS Uses Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
27.2.2 Setting the Minimum Number of Cache Buffers to Use for Kernel Memory . . 334
27.2.3 Setting the Name Cache Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
27.3 Configuring or Tuning Group I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
27.3.1 Viewing the Metadata Area Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
27.3.2 Configuring the Journal Group Write Timer . . . . . . . . . . . . . . . . . . . . . . . . . . 336
27.3.3 Configuring the Metadata Group Write Timer and Limit . . . . . . . . . . . . . . . . . 336
27.3.4 Configuring the User Data Group Write Timer . . . . . . . . . . . . . . . . . . . . . . . . 338
27.3.5 Viewing Group Write Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
27.4 Tuning I/O Schedulers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
27.4.1 Single I/O Context (SIOC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
27.4.2 REQ_NOIDLE (Submit Request Without Idling) . . . . . . . . . . . . . . . . . . . . . . 340
27.4.3 Deadline Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
27.4.4 Completely Fair Queuing (CFQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
28 Monitoring the Status of the NSS File System and Services                                341
28.1 Monitoring Status of NSS Devices, Pools, and Volumes with iManager . . . . . . . . . . 341
28.2 Monitoring Compression and Salvage Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
28.3 Monitoring Quotas and Space Usage for NSS Pools and Volumes . . . . . . . . . . . . . . 344
28.4 Monitoring File System Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
28.4.1 Using iManager to Monitor NSS File System Parameters . . . . . . . . . . . . . . . 346
28.4.2 Using Novell Remote Manager to Browse Directories and Files . . . . . . . . . . 346
28.4.3 Using Novell NetStorage to Monitor NSS File System Parameters . . . . . . . . 346
29 Troubleshooting the NSS File System                                                      347
29.1 Cannot Connect to Target Servers from iManager . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
29.2 Cannot See NSS Devices, Pools, or Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
29.3 eDirectory Errors When Creating NSS Pools or Volumes . . . . . . . . . . . . . . . . . . . . . . 348
29.3.1 eDirectory Error 613 When Creating an NSS Pool or Volume . . . . . . . . . . . . 348
29.3.2 eDirectory Error 672 When Creating an NSS Pool . . . . . . . . . . . . . . . . . . . . 348
29.3.3 eDirectory Error 601 When Creating an NSS Volume . . . . . . . . . . . . . . . . . . 349
29.4 File Compression Is Not Working . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
29.5 Linux Fails to Execute Dismount, Umount, or Delete Commands for NSS Volumes . . . 350
29.6 Multipath Devices Are Not Resolved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
29.7 NSS Volume Disappears from the NCP Console (ncpcon) . . . . . . . . . . . . . . . . . . . . . 350
29.8 Pathname Exceeds 255-Character Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
29.9 Server Hangs When Using an NSS Volume as a Netatalk Share . . . . . . . . . . . . . . . . 351
29.10 Slow Mount Performance for NSS Volumes Using the UNIX Name Space . . . . . . . . 351
29.11 Software RAID 1 Fails to Recognize a Replacement Device . . . . . . . . . . . . . . . . . . . 351
29.12 Tuning NSS Volumes for GroupWise Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
29.13 Unknown Users as File Owners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
29.14 Using Scripts to Create or Manage NSS Pools and Volumes . . . . . . . . . . . . . . . . . . . 352
29.15 NFS Volume Mount Failure Blocks the Mounting of Other File Systems . . . . . . . . . . 352
29.16 In the Process of Cluster Resource Going Offline, NSS Pool Deactivation Hangs . . . 353
29.17 Using Linux iManager Not Able to Manage an OES Linux/NetWare Server . . . . . . . . 353
29.18 Selecting the Device/Partition Option Which is Corrupted . . . . . . . . . . . . . . . . . . . . . 353
29.19 Salvaging an Encrypted Volume Fails After Renaming . . . . . . . . . . . . . . . . . . . . . . . . 353
29.20 Pool Activation Fails After Aborting ravsui Verify or Rebuild Process . . . . . . . . . . . . . 354
29.21 Trustee Entries are Stored in Different Formats in NetWare and Linux Platforms . . . 354
29.22 Debugging of nlvm Results in the Display of Warning Messages in the nlvm_debug.log File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
29.23 NLVM Pool Move Fails and Deactivates the Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
29.24 Changing NameSpace of a Volume Does Not Work if the Volume is in Use on the Server When NameSpace is Changed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
29.25 Metamig Error When Copying Hidden or System File From Source Volume to Target Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
29.26 iManager Taking Too Long to Load User Quotas or Error in Loading User Quotas . . . 355
29.27 Error 23361 “No Space for Move Stamps” During a Pool Move . . . . . . . . . . . . . . . . . 355
29.28 Role-based Users Not Able to Access Rights to Files and Folders under Modify Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
30 Security Considerations                                                                  357
30.1 Security Features of NSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
30.2 Preventing Unauthorized Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
30.3 Securing Sensitive Information During Remote Management Sessions . . . . . . . . . . . 359
30.4 Protecting Data During Backup and on Backup Media . . . . . . . . . . . . . . . . . . . . . . . . 359
30.5 Preventing Exposure of Sensitive Data in a Core Dump . . . . . . . . . . . . . . . . . . . . . . . 360
30.6 Preventing Exposure of the Encryption Password in a Log . . . . . . . . . . . . . . . . . . . . . 360
30.6.1 NSS Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
30.6.2 NSS Logging and Security Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
30.6.3 Logging Communications between NSS and the _ADMIN Volume . . . . . . . . 361
30.6.4 Logging Communications between NSS and eDirectory, NICI, or Linux User Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
30.7 Using Data Shredding to Prevent Unauthorized Access to Purged Files . . . . . . . . . . 364
30.8 Acquiring Security Equivalence Vectors for NSS Users . . . . . . . . . . . . . . . . . . . . . . . . 364
30.9 Protecting Modules Responsible for Security Equivalence Vectors . . . . . . . . . . . . . . . 365
30.10 Controlling File System Access and Attributes for NSS Volumes . . . . . . . . . . . . . . . . 365
30.11 Displaying Directory and File Attributes for NSS Volumes . . . . . . . . . . . . . . . . . . . . . . 365
30.12 Security Best Practices for zAPIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
30.13 Controlling Physical Access to Servers and Resources . . . . . . . . . . . . . . . . . . . . . . . 366
30.14 Securing Access to the Servers With a Firewall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
30.15 Creating Strong Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
A NSS Commands                                                                              369
A.1 Using NSS Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
A.1.1 Issuing NSS Commands at Command Consoles . . . . . . . . . . . . . . . . . . . . . . 370
A.1.2 Making NSS Commands Persist Through a Reboot . . . . . . . . . . . . . . . . . . . 370
A.1.3 Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
A.1.4 Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
A.2 Help and Find Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
A.3 Access Time Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
A.4 Background File System Checker Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
A.5 Cache Management Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
A.5.1 Cache Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
A.5.2 ID Cache Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
A.5.3 Cache Monitoring Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
A.5.4 UnplugAlways Command for the Read Queue . . . . . . . . . . . . . . . . . . . . . . . 373
A.6 Compression Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
A.6.1 Server-Level Compression Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
A.6.2 Volume-Level Compression Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
A.7 Data Shredding Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
A.8 Daylight Savings Time Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
A.9 eDirectory Storage Object ID Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
A.10 Extended Attributes (XAttr) Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
A.10.1 CtimeIsMetadataModTime Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
A.10.2 ListXattrNWmetadata Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
A.10.3 Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
A.11 Event File List (EFL) Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
A.12 Hard Links Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
A.12.1 Hard Links Media Upgrade Commands for Existing Volumes . . . . . . . . . . . . 381
A.12.2 Hard Links Media Upgrade Commands for New Volumes . . . . . . . . . . . . . . 382
A.12.3 Hard Links Attribute Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
A.12.4 Hard Links Management Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
A.13 I/O Write Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
A.14 LAF Audit Log Messages Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
A.15 Load Commands for the nssstart.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
A.16 Low Storage Alert Messages Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
A.17 Migration Support Commands for Near-Line Storage . . . . . . . . . . . . . . . . . . . . . . . . . 386
A.18 Modified File List (MFL) Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
A.19 Multipath I/O Failover Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
A.20 Multiple Server Activation Prevention (MSAP) Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
A.21 noatime and atime Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
A.21.1 Using noatime or atime at the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
A.21.2 Using noatime in a Cluster Load Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
A.21.3 Viewing the atime or noatime Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
A.22 noatime and nodiratime Support for Linux open, mount, nfsmount, and /etc/fstab . . . . . . . . . . . . . 389
A.22.1 Linux open(2) Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
A.22.2 Linux mount Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
A.22.3 Linux nfsmount Command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
A.22.4 Linux /etc/fstab File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
A.23 Opportunistic Locking Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
A.24 Pool Freeze and Thaw Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
A.25 Pool Management Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
A.25.1 Pool Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
A.25.2 PoolAuto Commands for Load Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
A.26 Pool Snapshot Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
A.27 Pool Verify and Rebuild Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
A.28 POSIX Permission Mask Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
A.29 Quotas Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
A.29.1 Sys: Volume Quota Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
A.29.2 Directory Quotas Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
A.29.3 User Quotas Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
A.30 Read Ahead Blocks and Allocate Ahead Blocks Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
A.30.1 Read Ahead Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
A.30.2 Allocate Ahead Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
A.31 Salvage and Purge Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
A.32 Security Equivalence Vector Update Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
A.33 Sendfile API Support Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
A.34 Status Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
A.35 Visibility Rebuild Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
A.35.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
A.35.2 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
A.35.3 Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
A.36 Volume Management Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
A.36.1 Volumes Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
A.36.2 Volume Activity Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
A.36.3 Encrypted Volume Activity Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
A.36.4 VolumeAuto Commands for Load Time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
A.37 ZID Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
B NSS Utilities                                                                             405
B.1 attrib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
B.1.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
B.1.2 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
B.1.3 Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
B.1.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
B.2 compfix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
B.2.1 Prerequisite for Computing Compression Statistics . . . . . . . . . . . . . . . . . . . 408
B.2.2 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
B.2.3 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
B.2.4 Help Options (HOPTION) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
B.2.5 General Options (GOPTION) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
B.2.6 Volume-Level Options (VOPTION) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
B.2.7 File-Level Options (FOPTION) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
B.2.8 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
B.3 metamig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
B.3.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
B.3.2 Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
B.3.3 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
B.3.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
B.4 nsscon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
B.4.1 Adding /opt/novell/nss/sbin to the PATH Environment Variable . . . . . . . . . . 414
B.4.2 Starting nsscon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
B.4.3 Using nsscon in a Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
B.5 nssmu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
B.6 nssupdate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
B.6.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
B.6.2 Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
B.6.3 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
B.6.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
B.7 ravsui . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
B.7.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
B.7.2 Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
B.7.3 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
B.7.4 Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
B.7.5 Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
B.7.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
B.8 ravview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
B.8.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
B.8.2 Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
B.8.3 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
B.8.4 Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
B.8.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
B.9 refreshids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
B.9.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
B.10 rights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
B.10.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
B.10.2 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
B.10.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
B.10.4 See Also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
B.11 volumes (NCP Console Utility) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
B.11.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
B.11.2 Using volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
B.11.3 Using volume name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
B.12 nssquota . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
B.12.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
B.12.2 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
B.12.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
B.13 nssraid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
B.13.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
B.13.2 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
B.13.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
B.14 ncsinit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
B.14.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
B.14.2 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
B.14.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
B.15 nsschown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
B.15.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
B.15.2 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
C Using Linux Commands to Manage NSS Devices                                                435
C.1 Creating and Mounting NSS Pools and Volumes by Using Linux Commands . . . . . . 436
C.1.1 Using the Linux mkfs Command to Create NSS Pools . . . . . . . . . . . . . . . . . 436
C.1.2 Creating a Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
C.1.3 Creating and Mounting an NSS Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
C.1.4 Creating and Mounting an NSS Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
C.2 Configuring Default Mount Settings for NSS Pools and Volumes . . . . . . . . . . . . . . . . 439
C.2.1 Understanding Entries in the /etc/fstab Configuration File . . . . . . . . . . . . . . 439
C.2.2 Adding NSS Pool and Volume Mount Information to /etc/fstab . . . . . . . . . . . 440
C.3 Expanding NSS Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
C.4 Deleting NSS Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
C.5 Copying Data from a Linux-Managed Pool to an NSS-Managed Pool . . . . . . . . . . . . 441
D Comparison of NSS on NetWare and NSS on Linux 443
E Comparison of NSS on Linux and NCP Volumes on Linux POSIX File Systems 449
F NSS Nomenclature 457
G Documentation Updates 461
G.1 January 2014 (OES 11 SP2) . . . . . . . . . . 461
G.1.1 What’s New . . . . . . . . . . 461
G.2 July 2012 (OES 11 SP1) . . . . . . . . . . 461
G.2.1 What’s New . . . . . . . . . . 462
G.2.2 nssquota . . . . . . . . . . 462
G.2.3 metamig . . . . . . . . . . 462
G.2.4 ravview . . . . . . . . . . 462
G.2.5 Troubleshooting . . . . . . . . . . 462
About This Guide
This documentation describes how to use Novell Storage Services File System (NSS) to manage pools,
volumes, and software RAIDs on a Novell Open Enterprise Server (OES) 11 Support Pack 2 (SP2)
Server.
 Chapter 1, “Overview of NSS,” on page 23
 Chapter 2, “What’s New or Changed in NSS,” on page 29
 Chapter 3, “Installing and Configuring Novell Storage Services,” on page 33
 Chapter 4, “Upgrading the NSS Media Format,” on page 41
 Chapter 5, “Planning NSS Storage Solutions,” on page 47
 Chapter 6, “Using NSS in a Virtualization Environment,” on page 61
 Chapter 7, “Cross-Platform Issues for NSS,” on page 69
 Chapter 8, “Cluster-Enabling Shared NSS Devices and Pools with Novell Cluster Services,” on
page 73
 Chapter 9, “Management Tools for NSS,” on page 77
 Chapter 10, “Managing Devices,” on page 105
 Chapter 11, “Migrating NSS Devices to OES 11 SP2,” on page 119
 Chapter 12, “Managing Partitions,” on page 129
 Chapter 13, “Managing NSS Software RAID Devices,” on page 137
 Chapter 14, “Managing Multipath I/O to Devices,” on page 161
 Chapter 15, “Managing NSS Pools,” on page 163
 Chapter 16, “Verifying and Rebuilding NSS Pools and Volumes,” on page 183
 Chapter 17, “Managing NSS Pool Snapshots,” on page 193
 Chapter 18, “Managing NSS Volumes,” on page 209
 Chapter 19, “Managing Encrypted NSS Volumes,” on page 239
 Chapter 20, “Securing Access to NSS Volumes, Directories, and Files,” on page 247
 Chapter 21, “Managing Compression on NSS Volumes,” on page 263
 Chapter 22, “Managing Space Quotas for Volumes, Directories, and Users,” on page 281
 Chapter 23, “Salvaging and Purging Deleted Volumes, Directories, and Files,” on page 295
 Chapter 24, “Managing Hard Links,” on page 307
 Chapter 25, “Managing Files and Folders on NSS Volumes,” on page 317
 Chapter 26, “Managing Backup and Restore for Data and Trustee Information,” on page 329
 Chapter 27, “Tuning NSS Performance,” on page 333
 Chapter 28, “Monitoring the Status of the NSS File System and Services,” on page 341
 Chapter 29, “Troubleshooting the NSS File System,” on page 347
 Chapter 30, “Security Considerations,” on page 357
 Appendix A, “NSS Commands,” on page 369
 Appendix B, “NSS Utilities,” on page 405
 Appendix C, “Using Linux Commands to Manage NSS Devices,” on page 435
 Appendix D, “Comparison of NSS on NetWare and NSS on Linux,” on page 443
 Appendix E, “Comparison of NSS on Linux and NCP Volumes on Linux POSIX File Systems,”
on page 449
 Appendix F, “NSS Nomenclature,” on page 457
 Appendix G, “Documentation Updates,” on page 461
Audience
This guide is intended for network administrators. Chapter 30, “Security Considerations,” on
page 357 is intended for security administrators or anyone who is using NSS storage objects and is
responsible for the security of the system.
Feedback
We want to hear your comments and suggestions about this manual and the other documentation
included with this product. Please use the User Comment feature at the bottom of each page of the
online documentation.
Documentation Updates
The latest version of this NSS File System Administration Guide for Linux is available on the OES
documentation website.
Additional Documentation
For information about planning and implementing storage solutions in Novell Open Enterprise
Server 11, see the following:
 The “Storage and File Systems” section in the OES 11 SP2: Planning and Implementation Guide
describes considerations for choosing a storage solution and system-wide caveats for
implementing the different storage solutions.
 OES 11 SP2: Storage and File Services Overview describes typical requirements for system storage,
and identifies the various storage products and services in Novell Open Enterprise Server 11
that address those requirements.
For more information about services referenced in this guide, see the following:
 The OES 11 SP2: Novell Distributed File Services Administration Guide for Linux describes how to
configure and manage DFS services for NSS volumes.
 The OES 11 SP2: Dynamic Storage Technology Administration Guide describes how to configure
NSS volumes as shadow volumes by using Dynamic Storage Technology.
 Novell Archive and Version Services provides interval-based archiving for user data. See the
OES 11 SP2: Novell Archive and Version Services Administration Guide.
 The OES 11 SP2: File Systems Management Guide describes the Novell trustee model and how to
configure file system trustees, trustee rights, and attributes for NSS volumes.
 The OES 11 SP2: NCP Server for Linux Administration Guide describes how to manage NCP
connections for NSS volumes.
 The OES 11 SP2: Novell Linux User Management Administration Guide describes how to Linux-enable users.
 The NDK: Virtual File Services (http://www.novell.com/developer/ndk/virtual_file_services.html)
describes the software APIs for creating software applications and scripts to manage NSS
volumes and services on Linux and NetWare.
 The NDK: Novell Storage Architecture Component (Media Manager and NWPA) (http://
www.novell.com/developer/ndk/
storage_architecture_components_%28media_manager_and_nwpa%29.html) describes
software APIs for creating storage-related applications.
 Novell Storage Services Error Codes (http://www.novell.com/documentation/nwec/nwec/data/
al3s3ui.html)
 The SLES 11: Storage Administration Guide (http://www.suse.com/documentation/sles11/
stor_admin/?page=/documentation/sles11/stor_admin/data/bookinfo.html) describes storage
services such as the Logical Volume Manager (LVM), UUIDs, Linux multipath I/O for devices;
and Linux software RAIDs 0, 1, 5, 6, and 10.
 Logical Volume Management for Linux documentation (http://sourceware.org/lvm2/).
1 Overview of NSS
Novell Open Enterprise Server (OES) 11 SP2 provides the Novell Storage Services (NSS) file system for
the Linux operating system. This section describes benefits and key features of NSS.
 Section 1.1, “Introduction to NSS,” on page 23
 Section 1.2, “Benefits of NSS,” on page 23
 Section 1.3, “Understanding NSS,” on page 24
 Section 1.4, “NSS Features and Capabilities,” on page 25
 Section 1.5, “Comparison of NSS to Other File Systems,” on page 28
 Section 1.6, “What’s Next,” on page 28
1.1 Introduction to NSS
The NSS file system and services provide visibility, a trustee access control model, multiple
simultaneous name space support, native Unicode, user and directory quotas, rich file attributes,
multiple data stream support, event file lists, and a file salvage subsystem. These capabilities can help
you effectively manage your shared file storage for any size of organization, scaling from small
businesses to even the largest of organizations with hundreds of thousands of employees.
NSS volumes that were created on NetWare are cross-compatible between Linux and NetWare. This
allows you to use a mixed-platform cluster with Novell Cluster Services while converting a cluster
from NetWare to Linux. NSS volumes can fail over between Linux and NetWare, allowing for full
data, trustee, and file system feature preservation when migrating data to Linux.
NSS devices and storage can be managed in the Web-based Novell iManager utility or with server-based management tools. NSS also supports third-party tools on both platforms for advanced data protection and management, virus scanning, and traditional archive and backup solutions.
1.2 Benefits of NSS
Files are at the heart of every company, large or small. Whether your network spans continents or a
few cubicles, your files become the foundation of your business. No one can afford unreliable file
service, especially when the files you manage are continually growing and requiring more and more
storage space.
Businesses today demand more storage space and faster and easier access to data. To meet the
demands, you need a file system that can scale to a growing business, is easily maintained, and is
better protected against corruption. NSS provides a variety of features that can be combined to
provide a robust and reliable solution for your business.
NSS provides the following benefits:
 A journaling file system that lets you create bigger volumes that activate (mount) quicker, store
more data, and resist corruption better than non-journaling file systems.
Overview of NSS
23
 Encrypted volume support to meet the legal standard of making data inaccessible to software
that circumvents normal access control, such as if the media were stolen.
 Access control and visibility management using the Novell trustee model.
 An unlimited number of NSS volumes, with up to 255 mounted concurrently.
 Lower memory requirements: 1 MB of RAM can activate an NSS volume.
 Pools of storage that span multiple devices and support dynamic resizing to grow the pool and
volumes.
 Pool snapshots that capture point-in-time versions of files in the pool.
 Software RAID support, including RAID 0 (striping), RAID 1 (mirroring), RAID 5 (block-level striping with distributed parity), RAID 0+1 (mirroring RAID 0 devices), and RAID 5+1 (mirroring RAID 5 devices).
 Multiple server activation prevention (MSAP) to help protect pools from being concurrently
activated by multiple servers that do not share a cluster relationship.
 Up to 4 billion (4 x 10^9) files in a single directory, so how you organize files is limited only by the
application or file browser, not the file system.
 Faster access to data, regardless of file size or volume size.
 Directory space restrictions.
 User space restrictions.
 Salvage support for deleted volumes and files.
 Data compression.
 Novell Distributed File Services allows you to better manage storage growth by defining virtual
file structures with junctions, moving volumes, and splitting volumes. For information, see the
OES 11 SP2: Novell Distributed File Services Administration Guide for Linux.
 Novell Archive and Version Services allows you to archive interval-based versions of user files
on NSS volumes, and to make them available to users for individual retrieval. For information,
see the OES 11 SP2: Novell Archive and Version Services Administration Guide.
1.3 Understanding NSS
NSS is a 64-bit file system that can manage a virtually unlimited number of file objects. On each
physical storage device, NSS abstracts physical Novell partitions to make them appear as contiguous
free space. With NSS you can create any number of virtual storage resources, called pools, each with a maximum size of 8 TB. A pool can contain any number of volumes. If the pool spans devices by using space from them for the pool, the volumes automatically span the devices. A single volume can contain up to 8 trillion files and grow up to 8 TB in size, depending on the size of the pool and space
consumed by other volumes in the pool.
 Section 1.3.1, “Storage Pools,” on page 24
 Section 1.3.2, “NSS Volumes,” on page 25
1.3.1 Storage Pools
You can use NSS pools and volumes to store applications, files, and databases. You create storage
pools by assigning areas of free space obtained from one or more of a server’s storage devices. You
can create one or more NSS volumes from the space in the storage pool. The following figure shows
how NSS uses free space on multiple devices to create a storage pool.
Figure 1-1 NSS Pool Architecture
(The figure shows free space from multiple devices combined into a single pool that contains volumes Vol 1, Vol 2, and Vol 3; remaining device space is unused.)
1.3.2 NSS Volumes
The logical volumes you create on NSS storage pools are called NSS volumes. You can specify a
maximum storage quota for the volume, or allow the volume to grow dynamically to the size of its
pool. You can add any number of volumes to a storage pool.
Because there is no limit to the number of volumes you can create, it is possible that the combined
administrative size of all the volumes taken together is larger than the physical size of the storage
pool itself. NSS refers to this process as “overbooking.” If you overbook space in the pool, the
individual administrative size of a volume cannot exceed the size of the storage pool.
NSS allocates space from the pools to the volumes only as needed. Typically, user consumption of a
volume’s available space ebbs and flows; it is unlikely that users concurrently consume volumes at
100% of their available capacity. Each volume consumes the space it needs as it needs it. By
overbooking space, NSS provides a flexible and cost effective way to accommodate expanding
storage needs.
For example, suppose you have a 300 GB storage pool. From this storage pool, you create two NSS
volumes of 200 GB. You can define two 200 GB NSS volumes out of a storage pool of only 300 GB, if
you feel comfortable that the NSS volumes will not both exceed 75 percent capacity (150 GB each) and, therefore, exceed the overall size of the storage pool. If one NSS volume does reach 150 GB, but the
other volume stays under 100 GB, your overbooking plan worked.
Suppose you expect one of the volumes might exceed its share of the pool. You can overbook the pool
by creating one NSS volume with a quota of 200 GB and a second NSS volume that can grow to the
size of the pool. As the combined size nears the size of the pool, you can extend the size of the pool by
adding another segment to it, allowing more space for the larger, expanding volume. Your
overbooking plan works because you built in the opportunity to expand the pool and volume,
according to your business needs.
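The arithmetic behind overbooking can be sketched as a simple capacity check. The pool size, volume quotas, and usage figures below mirror the hypothetical example above; NSS performs this accounting internally, and this sketch only illustrates the concept.

```python
# Illustration of pool overbooking: combined volume quotas may exceed the
# pool size because NSS allocates physical space only as data is written.

POOL_SIZE_GB = 300
volume_quotas_gb = {"VOL1": 200, "VOL2": 200}  # hypothetical volumes

committed = sum(volume_quotas_gb.values())     # administrative size: 400 GB
overbooked = committed > POOL_SIZE_GB          # True: the pool is overbooked

# Actual usage at a point in time; the overbooking plan works as long as
# total physical consumption stays at or below the pool size.
usage_gb = {"VOL1": 150, "VOL2": 100}
physically_safe = sum(usage_gb.values()) <= POOL_SIZE_GB

print(overbooked, physically_safe)  # True True
```

If the check in the last line ever fails, the remedy described above applies: extend the pool by adding another segment before the volumes run out of physical space.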
1.4 NSS Features and Capabilities
NSS helps improve the scalability, flexibility, and availability of your storage devices. This section
identifies specific NSS features that help you do the following:
 Section 1.4.1, “Use Less Memory and Gain More Speed,” on page 26
 Section 1.4.2, “Improve Storage Availability,” on page 26
 Section 1.4.3, “Prevent Unauthorized Access,” on page 26
 Section 1.4.4, “Protect Data from Corruption or Loss,” on page 27
 Section 1.4.5, “Maximize Available Space,” on page 27
1.4.1 Use Less Memory and Gain More Speed
NSS requires only about 1 MB of server memory to activate a volume, independent of the number of
files it contains. With NSS, you can activate up to 256 NSS volumes concurrently per server, up to the
available server memory.
When you activate an NSS volume, it takes only seconds to mount instead of minutes.
NSS uses a journaling file system and does not need to scan the entire file system to create a directory
entry table (DET) to load the volume. NSS loads a file’s metadata into the memory only when you
access the file.
NSS reads the file system journal only if a server goes down abnormally. Instead of slowly searching
the volume for errors, NSS reads the journal to identify any incomplete transactions. It either
completes the transaction or backs it out. This results in less server down time and is beneficial for
applications such as mail services.
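The recovery behavior described above can be sketched conceptually. This is not NSS code; it only illustrates, under simplified assumptions, why replaying a journal is faster than scanning a whole volume: only the logged transactions are examined, and each is either completed or backed out.

```python
# Conceptual sketch of journal replay after an abnormal shutdown.
# A journaling file system logs metadata transactions; on mount it
# replays only the journal instead of scanning the entire volume.

def replay_journal(journal):
    """Return the recovery action taken for each logged transaction."""
    actions = []
    for txn in journal:
        if txn["committed"]:
            actions.append(("complete", txn["id"]))  # finish the write
        else:
            actions.append(("back_out", txn["id"]))  # undo partial work
    return actions

journal = [
    {"id": 1, "committed": True},
    {"id": 2, "committed": False},  # interrupted by the crash
]
print(replay_journal(journal))  # [('complete', 1), ('back_out', 2)]
```

Because the work is proportional to the journal length rather than the volume size, recovery time stays short even on very large volumes.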
1.4.2 Improve Storage Availability
NSS provides the following features to improve I/O performance and provide fault-tolerant access to
your data:
 Software RAID support for RAIDs 0, 1, 5, 0+1, and 5+1
Uses software RAID devices to improve performance and availability. For information, see
Chapter 13, “Managing NSS Software RAID Devices,” on page 137.
 Shared-disk storage
Makes devices shareable for use in a cluster. For information, see Section 10.6, “Sharing
Devices,” on page 113.
 Multiple name space support
NSS provides full support for filenames in the Long, UNIX, DOS, and Macintosh name spaces.
Long name space is the default. For information, see “Lookup Namespace” on page 213.
 Rich file metadata support
NSS provides full support for all file attributes and multiple simultaneous data streams for DOS,
Windows, UNIX, and Macintosh. For information, see Section 20.1, “Configuring File System
Trustees, Trustee Rights, Inherited Rights Filters, and Attributes,” on page 247.
1.4.3 Prevent Unauthorized Access
NSS includes the following features to help prevent access to data that circumvents normal access
control:
 Encrypted Volume Support
Encrypts data volumes, meeting U.S. Government security standards. For information, see
“Managing Encrypted NSS Volumes” on page 239.
 Data shredding (up to 7 times) for deleted files
Erases files completely, meeting U.S. Government security standards. For information, see
Section 20.3, “Using Data Shredding to Prevent Access to Purged Files,” on page 257.
 Multiple Server Access Prevention for pools
Ensures data integrity by preventing unauthorized access to shared media in a storage area
network. For information, see Section 15.13, “Preventing Pools from Activating on Multiple
Servers,” on page 175.
 Trustee model for access control on NSS volumes
NSS uses the Novell Trustee model to greatly simplify access control management in the file
system. It restricts visibility of data structures so that users see only the subdirectories they have rights to see, rather than the whole directory tree as in most other file systems.
For information about the Novell Trustee model and NSS file system rights, see the OES 11 SP2:
File Systems Management Guide.
Some additional steps are necessary to configure access control for NSS. For information, see
Section 5.5, “Access Control for NSS,” on page 53.
1.4.4 Protect Data from Corruption or Loss
NSS includes the following features to ensure that the most current copy of your data is recoverable:
 Pool snapshots to provide point-in-time views of data
Backs up files from snapshots of data so that all files, including open ones, are backed up. For
information, see “Managing NSS Pool Snapshots” on page 193.
 Immediate data saves
Writes data to the volume at regular intervals in order to reduce the seek time on the drive. For
information, see Section 27.3, “Configuring or Tuning Group I/O,” on page 334.
 Salvage file subsystem
Recovers files, directories, and volumes that were deleted by applications or by terminal or console commands. For information, see “Salvaging and Purging Deleted Volumes, Directories,
and Files” on page 295.
1.4.5 Maximize Available Space
NSS includes the following features to help you maximize your available space:
 File compression
Compresses inactive files, according to preset parameters, to conserve space in a volume. For
information, see “Managing Compression on NSS Volumes” on page 263.
 Volume space restrictions
Limits the amount of space a volume can consume in its pool. For information, see Section 22.2,
“Managing NSS Volume Quotas,” on page 282.
 Directory space restrictions
Limits the amount of space a subdirectory can consume, regardless of broader volume and user
constraints. For information, see Section 22.3, “Managing Directory Quotas,” on page 284.
 User space restrictions
Limits the amount of space a user’s data can consume, regardless of broader directory or volume
constraints. For information, see Section 22.4, “Managing User Space Quotas,” on page 288.
1.5 Comparison of NSS to Other File Systems
Use the following comparisons of NSS on Linux to NSS on NetWare and to NCP
volumes on Linux POSIX file systems:
 Comparison of NSS on NetWare and NSS on Linux (page 443)
 Comparison of NSS on Linux and NCP Volumes on Linux POSIX File Systems (page 449)
1.6 What’s Next
See Chapter 2, “What’s New or Changed in NSS,” on page 29 to learn about new and modified
features in this release of NSS.
Review the following sections to help you plan your storage solution:
 For information about installing and configuring NSS on your server, see Chapter 3, “Installing
and Configuring Novell Storage Services,” on page 33.
 For guidelines and instructions about upgrading the media format of NSS volumes to use hard
links, see Chapter 4, “Upgrading the NSS Media Format,” on page 41.
 For guidelines and instructions about migrating NSS volumes to OES 11 SP2, see Chapter 11, “Migrating NSS Devices to OES 11 SP2,” on page 119.
 For guidelines about setting up NSS volumes and services on a virtual server, see Chapter 6,
“Using NSS in a Virtualization Environment,” on page 61.
 For management tools overviews and quick references, see Chapter 9, “Management Tools for
NSS,” on page 77.
 For information to help you plan storage services to use the NSS file system and services, see
Chapter 5, “Planning NSS Storage Solutions,” on page 47.
2 What’s New or Changed in NSS
This section describes enhancements and changes in Novell Storage Services since the initial release of Novell Open Enterprise Server (OES) 11.
 Section 2.1, “What’s New or Changed in NSS (OES 11 SP2),” on page 29
 Section 2.2, “What’s New or Changed in NSS (OES 11 SP1),” on page 30
 Section 2.3, “What’s New or Changed in NSS (OES 11),” on page 31
2.1 What’s New or Changed in NSS (OES 11 SP2)
In addition to bug fixes, NSS provides the following enhancements and changes in OES 11 SP2:
 _admin supports up to 64 concurrent requests: The re-entrance issue in _admin for user-space calls has been resolved, and it can now handle up to 64 concurrent requests.
 file.cmd Supports All NSS Namespaces and Nametypes: The file.cmd interface now works with all NSS-supported nametypes (regular, deleted file) and namespaces (DOS, UNIX,
LONG, Macintosh).
 NSS Change Owner Utility (nsschown): The nsschown utility has been introduced in OES 11
SP2 to identify the files and directories that have obsolete owners, and to list or change the files
owned by existing or obsolete owners for directories or files in an NSS volume. For more
information, see “nsschown” in the OES 11 SP2: NSS File System Administration Guide for Linux.
 Rights to Files and Folders: Using the Users or Groups plug-in, you can add or modify rights
to files and folders for a user or group on a specified volume from a Rights to Files and Folders tab.
You can do this even while creating users or groups. You can also verify the explicit rights of a
user or group on a given volume. For more information, see “Managing Rights” in the OES 11
SP2: NSS File System Administration Guide for Linux.
 View Effective Rights: In iManager, using the Files and Folders plug-in, you can view the
effective rights that a trustee has on a folder or file. For more information, see “Managing
Rights” in the OES 11 SP2: NSS File System Administration Guide for Linux.
 Linux Multipath I/O Changes in SLES 11 SP3: The Linux Device Mapper - Multipath software
deprecated some attributes and modified some default settings for attributes in the /etc/
multipath.conf file in SUSE Linux Enterprise Server 11 SP3. After upgrading to SLES 11 SP3,
the multipathd daemon might not work with your existing multipath configuration file. Ensure
that you verify your /etc/multipath.conf file before you start the multipathd daemon after
an upgrade to OES 11 SP2. For information about the latest attributes and default settings, see
the multipath.conf(5) man page after you install or upgrade the server. For more information,
see “Linux Multipath I/O Changes in SLES 11 SP3” in the OES 11 SP2: Novell Cluster Services for
Linux Administration Guide.
 New metamig Export Option: A new export option (-p) has been added to metamig. This
option exports the metadata of all the files under a given path. For more information, see the metamig man page and “metamig” in the OES 11 SP2: NSS File System Administration Guide for Linux.
2.2 What’s New or Changed in NSS (OES 11 SP1)
Novell Storage Services in OES 11 SP1 has been modified to run on 64-bit SUSE Linux Enterprise
Server (SLES) 11 SP2. In addition to bug fixes, NSS provides the following enhancements and
behavior changes in the OES 11 SP1 release:
Support for Managing BtrFS Volumes Using NSSMU and NLVM
 Support was added for the Btrfs file system. The Btrfs file system is now available as an
alternative to other supported file systems (Ext2, Ext3, ReiserFs, and XFS) when you create a
Linux volume. The btrfsprogs package is required. You can use NSSMU or NLVM to manage the
Btrfs volumes as you would manage other Linux data volumes. Btrfs volumes can also be
configured for NCP file access and for clustering with Novell Cluster Services. For more
information, see “Overview of Linux POSIX File Systems” in the OES 11 SP2: Linux POSIX
Volume Administration Guide.
 The NCP option for the NSSMU or nlvm create linux volume command can be used to enable
the Linux POSIX file system on the volume to be accessed with the NetWare Core Protocol
(NCP).
nssquota
 The clear option is introduced to remove user quota, directory quota, and volume quota restrictions.
 The nssquota utility now supports Volume quota. You can use this utility to set, get, or clear the
volume quota on NSS volumes.
For more information, see nssquota utility in the OES 11 SP1: NSS File System Administration Guide
for Linux.
metamig
metamig can restore trustee information gathered by TRUSTEE.NLM. For more information, see
metamig in the OES 11 SP1: NSS File System Administration Guide for Linux.
nssAdminInstall
The --delete option is introduced to delete the existing NSS Admin User from eDirectory.
Widened Partition List Panel
On the NSSMU Partitions page, the Partitions list panel has been widened to accommodate long
partition names. If you widen the NSSMU display screen, it widens the Partitions list panel and
allows you to see the partitions' full names.
2.3 What’s New or Changed in NSS (OES 11)
This section describes enhancements and changes to Novell Storage Services for Novell Open Enterprise Server 11. The following features were added:
Change in the Volume Manager
Novell Linux Volume Manager (NLVM) replaces the Enterprise Volume Management System
(EVMS) volume manager, which is now deprecated. For more information, see the OES 11 NLVM
Reference Guide or see the nlvm (8) man page.
Support for Creating >2 TB Partitions
When you initialize a device, you can choose to use the DOS partition table scheme or the GUID
Partition Table (GPT) scheme for a given device.
The DOS partition table scheme supports devices up to 2 TB in size. It allows up to four partitions on a device.
The GPT partition table scheme supports device sizes up to 2^64 sectors (that is, up to 8388608 petabytes (PB), based on the 512-byte sector size). It allows up to 128 partitions per disk. Each of its disk partitions is a logical device that is identified by a unique 128-bit (16-byte) GUID.
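The GPT capacity figure can be checked with straightforward arithmetic: 2^64 sectors of 512 bytes each is 2^73 bytes, and one binary petabyte is 2^50 bytes.

```python
# Verify the GPT maximum device size quoted above.
SECTOR_SIZE = 512         # bytes per sector, as assumed in the text
MAX_SECTORS = 2 ** 64     # GPT uses 64-bit logical block addresses

max_bytes = MAX_SECTORS * SECTOR_SIZE
petabytes = max_bytes // 2 ** 50  # binary petabytes (2^50 bytes each)

print(petabytes)  # 8388608, matching the figure in the text
```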
Support for Creating Linux Volumes
Using NSSMU or the NLVM command line, you can create Linux volumes. For more information, see Table 9-11, “Create Linux Volume,” in the OES 11 NLVM Reference Guide, or the nssmu(8) or nlvm(8) man page.
Single Disk System Support
You can now also use the system device that contains the root partition to create and manage NSS pools and volumes.
quota Utility is Renamed to nssquota
For more information, see nssquota.
Moving a Pool
Using NSSMU or the NLVM command line, you can move a pool from one location to another on the same system. For more information, see Table 9-11, “Moving a Pool,” and the Move command in the OES 11 NLVM Reference Guide.
nssraid Utility
The nssraid utility options are soft-linked to the Novell Linux Volume Manager nlvm raid options. You can alternatively use nlvm raid commands to manage NSS software RAIDs at the command line and in scripts in OES 11. The nlvm raid commands have more options than the nssraid utility.
For information, see RAID in the OES 11 NLVM Reference Guide or see the nlvm man page.
3 Installing and Configuring Novell Storage Services
This section describes how to install and configure Novell Storage Services on Novell Open
Enterprise Server 11 SP2.
 Section 3.1, “Requirements for Installing NSS,” on page 33
 Section 3.2, “Installing and Configuring NSS,” on page 34
 Section 3.3, “Upgrading the Media Format for Hard Link Support,” on page 37
 Section 3.4, “Enabling Users for Linux Utilities and Services,” on page 37
 Section 3.5, “Updating NSS on OES 11 SP2,” on page 38
 Section 3.6, “Upgrading from OES 2 to OES 11 SP2,” on page 39
3.1 Requirements for Installing NSS
Make sure that your system and storage solution meet the requirements in this section.
 Section 3.1.1, “Device Requirements,” on page 33
 Section 3.1.2, “Requirements for NSS,” on page 34
 Section 3.1.3, “Requirements for Storage-Related iManager Plug-Ins,” on page 34
3.1.1 Device Requirements
The following requirements apply to devices for NSS on the latest release of OES 11 SP2:
 NSS can utilize device sizes up to 2^64 sectors (that is, up to 8388608 petabytes (PB), based on the 512-byte sector size), with a maximum pool size of 8 TB. For information, see Section 10.1.1,
“Device Size,” on page 105.
 At least 12 MB of free space is needed on the storage media for each NSS pool you plan to create.
 At least 12 MB of free space is needed on the storage media for each software RAID segment you
plan to create.
 For information about devices to use in a virtual environment, see Chapter 6, “Using NSS in a
Virtualization Environment,” on page 61.
3.1.2 Requirements for NSS
General Requirements
The following general requirements apply to NSS:
 A physical server or virtual server running OES.
 NSS is not installed by default. You can select it during the YaST install, or install it at any time from YaST > Open Enterprise Server > OES Install and Configuration.
For information about install options, see Section 3.2, “Installing and Configuring NSS,” on
page 34.
 The NSS file system is used only for data volumes on OES. The Linux operating system requires
a Linux POSIX file system for its system volume, such as Ext3.
 After installing OES, install only approved updates. Refer to the OES 11 SP2: Installation Guide to
install the approved updates.
 A NetIQ eDirectory Read/Write replica must be available in the same tree as the server when
you create an NSS pool or volume so that the Storage objects can be created in eDirectory;
otherwise, NCP cannot map to the pool or volume.
3.1.3 Requirements for Storage-Related iManager Plug-Ins
For information about installing and using the storage-related iManager plug-ins, see Section 9.1,
“Novell iManager and Storage-Related Plug-Ins,” on page 77.
3.2 Installing and Configuring NSS
This section describes only those steps in the install that are directly related to installing Novell
Storage Services and its dependencies. For information about installing OES 11 SP2 services, see the
OES 11 SP2: Installation Guide.
 Section 3.2.1, “Selecting the NSS Pattern During the OES 11 SP2 Install,” on page 34
 Section 3.2.2, “Installing NSS on an Existing OES 11 SP2 Server,” on page 36
 Section 3.2.3, “Enabling or Disabling NSS,” on page 37
3.2.1 Selecting the NSS Pattern During the OES 11 SP2 Install
1 In the YaST install, on the Installation Settings page, click Software to go to the Software Selections
and System Tasks page.
For information about the entire OES 11 SP2 install process, see the OES 11 SP2: Installation
Guide.
2 From the OES Services options, select Novell Storage Services. Selecting NSS as part of a 64-bit
installation automatically installs NSS 64-bit support.
The following additional OES 11 SP2 services are automatically selected:
 Novell Backup / Storage Management Services
SMS makes it possible to back up trustee and other extended attributes for data on NSS
volumes. It is also used by Novell Distributed File Services for moving or splitting NSS
volumes.
 NetIQ eDirectory
eDirectory supports authentication of users who connect to NSS volumes.
 Novell Linux User Management
LUM allows eDirectory users to be enabled for Linux services, such as access via Samba,
FTP, and so on. The administrator user for the server is automatically Linux-enabled with
LUM. Users must be Linux-enabled with LUM in order to access data on NSS volumes with
Linux services or utilities such as SSH, or with Linux protocols such as Samba. The Linux
services must also be LUM-enabled. LUM is not required for NCP, Novell AFP, and Novell
CIFS access.
IMPORTANT: LUM is required even if the administrator user is the only LUM user on the
server.
 NCP Server / Dynamic Storage Technology
NCP Server provides support to NSS for access control, shadow volumes, commands, and
file access. It is required even if you are not using NCP clients to connect to the NSS volume.
 Novell Remote Manager (NRM)
Novell Remote Manager (NRM) is a browser-based management utility for monitoring
server health, changing the configuration of your server, or performing diagnostic and
debugging tasks.
NRM provides the NCP Server management plug-in that allows you to create shadow
volumes using NSS volumes. You can also use it to manage NCP connections to the NSS
volumes.
3 Optionally select Novell iManager to be installed on the server.
You must install iManager somewhere in the same tree as the server. If you install iManager and
NSS on the same server, the storage-related plug-ins are automatically installed.
If you install iManager on a different server, make sure you install the storage-related plug-ins
that you need to manage the NSS file system and services. For information about installing
storage-related plug-ins on an existing server, see Section 9.1, “Novell iManager and Storage-Related
Plug-Ins,” on page 77.
4 Optionally select non-NCP file access services to be installed on the server.
NSS requires NCP Server to be installed and running on the server even if you select one or
more of these alternate methods for user access.
 Novell AFP: Allows Macintosh users to connect to NSS volumes with the AFP (Apple
Filing Protocol). For information about configuring and managing AFP, see the OES 11 SP2:
Novell AFP for Linux Administration Guide.
 Novell CIFS: Allows CIFS/Samba users to connect to NSS volumes with the CIFS/Samba
protocol. For information about configuring and managing Novell CIFS, see the OES 11
SP2: Novell CIFS for Linux Administration Guide.
 Novell Samba: Allows CIFS/Samba users to connect to NSS volumes with the CIFS/Samba
protocol. This service is based on Linux Samba and requires users to be Linux-enabled with
Linux User Management. For information about configuring Samba during the install and
configuring users for CIFS/Samba access after the install, see the OES 11 SP2: Novell Samba
Administration Guide.
IMPORTANT: Novell Samba and Novell CIFS are different file access services that allow
CIFS/Samba users to connect to NSS volumes. You can select only one of the two on a given
server because of port contention issues.
5 Optionally select Novell Cluster Services to be installed on the server.
Install NCS if you plan to share NSS pools in a cluster. For information about installing NCS and
configuring shared devices and pools using NCS, see the OES 11 SP2: Novell Cluster Services for
Linux Administration Guide.
6 Click Accept to return to the Installation Settings page.
Licensing dialog boxes might open, prompting you to accept the installation of proprietary
modules.
7 Continue with the OES 11 SP2 installation.
8 After the install, use the Software Updater (or other update methods) to install any NSS patches
from the OES patch channel.
3.2.2 Installing NSS on an Existing OES 11 SP2 Server
If you did not install Novell Storage Services during the OES 11 SP2 installation, you can install it
later by using YaST > Open Enterprise Server > OES Install and Configuration.
1 Log in to the server as the root user.
2 In YaST, select Open Enterprise Server > OES Install and Configuration.
3 In the Selection window under OES Services, click Novell Storage Services and any other OES
components that you want to install.
Follow the instruction for selecting NSS and its dependencies described in Section 3.2.1,
“Selecting the NSS Pattern During the OES 11 SP2 Install,” on page 34.
4 Click Accept to begin the install, then click Continue to accept changed packages.
5 Follow the on-screen instructions to complete the install.
6 After the install, enter rcnovell-smdrd restart at the command prompt, or reboot the server
before performing any backups, restores, or server consolidations on the NSS file system.
7 Use the Software Updater (or other update methods) to install patches from the OES patch
channel and the SUSE Linux Enterprise Server 11 SP3 patch channel.
3.2.3 Enabling or Disabling NSS
When you install NSS during the initial install, NSS and its dependencies are automatically enabled
in the Linux System Services (Runlevel) with Runlevels 2, 3, and 5. NSS is not automatically enabled
if you post-install NSS on the server.
Although you can uninstall OES 11 SP2 service RPMs through YaST, we do not recommend it
because so many modules have interdependencies. Uninstalling services can leave the server in an
undesirable state. If you no longer plan to use a service, we recommend disabling it rather than
uninstalling it.
WARNING: NSS must be enabled to use any components or tools for the NSS file system.
To enable or disable NSS:
1 Log in to the server as the root user, then start YaST.
2 Click System > System Services (Runlevel), then click Expert Mode.
3 Select novell-nss, then click Set/Reset.
4 Select one of the following options from the Set/Reset menu:
 Disable the service
 Enable the service
5 Click Finish to save and apply your changes, then exit the YaST Runlevel tool.
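On SLES 11 (the base distribution for OES 11 SP2), the same enable/disable operation can also be done from the command line with chkconfig. This is a sketch only; the service name novell-nss is taken from the YaST steps above, and the commands must be run as root on the OES server:

```shell
# Enable NSS in its default runlevels (equivalent to "Enable the service")
chkconfig novell-nss on

# Show the current on/off setting for the service
chkconfig novell-nss

# Or disable the service instead (equivalent to "Disable the service")
chkconfig novell-nss off
```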
3.3 Upgrading the Media Format for Hard Link Support
The enhanced hard links support for NSS volumes requires a media format upgrade after you have
installed or upgraded the operating system. For guidelines and media upgrade instructions, see
Chapter 4, “Upgrading the NSS Media Format,” on page 41.
3.4 Enabling Users for Linux Utilities and Services
eDirectory users must also have a Linux identity in order to access NSS volumes via Linux services
and utilities such as Samba, SSH, and FTP. OES 11 SP2 provides the Linux User Management (LUM)
technology that creates the local Linux user identity and stores the UID for the user in eDirectory. The
Administrator user for the server is automatically Linux-enabled with LUM and added to a LUM
administrator group for the server as part of the installation process. Before users can access NSS
volumes with Linux services and utilities, you must enable both the service and the users with
LUM.
For information about how to enable users and Linux services with LUM, see the OES 11 SP2: Novell
Linux User Management Administration Guide. For more information about why LUM is necessary for
Linux services and utilities, see Section 5.5, “Access Control for NSS,” on page 53.
3.5 Updating NSS on OES 11 SP2
You can get NSS patches in the OES 11 update channel or from the Novell Download Web site
(http://download.novell.com).
Consider the following issues when updating NSS on OES 11 SP2:
 Section 3.5.1, “Parameter Settings,” on page 38
 Section 3.5.2, “Reboot Server or Restart jstcpd, adminusd, and volmnd,” on page 38
 Section 3.5.3, “Storage-Related Plug-Ins,” on page 39
3.5.1 Parameter Settings
When you update an OES 11 SP2 server with a Support Pack or apply NSS patches, all NSS-related
parameter settings remain the same as they were before the update or patch. For example,
server-level, pool, and volume settings are not modified.
3.5.2 Reboot Server or Restart jstcpd, adminusd, and volmnd
If you do not reboot the server as part of the update or patch process, some NSS functions and tools
might not work properly until you restart the jstcpd, adminusd, and volmnd daemons.
For example, NSSMU or the Novell Distributed File Services (DFS) volume location database might
hang when you create a volume. DFS is delivered and updated as a part of the NSS package on OES
11 SP2 servers. If the server is a VLDB replica site, the VLDB service might not work properly or
might hang when you create new NSS volumes. For information, see “DFS may not function properly
after upgrading NSS on OES 2 and later” in the OES 11 SP2: Novell Distributed File Services Administration
Guide for Linux.
To ensure that NSS and DFS are functioning properly after updating OES 11 SP2 with a support pack
or applying NSS patches:
1 Log in as the root user, then open a terminal console.
2 At the terminal console prompt, restart the following daemons in the order indicated:
/opt/novell/nss/sbin/jstcpd
/opt/novell/nss/sbin/adminusd
/opt/novell/nss/sbin/volmnd
3 If the server is a VLDB replica site for a Novell Distributed File Services management context,
restart the VLDB by entering:
vldb stop service
vldb start service
3.5.3 Storage-Related Plug-Ins
The File Protocols plug-in for the Native File Access Protocols for NetWare service has been replaced
in OES 2 SP1 by two plug-ins: Novell AFP (afpmgmt.npm) and Novell CIFS (cifsmgmt.npm). These
plug-ins support AFP and CIFS services for NSS volumes on both Linux and NetWare.
The AFP and CIFS plug-ins also require the NSS Management (nssmgmt.npm) and Storage
Management (storagemgmt.npm) plug-ins. Other storage-related plug-ins include Archive
Versioning (arkmgmt.npm), Cluster Services (ncsmgmt.npm), Files and Folders Management
(fileman.npm), Samba Management (sambamgmt.npm), and Distributed File Services (dfsmgmt.npm).
All storage-related plug-ins share code in common with the Storage Management plug-in.
You must uninstall the existing storage-related plug-ins, then install the new plug-ins at the same
time to make sure that the common code works for all plug-ins.
3.6 Upgrading from OES 2 to OES 11 SP2
Consider the following issues when upgrading from OES 2 to OES 11 SP2:
 Section 3.6.1, “Parameter Settings,” on page 39
 Section 3.6.2, “Read Ahead Blocks Setting,” on page 39
3.6.1 Parameter Settings
When you upgrade the server from OES 2 to OES 11 SP2, all NSS-related parameter settings remain
the same as they were before the upgrade. For example, server-level, pool, and volume settings are
not modified. All future pools and volumes are created with the default settings for OES 11 SP2.
3.6.2 Read Ahead Blocks Setting
The Read Ahead Blocks parameter specifies the number of data blocks that NSS reads ahead for any
open file on which read operations are ongoing on the server. Its default setting is 16. After
upgrading to OES 11 SP2, if you experience read performance problems with an NSS volume, check
the volume’s Read Ahead Blocks parameter setting.
You can view or modify the Read Ahead Blocks value by using NSSMU as follows:
1 In NSSMU, select Volumes to view a list of volumes.
2 Select the volume, then press Enter to view its Volume Properties list.
3 Press the arrow keys to go to the Read Ahead Count in Blocks parameter, then press Enter to access
the setting.
4 Type the new count value, then press Enter.
Valid values are 0 to 1024 blocks, where a block is 4 KB. However, a count of 128 is the practical
maximum value. Read-ahead block counts higher than 128 can starve other system components
for memory or buffers, which can impair performance or cause the system to hang. As the
number of concurrent connections to unique files increases, you should reduce the number of
read-ahead blocks.
5 Press the arrow keys to go to Apply, then press Enter to save and apply the new setting.
You can also modify the value by using the Read Ahead Blocks switches in the NSS Console. For
information, see Section A.30, “Read Ahead Blocks and Allocate Ahead Blocks Commands,” on
page 394.
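As a sketch of the console alternative mentioned above (the exact switch syntax is documented in Section A.30), the setting takes a volume name and a block count; the volume name DATA and the count 64 below are hypothetical examples:

```shell
# Inside the NSS Console (nsscon), set read-ahead for volume DATA to
# 64 blocks (4 KB each). Volume name and count are examples only;
# verify the switch against Section A.30 before using it.
nss /ReadAheadBlks=DATA:64
```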
For more information on updating, see “For Servers with EVMS and LVM on the System Device” in
the OES 11 SP2: Installation Guide.
4 Upgrading the NSS Media Format
An enhanced Novell Storage Services (NSS) media format is available that provides improved
support for hard links. After you install or upgrade your operating system to Novell Open Enterprise
Server 11 SP2, you can decide whether to upgrade the media format for your NSS volumes to use the
new metadata structure; some restrictions apply.
 Section 4.1, “Guidelines for Upgrading the Media Format of NSS Volumes,” on page 41
 Section 4.2, “Upgrading the Media Format for Existing NSS Volumes,” on page 43
 Section 4.3, “Upgrading the Media Format Automatically for New NSS Volumes,” on page 45
4.1 Guidelines for Upgrading the Media Format of NSS Volumes
Before upgrading the media format of your NSS volumes, make sure you understand the following
guidelines:
 Section 4.1.1, “Cross-Platform Support for the NSS Media Upgrade,” on page 41
 Section 4.1.2, “Which NSS Volumes to Upgrade,” on page 42
 Section 4.1.3, “Before Upgrading the Media Format,” on page 42
 Section 4.1.4, “After Upgrading the Media Format,” on page 43
 Section 4.1.5, “If You Do Not Upgrade the Media Format,” on page 43
4.1.1 Cross-Platform Support for the NSS Media Upgrade
The NSS media upgrade for enhanced hard links support is available for the following operating
platforms (and later versions):
 Novell Open Enterprise Server 11
 Novell Open Enterprise Server 2 Linux and NetWare
 Novell Open Enterprise Server 1 SP2 NetWare
 NetWare 6.5 SP4
If the NSS volume is used in a cluster with Novell Cluster Services, all nodes in the cluster must be
upgraded to a supported platform before you upgrade the media format for any shared volumes.
After you upgrade the media format on an NSS volume, it cannot be mounted on an unsupported
platform.
4.1.2 Which NSS Volumes to Upgrade
With a few exceptions as noted below, it is highly recommended that you upgrade the NSS volume to
the new metadata structure after you upgrade the operating system to a supported platform.
Do not upgrade the media format of the NSS volume to the new metadata structure if any of the
following conditions exist:
 You have not yet verified that your system is performing as expected after upgrading the
operating system, and you might need to roll back to an earlier release.
 You plan to migrate one or more devices containing the NSS volume to an unsupported
platform.
 You need to share this volume with a mixed cluster with Novell Cluster Services where there are
some unsupported platforms in the mix. The cluster software prevents the media upgrade
unless all operating systems in the cluster support the new media format.
4.1.3 Before Upgrading the Media Format
 “Opportunity to Roll Back Before the Media Upgrade” on page 42
 “Hard Link Behavior without a Media Upgrade” on page 42
 “Clusters and the Media Upgrade” on page 42
Opportunity to Roll Back Before the Media Upgrade
When you upgrade the operating system, NSS does not automatically upgrade the media format to
use the new metadata structure. This gives you the opportunity to roll back to the previous
release if necessary. Before you upgrade the media format of NSS volumes to the new metadata structure,
make sure the server is performing as expected.
WARNING: After the media format is upgraded to the new metadata structure, you cannot roll back
to a previous release.
Hard Link Behavior without a Media Upgrade
Until you upgrade the media format for enhanced hard link support, any existing hard links on the
NSS volume are visible, and they can be opened, closed, read, and written. However, you cannot
create new hard links, and you cannot rename or delete existing hard links. Attempts to do so are
rejected with an error.
Clusters and the Media Upgrade
If you attempt to upgrade the media format for a shared NSS volume on a cluster with Novell Cluster
Services, the upgrade is refused until all servers in the cluster are running a supported
operating system. Make sure to upgrade all cluster nodes to a supported platform before attempting
to upgrade the shared volume.
4.1.4 After Upgrading the Media Format
After you upgrade the volume to use the media format with enhanced hard links, the following
constraints apply for its use:
 The Hard Links attribute must be enabled for the upgraded NSS volume before you can create
hard links.
When you upgrade the NSS volume to use the new media format, if any old-style hard links are
detected, the Hard Links attribute is automatically enabled. Otherwise, the volume is upgraded,
but the attribute is disabled and must be enabled before you can create hard links. For
information, see Section 24.3, “Enabling or Disabling the Hard Links Attribute,” on page 311.
 The upgraded NSS volume cannot be rolled back to use the old media format.
 You cannot roll back the operating system to a previous version.
 You cannot migrate a device containing the upgraded volume to a system with an unsupported
operating system.
 Only nodes that have a supported operating system can be added to a cluster where shared
volumes use the upgraded media format.
4.1.5 If You Do Not Upgrade the Media Format
If you do not upgrade the media format for an NSS volume, the volume’s format uses the same
metadata structure as is used on earlier releases. Any existing hard links on your system’s NSS
volumes are visible, and they can be opened, closed, read, and written. However, until you upgrade
the NSS volume to the new structure, you cannot create new hard links, and you cannot rename or
delete existing hard links. Attempts to do so are rejected with an error.
If the non-upgraded NSS volume is shared in a mixed cluster with Novell Cluster Services, hard links
can be created, renamed, or deleted by first mounting the volume on a node in the cluster with an
operating system that is compatible with the old media format.
It is possible to move devices that contain non-upgraded NSS volumes cross-platform to servers with
operating systems compatible with the old media format. For information about moving media
cross-platform, see Section 11.2, “Moving Non-Clustered Devices From NetWare 6.5 SP8 Servers to
OES 11 SP2,” on page 120.
4.2 Upgrading the Media Format for Existing NSS Volumes
 Section 4.2.1, “Upgrading the Media Format,” on page 43
 Section 4.2.2, “Enabling Hard Links After the Media Upgrade,” on page 44
4.2.1 Upgrading the Media Format
The media format upgrade process runs in the background. The elapsed time for the upgrade process
varies according to the number of storage objects in the volume; the more objects a volume contains,
the longer it takes. Volumes with a large number of objects can take a substantial time to upgrade.
The commands in this section perform the one-time upgrade for currently active NSS volumes. A
message is sent to the logger screen when the process is successfully completed.
Issue the commands at the NSS Console (nsscon) as the root user.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=volumename
Upgrades the specified active volume. If it is a shared volume, the clustering system vetoes the
upgrade until all nodes in the cluster are running a supported operating system.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=all
Upgrades all currently active volumes. If a shared volume is encountered, the clustering system
vetoes the upgrade until all nodes in the cluster are running a supported operating system.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=all /include=shared
Upgrades all currently active volumes that are part of a shared pool, but only if the clustering
system permits it.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=all /include=local
Upgrades all currently active volumes that are not part of a shared pool.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=all /include=local,shared
Upgrades all currently active volumes whether they are shared or local. If a shared volume is
encountered, the clustering system vetoes the upgrade until all nodes in the cluster are running
a supported operating system.
4.2.2 Enabling Hard Links After the Media Upgrade
After the media is upgraded successfully, you must set the Hard Links attribute on volumes where
you want to create hard links. The Hard Links attribute is automatically enabled if there are existing
hard links on the volume. For information about using hard links on NSS volumes, see Chapter 24,
“Managing Hard Links,” on page 307.
IMPORTANT: Do not attempt to enable the Hard Links attribute until the upgrade process is
complete.
1 Issue the following commands at the NSS Console (nsscon) as the root user.
nss /HardLinks=volumename
Enables the Hard Links attribute for a specified volume. This enables hard links to be created
on the volume.
nss /HardLinks=all
Enables the Hard Links attribute for all NSS volumes on the server. This enables hard links to
be created on any volume on the server. Any given hard link can point only to a file on the
same volume.
2 You can verify that the Hard Links attribute is set for the volume by entering the following
command at the NSS Console (nsscon):
volumes
The Hard Links attribute appears in the Attributes column for volumes where it is enabled.
4.3 Upgrading the Media Format Automatically for New NSS Volumes
You can enable all future NSS volumes to be automatically created with the new media format. Make
sure to use the commands on the servers where you plan to use hard links.
IMPORTANT: You must manually enable the Hard Links attribute on a volume before you are able
to create hard links on it. For instructions, see Section 24.3, “Enabling or Disabling the Hard Links
Attribute,” on page 311.
Issue the following commands from the NSS Console (nsscon), or the nssstart.cfg file. If the
command is issued from the command line, it persists until a server reboot. If the command is placed
in the nssstart.cfg file, it persists across server reboots.
NOTE: These commands create new NSS volumes with the upgraded structure only if the new
volume is on a newly created pool (that is, a pool created after the
ZLSSUpgradeNewVolumeMediaFormat flag is enabled).
nss /ZLSSUpgradeNewVolumeMediaFormat=all
Creates all new NSS volumes with the upgraded structure. This applies to volumes on both local
and shared pools.
nss /ZLSSUpgradeNewVolumeMediaFormat=all /include=shared
Creates all new NSS volumes with the upgraded structure if the new volume is on a shared pool.
However, the clustering system vetoes the upgrade until all nodes in the cluster are running a
supported operating system.
nss /ZLSSUpgradeNewVolumeMediaFormat=all /include=local
Creates all new NSS volumes with the upgraded structure if the new volume is not on a shared
pool.
nss /ZLSSUpgradeNewVolumeMediaFormat=all /include=local,shared
Creates all new NSS volumes with the upgraded structure whether the volume is local or
shared. In a cluster, the clustering system vetoes the upgrade until all nodes in the cluster are
running a supported operating system.
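To make one of the switches above persist across reboots, add it on its own line to the nssstart.cfg file, as described earlier. A minimal example follows; the path /etc/opt/novell/nss/nssstart.cfg is assumed to be the default location on OES Linux, so verify it on your server:

```shell
# /etc/opt/novell/nss/nssstart.cfg -- one NSS switch per line.
# This line makes all new volumes use the upgraded media format:
/ZLSSUpgradeNewVolumeMediaFormat=all
```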
5 Planning NSS Storage Solutions
Consider what your storage needs are and how you can effectively manage and divide your storage
space to best meet your needs. Use the information in this section to plan your storage deployment
by using the Novell Storage Services file system.
 Section 5.1, “Guidelines for NSS Storage,” on page 47
 Section 5.2, “Compatibility and Interoperability Issues for NSS,” on page 50
 Section 5.3, “Creating NSS Storage Objects in eDirectory,” on page 50
 Section 5.4, “Naming NSS Storage Objects,” on page 50
 Section 5.5, “Access Control for NSS,” on page 53
 Section 5.6, “File Access for Users,” on page 57
 Section 5.7, “Antivirus Support for NSS,” on page 60
 Section 5.8, “Backup Support for NSS,” on page 60
 Section 5.9, “NSS Support for Memory Mapped Files,” on page 60
5.1 Guidelines for NSS Storage
Use the guidelines in this section when planning your NSS storage solution:
 Section 5.1.1, “Devices,” on page 47
 Section 5.1.2, “Software RAID Devices,” on page 48
 Section 5.1.3, “Device Partitions,” on page 48
 Section 5.1.4, “NSS Pools and Volumes,” on page 48
 Section 5.1.5, “NSS Encrypted Volumes,” on page 49
 Section 5.1.6, “Storage Features,” on page 49
5.1.1 Devices
For OES 11 SP2, NSS recognizes device sizes up to 2^64 sectors (that is, up to 8388608 petabytes (PB)
based on the 512-byte sector size). For more information, see Section 10.1.1, “Device Size,” on
page 105.
Storage devices can be local to the server, such as a system hard drive, or external to the server, such
as with direct-attached storage or in a Fibre Channel or iSCSI storage area network (SAN). For
information about common device types, see Section 10.1.2, “Device Types,” on page 106.
A local hard drive typically contains the operating system software and can optionally be used for
applications and user data.
If your system does not have sufficient power loss protection, you must use write-through cache
management for SCSI devices to minimize the risk of losing data if there is a power failure.
Write-through cache management assures the file system that writes are being committed to disk as
required. For information, see Section 10.9, “Enabling Write-Through Cache Management on SCSI
Devices and RAID Controllers,” on page 117.
Understanding how much free space you will need from each device helps you during the disk
carving phase of the NSS configuration. For information about space availability, see Section 10.2,
“Viewing a List of Devices on a Server,” on page 109.
5.1.2 Software RAID Devices
NSS supports software RAIDs 0, 1, 5, 0+1, and 5+1. You can create RAIDs 0, 1, and 5 in iManager or
in NSSMU. RAIDs 0+1 and 5+1 can be created using NSSMU only.
If you use hardware RAID devices, software RAID devices are unnecessary. You can use both
hardware and software RAID devices on the same server.
To maximize the performance benefits of software RAID devices, partitions used for the RAID
should come from different physical devices. For software RAID 1 devices, the mirrored partitions
cannot share any disks in common.
NSS software RAIDs can be used for local and clustered pools. You can use the same RAID0/5 device
for multiple local pools. You can use multiple RAID0/5 devices to contribute space to a single pool.
Any RAID0/5 device that is used for a clustered pool must contribute space exclusively to that pool; it
cannot be used for other pools. This allows the device to fail over between nodes with the pool cluster
resource. Ensure that its component devices are marked as Shareable for Clustering before you use a
RAID0/5 device to create or expand a clustered pool.
For more information, see Section 13.1, “Understanding Software RAID Devices,” on page 137 and
Section 13.2, “Planning for a Software RAID Device,” on page 139.
5.1.3 Device Partitions
NSS management tools automatically create and delete partitions for you on devices when you
create and delete pools. For information, see Section 12.1, “Understanding Partitions,” on page 129.
5.1.4 NSS Pools and Volumes
NSS is used for data storage. You can create NSS pools and volumes to store data on devices
managed by Novell Linux Volume Manager (NLVM). The operating system and applications are
stored on Linux POSIX volumes.
For prerequisites for creating a pool, see Section 15.1, “Guidelines for Creating a Pool,” on page 164.
When creating a pool, you can assign free space from multiple devices to create the maximum-sized
pool of 8 TB. You can grow a pool dynamically by adding free space from the same device or
different devices.
To mirror pools, each pool must use partitions from different devices; mirrored pools can have no
devices in common.
Pools can contain multiple volumes, but a given volume belongs to only one pool.
Pools can be overbooked. If a pool contains multiple volumes, the cumulative administrative
maximum sizes of all volumes can exceed the pool size by using the overbooking feature, although
the actual total size is bound by physical limitations. Because space is allocated to volumes as needed, a
volume might not reach its quota.
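As an illustration of overbooking with hypothetical numbers, three volumes with quotas of 5, 4, and 3 TB can coexist in an 8 TB pool, because pool space is consumed only as data is actually written:

```shell
# Hypothetical overbooking arithmetic: volume quotas may sum past the
# pool size; only written data consumes real space.
POOL_TB=8
QUOTAS_TB="5 4 3"               # per-volume administrative quotas (example)
sum=0
for q in $QUOTAS_TB; do sum=$((sum + q)); done
echo "pool=${POOL_TB}TB quotas=${sum}TB overbooked_by=$((sum - POOL_TB))TB"
# prints: pool=8TB quotas=12TB overbooked_by=4TB
```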
When creating a volume, assign it a fixed volume quota, or allow the volume to grow dynamically to
the size of the pool. Any given volume’s quota cannot exceed the size of the pool.
All devices that contribute space to a clustered pool must be able to fail over with the pool cluster
resource. You must use the device exclusively for the clustered pool; do not use space on it for other
pools or for Linux volumes. A device must be marked as Shareable for Clustering before you can use
it to create or expand a clustered pool.
For guidelines for using volume attributes, see Section 18.1, “Understanding Volume Properties,” on
page 209.
For more guidelines for creating and managing NSS volumes, see Section 18.2, “Guidelines for NSS
Volumes,” on page 214.
5.1.5 NSS Encrypted Volumes
Encrypted Volume Support is available for data volumes. Create encrypted volumes only after you
verify a successful system install or upgrade. For information, see “Understanding Encrypted
Volume Support” on page 239.
5.1.6 Storage Features
Descriptions of the NSS storage features and guidelines for their use are located in the sections that
discuss how to manage them. Table 5-1 identifies the features and provides links to the
guidelines.
Table 5-1 Guidelines for Using NSS Storage Features
Storage Feature       Refer to

Pool snapshots        Section 17.1, “Understanding Pool Snapshots,” on page 193
                      Section 17.2, “Guidelines for Using and Managing Pool Snapshots,” on page 195
                      Section 7.1, “Cross-Platform Issues for NSS Pool Snapshots,” on page 69

Compression           Section 21.1, “Understanding Compression,” on page 263

Quotas                Section 22.1, “Understanding Space Quotas,” on page 281

Salvage and purge     Section 23.1, “Understanding the NSS Salvage System,” on page 295

Hard links            Section 24.1, “Understanding Hard Links,” on page 307

Security              Chapter 20, “Securing Access to NSS Volumes, Directories, and Files,” on page 247
                      Chapter 30, “Security Considerations,” on page 357

Performance tuning    Chapter 27, “Tuning NSS Performance,” on page 333
Planning NSS Storage Solutions
49
5.2 Compatibility and Interoperability Issues for NSS
Table 5-2 lists references for compatibility and interoperability issues for NSS.
Table 5-2 Compatibility and Interoperability Issues for NSS
Known Issues                       Refer to

Virtualization environments        Chapter 6, “Using NSS in a Virtualization Environment,” on page 61

Cross-platform issues              Chapter 7, “Cross-Platform Issues for NSS,” on page 69

Clustering NSS pools and volumes   Chapter 8, “Cluster-Enabling Shared NSS Devices and Pools with
                                   Novell Cluster Services,” on page 73

5.3 Creating NSS Storage Objects in eDirectory
When you use NSSMU or iManager to create an NSS pool or volume on a server, a Storage object is
automatically created in NetIQ eDirectory. By default, the name of the Storage object is the server’s
name with an underscore and the object’s name appended (for example, myserver_sys). A Storage
object represents a logical or physical object on a server, whether it is a writable disk, a CD, or other
storage medium.
IMPORTANT: An NSS volume must have a Storage object in eDirectory to be able to participate in
Novell Distributed File Services.
For more information about NetIQ eDirectory, see the eDirectory website (https://www.netiq.com/
documentation/edir88/).
5.4 Naming NSS Storage Objects
Storage object names must be unique on a server. If the server is part of a cluster, then all pools and
volumes must have unique names across all servers in the cluster, whether they are in shared
relationships or not.
 Section 5.4.1, “Case Insensitive Names,” on page 50
 Section 5.4.2, “Number of Characters Allowed,” on page 51
 Section 5.4.3, “Conventions for Valid Names of NSS Storage Objects,” on page 51
 Section 5.4.4, “Other Naming Guidelines,” on page 52
5.4.1 Case Insensitive Names
NSS storage object names are case insensitive. Names such as AURORA, Aurora, and aurora are the
same. NSS saves pool and volume names in uppercase. NSS software RAID device names and labels
are case sensitive. For example, if you enter MyRaid as the name, it is saved as MyRaid only.
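The uppercase normalization for pool and volume names can be sketched in the shell (the names and the tr-based comparison are illustrative only, not an NSS utility):

```shell
# NSS compares pool and volume names case-insensitively and saves
# them in uppercase. The same comparison, sketched with tr:
name1="Aurora"
name2="AURORA"
up1=$(printf '%s' "$name1" | tr '[:lower:]' '[:upper:]')
[ "$up1" = "$name2" ] && echo "same NSS name: $up1"   # prints "same NSS name: AURORA"
```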
IMPORTANT: Because Linux treats filenames as case sensitive, when using NSS volumes on Linux,
make sure to mount the volume with the Long name space (ns=long) option so that file queries are
case insensitive. For information, see Section 18.11, “Mounting NSS Volumes with Linux
Commands,” on page 225.
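For example, a volume can be mounted with the Long name space from the command line. This is a sketch based on the mount syntax covered in Section 18.11; the volume name VOL1 and the mount point are placeholders:

```shell
# Mount the NSS volume VOL1 (placeholder name) with the Long name
# space so that file queries are case insensitive; run as root.
mount -t nssvol VOL1 /media/nss/VOL1 -o name=VOL1,ns=long
```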
5.4.2 Number of Characters Allowed
For the NSS file system, the maximum length supported for a filename (the name and file extension)
is 255 16-bit Unicode characters. The maximum length supported for the full path name (which
includes the volume name, directories, filename, extension, and delimiters in the path) is 1023 16-bit
Unicode characters. However, different tools, applications, and file systems place different limits on
filenames and path lengths, some of which can be more or less restrictive than these limits. While it is
possible to create a full path that is longer than 1023 characters, most tools will have difficulty dealing
with it.
Use the guidelines in Table 5-3 to determine the length requirements for names of NSS Storage
objects.
Table 5-3 Storage Object Name and Password Length
NSS Storage Object                                  Minimum Number of Characters   Maximum Number of Characters
                                                    (16-bit Unicode Characters)    (16-bit Unicode Characters)

Device name for a physical or logical device        2                              15

Device name for a software RAID                     2                              15 (NSSMU)
NOTE: RAIDs 0+1 and 5+1 can be created only                                        58 (iManager)
from NSSMU.                                                                        Longer names are truncated.

Partition label                                     2                              128

Pool name                                           2                              15

Volume name                                         2                              15

Encryption password for encrypted NSS volumes.      2 (a minimum of 6 is           16
Use standard ASCII characters.                      recommended)

Pathnames for files, including the server name,     1                              255
volume name, path delineators (such as colons,
slashes, and dots), directory names, filename,
and file extension

5.4.3 Conventions for Valid Names of NSS Storage Objects
Valid device, pool, and volume object names conform to the following naming conventions. We
recommend that you also consider the character conventions for the software RAID names in order to
have consistent naming policies on your system.
 Use only valid characters:
ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_!@#$%&()
NOTE:
 Do not use the special characters !@#$%&() in names of shared pools.
 NSS software RAID device names are case sensitive. For example, if you enter MyRaid as
the name, it is saved as MyRaid only.
IMPORTANT: Special characters (non-alphanumeric) can create confusion or problems for
some configuration files, command line utilities, applications, and scripts. For this reason, you
should avoid using the ampersand (&), at (@), dollar sign ($), exclamation point (!), percent (%),
and number sign (#) characters. For more information, see Section 5.4.4, “Other Naming
Guidelines,” on page 52.
 Do not use spaces in the object name.
 Do not begin or end the object name with an underscore (_).
 Do not use multiple contiguous underscores (__) anywhere in the object name.
IMPORTANT: Novell Cluster Services supports only alphanumeric characters and the underscore
character in cluster resource names. Special characters (!@#$%&()) are not supported. Because the
pool name is automatically used in the pool cluster resource name, do not use special characters in
pool names for pools that you plan to cluster enable.
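These conventions can be checked mechanically. The following POSIX shell function is a sketch (the function name is illustrative, not an NSS utility; the 2–15 character limit is the pool and volume name length from Table 5-3):

```shell
# Sketch: check a candidate pool or volume name against the naming
# conventions in this section (valid characters, 2-15 length per
# Table 5-3, no spaces, and the underscore rules).
valid_nss_name() {
    # NSS saves pool and volume names in uppercase, so normalize first.
    name=$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')
    len=${#name}
    [ "$len" -ge 2 ] && [ "$len" -le 15 ] || return 1
    # Only the documented characters are allowed (this also rejects spaces).
    printf '%s' "$name" | grep -Eq '^[A-Z0-9_!@#$%&()]+$' || return 1
    # No leading/trailing underscore and no contiguous underscores.
    case "$name" in _*|*_|*__*) return 1 ;; esac
    return 0
}

valid_nss_name "DATA_POOL1" && echo "DATA_POOL1 is valid"
valid_nss_name "_BADNAME"   || echo "_BADNAME is invalid"
```

For pools that you plan to cluster-enable, you would additionally reject the special characters, per the IMPORTANT note above.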
5.4.4 Other Naming Guidelines
 In general, we recommend that you avoid using reserved names or words as names of Storage
objects in order to avoid confusion.
For example, the following case-insensitive names are reserved names:
ALL
AUX
CLOCK
COM1
COM2
COM3
COM4
CON
LPT1
LPT2
LPT3
NETQ
NUL
PIPE
PRN
 Some characters on Linux, such as the ampersand (&), dollar sign ($), exclamation point (!), and
number sign (#) characters, can cause problems in some configuration files, command line
utilities, applications, and scripts. You might need to use different techniques in each case to
make the name be accepted in the manner intended. Refer to the documentation for the specific
consumer application or utility to find how to treat names that contain special characters in that
environment.
To avoid this extra effort, we recommend that you avoid using special characters in names of
Storage objects.
 Because the “at sign” (@) character (also called “the at symbol”) is an element of electronic mail
addresses, such as code@engineer.com, it might cause confusion and possible problems in a
Storage object name. A Web browser or other application could mistake it for an e-mail address.
We recommend that you do not use the @ character in Storage object names.
 The percent character (%) might cause problems if it is passed in a format string to an application
routine that uses it to delineate parameters. For example, if a volume name that contains the
percent character, such as store%sales, is passed to an (s)printf routine, the (s)printf
routine might look for parameters that are not there and crash.
We recommend that you do not use the percent character in Storage object names.
 If spaces are used in User or Group object names, you must enclose the object name in double
quotation marks (") in order for it to be recognized in command line utilities, scripts, and
applications.
 If special characters are used in User or Group object names and passwords, you might need to
use different escape techniques in command line utilities (such as Bash on Linux) to make the
name be accepted in the manner intended. Refer to the documentation for the specific command
line utility to find how to escape special characters in that environment.
For example, enclosing the name in double quotation marks and preceding the character with a
backslash are common techniques for escaping special characters when parsing command lines.
To avoid this extra effort, we recommend that you avoid using special characters in names of
User and Group objects and in passwords.
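The quoting and escaping issues above can be demonstrated directly in the shell (the names store%sales, John Smith, and cn=admin$1 are hypothetical examples):

```shell
# Hazard from the percent character: the shell's printf treats its
# first argument as a format string, so a name such as store%sales
# is mangled when %s is parsed as a conversion specifier.
printf 'store%sales\n'          # prints "storeales"
printf '%s\n' 'store%sales'     # safe: prints "store%sales"

# A name containing a space must be enclosed in double quotation
# marks to be passed as a single argument.
name="John Smith"
set -- "$name"
echo "arguments passed: $#"     # prints "arguments passed: 1"

# A backslash escapes an individual special character, such as $.
echo cn=admin\$1                # prints "cn=admin$1"
```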
5.5 Access Control for NSS
This section describes how NetIQ eDirectory, NCP (NetWare Core Protocol) Server, and Linux User
Management (LUM) work with Novell Storage Services to provide access to NSS volumes on OES 11
SP2 servers.
 Section 5.5.1, “Administrator User and Root User Roles for NSS,” on page 53
 Section 5.5.2, “NSS File System Users,” on page 55
 Section 5.5.3, “Novell Trustee Model,” on page 56
 Section 5.5.4, “POSIX Permissions,” on page 56
 Section 5.5.5, “How NSS Uses Novell Linux User Management,” on page 57
5.5.1 Administrator User and Root User Roles for NSS
The Administrator user and the Linux root user are two very different concepts. It is important to
understand the role of each in managing your NSS volume on OES 11 SP2 servers.
 “Administrator User” on page 54
 “Root User” on page 54
Administrator User
The Administrator user is an eDirectory user who is given all file system trustee rights for the server,
including the Supervisor right. The Administrator user account, or the Administrator equivalent user
account, is given the following privileges:
 The user identity and credentials are defined in eDirectory.
 The user is assigned as a trustee of the NSS volume and given all file system trustee rights for
that volume. You can also create a group for administrators with equivalent rights, and assign
the user to that group.
 The username must be Linux-enabled with Linux User Management (LUM), which gives the
user both an eDirectory GUID and a POSIX UID on the server.
NOTE: A user might still appear with the same eDirectory GUID after being disabled in LUM,
because NCP Server clears its cache periodically at 30-minute intervals. Do not restart eDirectory
during this interval. Run nsscon /ResetIDCache after 30 minutes. For more information about ID
cache commands, see Section A.5.2, “ID Cache Commands,” on page 373.
 The user belongs to the Administrator group for the server that is Linux-enabled with LUM.
The Administrator user who installs NSS on OES 11 SP2 is automatically given these privileges. Any
other administrator, including the Tree Administrator user, who you want to be able to manage the
NSS storage must be manually configured with the same privileges.
IMPORTANT: The Tree Administrator user is not automatically granted permissions to OES servers
installed in the tree.
For more information about Linux-enabled eDirectory users, see Section 5.5.2, “NSS File System
Users,” on page 55.
For a Linux server, the administrator logs in to iManager as the Administrator user (or Administrator
equivalent user) to manage the NSS volume. The Administrator user can also use the iManager Files
and Folders plug-in, NetStorage, and the Novell Client to manage file system trustee assignments,
trustee rights, inherited rights masks, and file and directory attributes. These tools can also be used to
purge and salvage files for volumes where the Salvage attribute is enabled.
Root User
The root user is a local Linux user who is the all-powerful connection when running on the Linux
server. The root user is hardcoded internally in NSS to have all access rights to all files. In this way,
the root user on Linux is similar to the Link Connection 0 user on NetWare.
The root user is not defined as a user in eDirectory, and the root user is not Linux-enabled with
LUM. This allows you to log in to the server as the root user when eDirectory services are not
available. The root user is the only local Linux user who is allowed to access NSS via the VFS layer
without having an eDirectory GUID.
The root user logs in directly to the server to use NSS utilities (such as nsscon, nssmu, rights, attrib,
metamig, ravsui, and raview) from the terminal console and to issue NSS command line commands
from the NSS Console (NSSCON). The root user can also execute applicable Linux commands and
utilities.
When accessing an NSS volume from the Linux environment, the root user observes some
information differently, depending on whether the eDirectory user is Linux-enabled or not. Any
native Linux commands that run from a terminal console on the NSS volume, such as the ls
command, are sent via the VFS layer. If the users are not Linux-enabled, instead of seeing the local
UID of the eDirectory user who owns the file, the root user sees all files as belonging to either the
Nobody user (if it exists) or the root user.
IMPORTANT: NSS reports the Nobody UID or the root user UID for display purposes only; it does
not change the true file ownership information stored as the user’s eDirectory GUID in the metadata
of the file system.
5.5.2 NSS File System Users
In addition to the root user and Administrator user, file system users fall into three categories:
 “eDirectory Users” on page 55
 “Linux-Enabled eDirectory Users” on page 55
 “Local Linux Users” on page 56
eDirectory Users
NSS uses the eDirectory GUID of a user to control access by using the Novell Trustee model. Users of
the NSS volume and the Administrator user (or Administrator equivalent user) who manages the
volume must be defined as users in NetIQ eDirectory. For information about managing users with
eDirectory, see the eDirectory website (https://www.netiq.com/documentation/edir88/).
To grant access to eDirectory users, you must assign them to be trustees in the file system, grant them
file system trustee rights, and set inherited rights filters. For more information about configuring
trustees for NSS, see Section 20.1, “Configuring File System Trustees, Trustee Rights, Inherited Rights
Filters, and Attributes,” on page 247.
Linux-Enabled eDirectory Users
Linux-enabled eDirectory users are users who are defined in eDirectory, granted file system trustee
rights to the NSS volume, and Linux-enabled with Linux User Management. Linux-enabled
eDirectory users have both a POSIX UID and an eDirectory GUID. You must Linux-enable users who
need to access NSS volumes via Samba, NFS, third-party AFP solutions, or who need to use Linux
utilities, commands, or services. NetStorage for Linux requires that users be Linux-enabled if
NetStorage is configured to use OpenSSH for user access.
WARNING: When creating a LUM (Linux-enabled eDirectory) user, consider the following:
 Do not assign the user the same UID as any local Linux user, and vice versa.
 Ensure that LUM usernames do not conflict with the names of any local Linux users.
Assigning conflicting or duplicate UIDs or usernames results in NSS access violations.
IMPORTANT: A Linux service or utility must also be enabled for LUM in order for users to access
the file system with it.
For OES 2 and later, it is no longer necessary to Linux-enable the users with LUM in order for user
quotas (space restrictions) to be enforced. NCP Server for Linux has been modified to provide the
GUID information that NSS needs for file ownership. NSS uses file ownership information to enforce
user space restrictions based on a user’s eDirectory username.
Users who create hard links must be Linux-enabled in order to use the ln command on the server. It
is not necessary to Linux-enable users if they are only consumers of the hard link.
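For example, a Linux-enabled user creates a hard link with the standard ln command. The following sketch uses a temporary directory so it is self-contained; on an OES server the paths would be on the NSS volume (such as /media/nss/VOL1):

```shell
# Create a second directory entry (hard link) for an existing file,
# then confirm both names refer to the same file via the link count.
dir=$(mktemp -d)
echo "quarterly report" > "$dir/q1.txt"

ln "$dir/q1.txt" "$dir/q1-link.txt"    # second name, same file

stat -c '%h' "$dir/q1.txt"             # prints the link count: 2
rm -rf "$dir"
```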
Beginning in OES 2 SP2, whether or not users are Linux-enabled with LUM, the file creator, modifier,
and deleter fields are recorded with the username of the user who performs the action. In prior
releases of OES 2, the deleter field is recorded as the root user or the Nobody user (if it exists) if the
user is not LUM-enabled.
NOTE: In OES 1, the modifier field and deleter field are reported as the root user or Nobody user for
non-LUM-enabled users.
For information about installing and configuring Linux User Management and enabling users and
groups for Linux, see the OES 11 SP2: Novell Linux User Management Administration Guide.
Local Linux Users
Local Linux users are users who are defined locally for the Linux server. The root user is the only
local Linux user who can see and access the NSS volume.
5.5.3 Novell Trustee Model
NSS controls access to data based on the Novell Trustee model, which uses file system trustee
assignments, trustee rights, and inherited rights filters to control file access. The trustee model
depends on the secure directory services provided by eDirectory to manage the file system users. For
example, eDirectory users must be authenticated by eDirectory to connect to the server, and NSS uses
the effective file system rights of the user to control access to specific files or directories.
For information about the Novell Trustee model, see “Understanding File System Access Control
Using Trustees” in the OES 11 SP2: File Systems Management Guide.
5.5.4 POSIX Permissions
For NSS volumes, the POSIX directory and file permissions are not used to determine access
permission. Access control is based on the Novell trustee model and not on the POSIX permissions or
access control lists (ACLs). NSS uses the POSIX permission fields to display Read Only, Read/Write,
Execute, and Hidden attributes for directories and files. NSS does not use the Group ID field; group
IDs associated with POSIX have no effect on files stored on NSS.
NSS does not allow the Linux system to set typical access control permissions in the POSIX fields. It
interprets Linux chmod commands to apply the values as NSS directory and file attributes, according
to the way NSS maps them to the User, Group, and Other permission fields.
By default, NSS sets the POSIX permissions fields for directories to 0777 (drwxrwxrwx). Some Linux
services specify permissions needed to use the service. NSS provides the nss /
PosixPermissionMask=mask option that allows you to change the default POSIX permissions, such
as for the Group or Other fields.
For example, SSH requires that the POSIX permissions on home directories be set so that the Other
field has no permissions. When you use NSS volumes as home directories, you must change the
permission to 0770 on the home directories. You can use the nss /PosixPermissionMask=0770
command in the NSS Console (nsscon) to modify the permissions.
For information and examples of how to interpret POSIX settings on your NSS volume, see “Viewing
Key NSS Directory and File Attributes as Linux POSIX Permissions” in the OES 11 SP2: File Systems
Management Guide.
5.5.5 How NSS Uses Novell Linux User Management
Novell Linux User Management is a directory-enabled application that simplifies and unifies the
management of user profiles on Linux-based platforms.
IMPORTANT: LUM is not required for access via NCP, Novell AFP, and Novell CIFS.
Linux-enabled eDirectory users have both UIDs as local Linux users and GUIDs as eDirectory users.
NSS needs the UID to execute protocols and services that communicate to NSS through the VFS layer
only. NSS uses the GUID to enforce access to the files and directories based on the Novell Trustee
model, which uses file system trustee assignments, trustee rights, and inherited rights filters.
With Linux protocols and services, the UID is passed to NSS via the VFS layer. There is no back-end
XML call to exchange GUID information as there is with the NCP interface. NSS uses a LUM API to
translate the UID to a GUID, and then caches the result for fast mapping on subsequent access by the
same UID. With the GUID-UID mapping, NSS finds the GUID for the user who issues the command,
then executes the command. Without LUM, NSS cannot identify a GUID for the UID it receives, and
rejects the command with an error.
For information about installing and configuring LUM, enabling Linux services and utilities, and
enabling users and groups for Linux, see the OES 11 SP2: Novell Linux User Management
Administration Guide.
5.6 File Access for Users
NSS supports access via NCP and other protocols to eDirectory users and Linux-enabled eDirectory
users.
IMPORTANT: NSS uses the Novell trustee model for file access. Users must be made file system
trustees and granted trustee rights to data on the NSS volume that you want them to be able to access.
Rights management can be done in multiple management tools, including iManager, Novell Remote
Manager, the Novell Client and other NCP services, and command line commands. For information,
see Section 20.1, “Configuring File System Trustees, Trustee Rights, Inherited Rights Filters, and
Attributes,” on page 247.
 Section 5.6.1, “NCP,” on page 58
 Section 5.6.2, “Novell AFP,” on page 58
 Section 5.6.3, “Novell CIFS,” on page 58
 Section 5.6.4, “Novell Domain Services for Windows,” on page 58
 Section 5.6.5, “Samba,” on page 59
 Section 5.6.6, “SSH (Secure Shell),” on page 59
 Section 5.6.7, “Accessing Files with Linux Services, Utilities, and Protocols,” on page 59
5.6.1 NCP
NCP (NetWare Core Protocol) is the default protocol for accessing data on NSS volumes. NCP Server
is required for NSS even if users access the volume via other protocols. Users access data on NSS
volumes by using the Novell Client software on their Windows, Vista, or Linux workstations. This
document refers collectively to those workstations as “Novell clients”.
NCP Server is installed by selecting NCP Server and Dynamic Storage Technology from the OES Services
menu in the YaST installation interface. For information about NCP Server, see the OES 11 SP2: NCP
Server for Linux Administration Guide.
NCP Server works with NetIQ eDirectory, the Novell Client, and other NCP-based services such as
NetStorage to authenticate and manage user sessions. When NCP Server is running, eDirectory users
who have been granted file system trustee access can access an NSS volume with the Novell Client or
NCP services. NSS cooperates with NCP Server to track file ownership and file system trustee
assignments, trustee rights, and inherited rights based on the Novell trustee model.
The Linux file system interface uses UTF-8 encoding for all filenames. When accessing files with
NCP, make sure to use the UTF-8 enabled NCP software that is available in the latest Novell Client.
If you are converting NSS volumes from NetWare to Linux, make sure you have resolved any UTF-8
problems before moving the volume to Linux. For information, see Supporting Mixed Language
Environments with Novell NetWare (TID 10097059) (http://support.novell.com/docs/Tids/Solutions/
10097059.html) in the Novell Support Knowledgebase.
For information about configuring and managing NCP Server, see the OES 11 SP2: NCP Server for
Linux Administration Guide.
5.6.2 Novell AFP
NSS supports access to NSS volumes using the Novell AFP (Apple Filing Protocol). For OES 2 SP1
and later, Novell AFP for Linux is installed by selecting Novell AFP from the OES Services menu in
the YaST install interface.
For information about Novell AFP, see the OES 11 SP2: Novell AFP for Linux Administration Guide.
5.6.3 Novell CIFS
NSS supports access to NSS volumes using Novell CIFS. For OES 2 SP1 and later, Novell CIFS is
installed by selecting Novell CIFS from the OES Services menu in the YaST install interface.
For information about Novell CIFS, see the OES 11 SP2: Novell CIFS for Linux Administration Guide.
5.6.4 Novell Domain Services for Windows
NSS supports access to NSS volumes using Novell Domain Services for Windows (DSfW). DSfW
configures Samba access for Samba/CIFS users. Administrators must export NSS volumes over
Samba so that domain users (eDirectory users in the DSfW domain partition) can access NSS volumes
over Samba/CIFS.
Samba/CIFS users under the domain are Linux-enabled with Linux User Management. The Domain
Users group must be associated with the UNIX Workstation objects of the server (or servers if the
volume is used in a cluster) where the volume is mounted in order to give the users access to the NSS
volume via Samba/CIFS.
5.6.5 Samba
Because NSS controls access based on file system trustee rights, not by the POSIX permissions, Samba
connections do not work until this trustee system has been configured for the Linux-enabled
eDirectory users of the NSS file system. You cannot set up the ACLs and standard POSIX permissions
for Samba access to an NSS volume. Instead, the Administrator user or Administrator user equivalent
must set up users in eDirectory and make file system trustee assignments, grant trustee rights, and
configure inherited rights masks on directories. The Samba service must also be enabled in LUM.
For information about configuring and managing Samba services for your OES 11 SP2 server, see the
OES 11 SP2: Novell Samba Administration Guide.
5.6.6 SSH (Secure Shell)
You can give users SSH (Secure Shell) access to NSS volumes by Linux-enabling users and the SSH
utility in Linux User Management. For information, see the OES 11 SP2: Novell Linux User
Management Administration Guide.
In addition, SSH requires that the POSIX permissions on home directories be set so that the Other
field has no permissions. By default, NSS sets the POSIX permissions to 0777 and SSH is disabled in
Linux User Management. If you use NSS volumes for home directories and you want users to have
SSH access to them, you must modify the POSIX permissions on NSS volumes to 0770. You must also
enable SSH with Linux User Management.
Add the following command in the /etc/opt/novell/nss/nssstart.cfg file to turn off all of the
bits corresponding to the Other field:
/PosixPermissionMask=0770
The setting applies to all NSS volumes on the server. If the volume is shared in a cluster, make sure to
add the command to the nssstart.cfg file and to Linux-enable SSH on all the nodes.
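The line can be added idempotently from the shell. This is a sketch (the helper function name is illustrative); run it as root against /etc/opt/novell/nss/nssstart.cfg on each node:

```shell
# Sketch: append /PosixPermissionMask=0770 to a configuration file
# only if the line is not already present (safe to re-run).
ensure_posix_mask() {
    cfg="$1"
    grep -qx '/PosixPermissionMask=0770' "$cfg" 2>/dev/null \
        || echo '/PosixPermissionMask=0770' >> "$cfg"
}

# On an OES server (as root), and on every node for a clustered volume:
# ensure_posix_mask /etc/opt/novell/nss/nssstart.cfg
```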
5.6.7 Accessing Files with Linux Services, Utilities, and Protocols
Only the root user and Linux-enabled eDirectory users who have been granted trustee access can see
and access the NSS volume from a Linux interface. Users must be Linux-enabled with Linux User
Management in order to use any of the standard Linux protocols, utilities, commands, services, or
APIs for the NSS volume.
IMPORTANT: Any Linux service or utility that you want users to have access to must also be
enabled in Linux User Management.
For information about installing and configuring Linux User Management, enabling users and
groups for Linux, and enabling Linux services and utilities, see the OES 11 SP2: Novell Linux User
Management Administration Guide.
5.7 Antivirus Support for NSS
For information about antivirus issues for NSS, see Developing and Testing Anti-virus Solutions for OES Linux (https://www.novell.com/communities/coolsolutions/developing-and-testing-anti-virus-solutions-oes-linux/).
For a current list of antivirus software vendors that support Novell Open Enterprise Server, see Novell
Open Enterprise Server Partner Support: Backup and Antivirus Support (http://www.novell.com/
products/openenterpriseserver/partners_communities.html). This list is updated quarterly.
The Apple Filing Protocol (AFP) support for NSS files on OES 2 SP1 onwards is implemented via a
technology that bypasses the real-time scanning employed by most OES 11 SP2 antivirus solutions.
NSS files shared through an AFP connection might be protected by on-demand scanning on the OES
11 SP2 server or by real-time and on-demand scanning on the Apple client.
5.8 Backup Support for NSS
For information about OES 11 SP2 backup support for NSS, see Chapter 26, “Managing Backup and
Restore for Data and Trustee Information,” on page 329.
For a current list of backup software vendors that support Novell Open Enterprise Server, see Novell
Open Enterprise Server Partner Support: Backup and Antivirus Support (http://www.novell.com/
products/openenterpriseserver/partners_communities.html). This list is updated quarterly.
5.9 NSS Support for Memory Mapped Files
NSS has limited support for memory mapped files, primarily to support loading programs. NSS does
not fully support memory mapped files, especially if the application uses sparse files.
For example, the CopyCat application used by Netatalk uses sparse files for its database. Netatalk
tries to create a CopyCat database as a sparse file called .AppleDB in the root of the volume by using
memory mapped IO. This can cause the server to hang if you are using an NSS volume as the
Netatalk share because of the limited support in NSS for this combination.
6 Using NSS in a Virtualization Environment
Use the information in this section to help you deploy Novell Storage Services file system and
services in a virtualization environment.
To get started with Xen virtualization, see the Virtualisation with Xen documentation.
 Section 6.1, “Guidelines for Using NSS in a Xen Virtualization Environment,” on page 61
 Section 6.2, “Installing OES 11 SP2 on a Virtual Machine,” on page 65
 Section 6.3, “Initializing New Virtual Disks on the Guest Server,” on page 65
 Section 6.4, “What’s Next,” on page 67
NOTE: To get started with third-party virtualization platforms, such as Hyper-V from Microsoft and
the different VMware offerings, refer to the documentation for the product that you are using.
6.1 Guidelines for Using NSS in a Xen Virtualization Environment
Consider the following guidelines when planning to use NSS in a virtualization environment:
 Section 6.1.1, “Host Server Issues,” on page 61
 Section 6.1.2, “Virtual Machine Issues,” on page 63
 Section 6.1.3, “Guest Server Issues,” on page 64
6.1.1 Host Server Issues
 “Running NSS on the Host Server Is Not Supported” on page 61
 “Using RAIDs” on page 62
 “Using Multipath Devices” on page 62
Running NSS on the Host Server Is Not Supported
NSS pools and volumes are not supported on the Xen host server in a Xen virtualization
environment. You can install NSS on the guest servers from inside the guest server environment, just
as you would if the guest servers were physical servers.
When you create a virtual machine, you must assign devices to it. If you plan to use the virtualization
guest server as a node in a cluster and you need to be able to fail over cluster resources to different
physical servers, you must assign SAN-based physical devices to the virtual machine. You create the
NSS pools and volumes from within the guest server.
Using NSS in a Virtualization Environment
61
If you install Novell Cluster Services in the host server environment, the cluster resources use shared
Linux POSIX volumes, and do not use shared NSS pools.
If you install Novell Cluster Services in the guest server environment, the guest server is a node in the
cluster. The disk sharing is managed by Novell Cluster Services from within the guest server
environment. You can use shared NSS pools as cluster resources that run on the guest server and on
other nodes in that cluster.
For information about deployment scenarios using shared NSS pools in clusters in a virtualization
environment, see “Configuring Novell Cluster Services in a Virtualization Environment” in the OES
11 SP2: Novell Cluster Services for Linux Administration Guide.
Using RAIDs
In a Xen virtualization environment, if you need to use RAIDs for device fault tolerance in a
high-availability solution, we recommend that you use standard hardware RAID controllers.
Hardware RAIDs provide better performance than software RAIDs on the virtualization host server
or guest server.
To get the best performance from a software RAID, create a RAID device on the Xen host and present
that device to the guest VM. Each of the RAID’s segments must be on different physical devices. It is
best to present the entire physical RAID device or a physical partition of the RAID device to the guest
VM, and to not present just a file-backed virtual device.
NSS is not supported in the virtualization host server environment, so NSS software RAIDs cannot be used there. On the host server, Xen supports software RAIDs managed with Linux mdadm.
If you attempt to create and manage a software RAID on the guest server in a production
environment, make sure to present different physical devices to the guest VM that you want to use
for the software RAID. Using segments from virtual devices that actually reside on the same physical
device on the host server slows performance and provides no protection against failed hardware
devices. The maximum number of disks that can be presented to the VM is 16 (xvda to xvdp). Xen
provides a mechanism to dynamically add and remove drives from a VM.
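As a sketch of presenting distinct physical devices to a guest VM for a software RAID, a Xen domU configuration might contain entries like the following. The file path, device names, and virtual disk targets are hypothetical examples, not values from this guide:

```
# /etc/xen/vm/oesguest (fragment; hypothetical device names)
# Each phy: entry maps a different physical disk on the host
# to a virtual disk (xvdb, xvdc) in the guest.
disk = [ 'phy:/dev/sdb,xvdb,w',
         'phy:/dev/sdc,xvdc,w' ]
```

Because each entry is backed by separate physical hardware, a software RAID built in the guest from xvdb and xvdc can survive the failure of one host disk.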
Using NSS software RAIDs in a virtualization guest server environment has not been tested.
Using Multipath Devices
If it is available, use your storage vendor’s multipath I/O management solution for the storage
subsystem. In this case, the multiple paths are resolved as a single device that you can assign to a
virtual machine.
Do not use multipath management tools in the guest environment.
If a storage device has multiple connection paths between the device and the host server that are not
otherwise managed by third-party software, use Linux multipathing to resolve the paths into a single
multipath device. When assigning the device to a VM, select the device by its multipath device node
name (/dev/mapper/mpathN). The guest server operating system is not aware of the underlying
multipath management being done on the host. The device appears to the guest server as any other
physical block storage device. For information, see Managing Multipath I/O for Devices (http://
www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/data/
multipathing.html) in the SLES 11: Storage Administration Guide (http://www.suse.com/
documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/data/bookinfo.html).
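To illustrate selecting the device by its multipath device node name, a Xen virtual machine configuration can reference the mapper node directly. The fragment below is a sketch with hypothetical names (mpatha, xvdb):

```
# Xen domU configuration fragment (hypothetical alias and target)
# The guest sees one ordinary block device; the host resolves the paths.
disk = [ 'phy:/dev/mapper/mpatha,xvdb,w' ]
```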
6.1.2 Virtual Machine Issues
 “Assigning Physical Disks or Disk Partitions to the Virtual Machine” on page 63
 “Assigning File-Backed Disk Images for Virtual Devices” on page 63
 “Configuring I/O Scheduler for OES 11 SP2 NSS on XEN Virtual Machine” on page 63
Assigning Physical Disks or Disk Partitions to the Virtual Machine
For the best performance on a Xen guest server, NSS pools and volumes should be created on block
storage devices that are local SCSI devices, Fibre Channel devices, iSCSI devices, or partitions on
those types of devices.
SATA or IDE disks have slower performance because the Xen driver requires special handling to ensure that data writes are committed to the disk in the order intended before it reports completion.
Assigning File-Backed Disk Images for Virtual Devices
Novell supports file-backed disk images on virtual machines, but does not recommend using them for important data, because the volume can become corrupted after a power failure or other catastrophic failure. File-backed volumes might be useful, for example, for training and sales demonstrations.
WARNING: In the event of a power failure or other catastrophic failure, data corruption can occur if you use Xen file-backed disk images for NSS volumes on the guest server.
Configuring I/O Scheduler for OES 11 SP2 NSS on XEN Virtual Machine
The OES 11 SP2 kernel provides four I/O schedulers to choose from for custom configuration. Each offers a different combination of optimizations. The four types of Linux I/O schedulers are the following:
 NOOP Scheduler
 Deadline Scheduler
 Anticipatory Scheduler
 Completely Fair Queuing (CFQ) Scheduler
The NOOP scheduler is the simplest of the I/O schedulers. It merges requests to improve throughput, but otherwise attempts no performance optimization. All requests go into a single unprioritized first-in, first-out queue for execution. It is ideal for storage environments with extensive caching, and for those with alternate scheduling mechanisms (for instance, a storage area network with multipath access through a switched interconnect, or virtual machines, where the hypervisor provides the I/O back end). It is also a good choice for systems with solid-state storage, where there is no mechanical latency to be managed.
The Deadline scheduler applies a service deadline to each incoming request. This sets a cap on per-request latency and ensures good disk throughput. Service queues are prioritized by deadline expiration, making this a good choice for real-time applications, databases, and other disk-intensive applications.
The Anticipatory scheduler does exactly as its name implies. It anticipates that a completed I/O request
will be followed by additional requests for adjacent blocks. After completing a read or write, it waits
a few milliseconds for subsequent nearby requests before moving on to the next queue item. Service
queues are prioritized for proximity, following a strategy that can maximize disk throughput at the
risk of a slight increase in latency.
The Completely Fair Queuing (CFQ) scheduler provides a good compromise between throughput and
latency by treating all competing processes even-handedly. Each process is given a separate request
queue and a dedicated time slice of disk access. CFQ provides the minimal worst-case latency on
most reads and writes, making it suitable for a wide range of applications, particularly multi-user
systems.
For OES 11 SP2 on a Xen guest, the default is the NOOP scheduler. To improve I/O performance, change the default NOOP scheduler to CFQ. Perform the following steps to view and change the I/O scheduler after the OES 11 SP2 installation:
To view the current scheduler, enter the following command:
cat /sys/block/{DEVICE-NAME}/queue/scheduler
To change the scheduler to CFQ, enter the following command:
echo cfq > /sys/block/{DEVICE-NAME}/queue/scheduler
For example, if your device name is sda, enter the following command to view the scheduler:
cat /sys/block/sda/queue/scheduler
The output is similar to the following, where the bracketed entry is the active scheduler:
[noop] anticipatory deadline cfq
To change the current NOOP scheduler to CFQ, enter the following command:
echo cfq > /sys/block/sda/queue/scheduler
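The commands above can be combined into a short script. The sketch below substitutes a demo file for the real sysfs path so it can run anywhere; on a live system, set SCHED_FILE to /sys/block/&lt;device&gt;/queue/scheduler and run it as root:

```shell
# Sketch: inspect and change the I/O scheduler through the sysfs interface.
# SCHED_FILE stands in for /sys/block/sda/queue/scheduler; the demo file
# below lets the sketch run without real hardware.
SCHED_FILE="${SCHED_FILE:-/tmp/scheduler.demo}"

# Mock the kernel's file format: the bracketed token is the active scheduler.
echo "[noop] anticipatory deadline cfq" > "$SCHED_FILE"

# Extract the active scheduler (the token inside the square brackets).
current=$(sed 's/.*\[\(.*\)\].*/\1/' "$SCHED_FILE")
echo "active scheduler: $current"

# Select CFQ; on a real system the kernel accepts the name and rewrites
# the file so that cfq becomes the bracketed entry.
echo cfq > "$SCHED_FILE"
```

The change made this way lasts only until the next reboot; use the boot-time elevator option described below this section’s per-device commands to make it persistent.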
You can also set the I/O scheduler globally at boot time. To do so, use either of the following methods:
Add the elevator option to your kernel command in the GRUB boot loader configuration file (/boot/grub/menu.lst), and then reboot. For example:
kernel /vmlinuz-2.6.16.60-0.46.6-smp root=/dev/disk/by-id/scsi-SATA_WDC_WD2500YS23_WD-WCANY4424963-part3 vga=0x317 resume=/dev/sda2 splash=silent showopts elevator=cfq
or
Using YaST2, edit the optional kernel command line parameter under System > Boot Loader for the
booting kernel image or any other kernel image listed and add elevator=cfq. For more information
on editing and using the boot loader configuration, see Configuring the Boot Loader with YaST
(http://www.suse.com/documentation/sles11/book_sle_admin/?page=/documentation/sles11/
book_sle_admin/data/sec_boot_yast2_config.html).
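As a sketch, the menu.lst edit can also be scripted. The MENU_LST path and the kernel line below are demo values; on a real server, back up /boot/grub/menu.lst first and run the edit as root:

```shell
# Sketch: append elevator=cfq to every "kernel" line in a GRUB menu.lst.
# MENU_LST points at a demo copy; on a real server use /boot/grub/menu.lst.
MENU_LST="${MENU_LST:-/tmp/menu.lst.demo}"

# Create a minimal demo menu.lst so the sketch runs anywhere.
cat > "$MENU_LST" <<'EOF'
title SLES 11
    kernel /vmlinuz-2.6.16.60-0.46.6-smp root=/dev/sda3 vga=0x317 showopts
    initrd /initrd-2.6.16.60-0.46.6-smp
EOF

# Add the elevator option only to kernel lines that do not already have one.
sed -i '/^[[:space:]]*kernel/{/elevator=/!s/$/ elevator=cfq/;}' "$MENU_LST"
```

After editing the real file, reboot for the new default scheduler to take effect.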
6.1.3 Guest Server Issues
Consider the issues in this section for an OES 11 SP2 guest server running in a Xen host environment:
 “Initializing Virtual Disks” on page 65
 “NSS Features that Are Not Supported in a Virtualization Environment” on page 65
Initializing Virtual Disks
The primary virtual disk (the first disk you assign to the virtual machine) is automatically recognized when you install the guest operating system. Other virtual devices must be initialized before any of their space is shown as available for creating a pool. For information, see Section 6.3, “Initializing New Virtual Disks on the Guest Server,” on page 65.
NSS Features that Are Not Supported in a Virtualization Environment
Some NSS features are not supported in a Xen guest server environment.
Table 6-1 NSS Feature Support in a Guest Server Environment

NSS Feature       NSS on Virtualized Linux Environment
Data shredding    Not supported
Multipath I/O     Not applicable; not supported on Linux
Software RAIDs    Not tested

6.2 Installing OES 11 SP2 on a Virtual Machine
When you install OES 11 SP2 on a virtual machine, we recommend that you configure a virtual
machine with multiple devices. Use the primary disk on the guest server as the system device with
LVM2 (the YaST install default) as the volume manager. After the install, assign additional storage
resources from the host to the virtual machine. In this scenario, NSS volumes are created only on the
data disks for the guest server, not on the system disk that you are using for the guest server’s system
device.
IMPORTANT: When you create the virtual machine, make sure to configure the size of the primary
virtual disk according to the amount of space you need for the boot (/boot), swap, and root (/)
volumes.
For information about creating a Xen virtual machine, see Virtualization with Xen documentation.
6.3 Initializing New Virtual Disks on the Guest Server
A new virtual disk can appear as an unformatted disk to the guest server if it does not have a
partition table associated with it. You must initialize the device on the guest server just as you do for
a blanked-out device on a physical server.
The primary virtual disk (the first disk you assign to the virtual machine) is automatically recognized
when you install the guest operating system. After the install, use NSS tools to initialize additional
blanked-out virtual devices where you plan to create NSS pools and volumes.
You can initialize the disk by using the Initialize Disk function in NSSMU or in the Storage plug-in to
iManager. For general instructions for initializing disks, see Section 10.5, “Initializing a Disk,” on
page 112.
To initialize devices for the guest server:
1 On the host, use the virtualization management tool to create and allocate virtual devices for the
virtual machine.
For information, see Virtualization with Xen documentation.
2 If the guest server is not running, boot the guest server now.
3 In iManager, click Storage > Devices.
4 Browse to locate and select the guest server to view a list of its devices.
The virtual server has a Server object in the NetIQ eDirectory database, just like a physical
server.
5 In the Devices list, select the newly added virtual device to view its details.
6 Verify that the device you selected is the new unformatted device, and not your system device or
a formatted device.
WARNING: Do not initialize the system disk.
For example, for an unformatted device, the Free Space size is reported as 0.00 Bytes.
7 Click Initialize.
When the page refreshes, the device is initialized and available for further configuration with
NSS pools and volumes.
8 Verify that the Free Space is now reported properly.
For example, after the device is initialized, the Free Space is reported to be the same as Capacity.
6.4 What’s Next
To get started with virtualization, see Virtualization with Xen documentation.
For information on setting up virtualized OES 11 SP2, see “Installing, Upgrading, or Updating OES
on a VM” in the OES 11 SP2: Installation Guide.
7 Cross-Platform Issues for NSS
This section describes the cross-platform compatibility issues for the Novell Storage Services file
system and services between NetWare and Novell Open Enterprise Server 11 SP2 servers. You should
understand these differences when working with NSS.
 Section 7.1, “Cross-Platform Issues for NSS Pool Snapshots,” on page 69
 Section 7.2, “Cross-Platform Issues for NSS Volumes,” on page 69
 Section 7.3, “Cross-Platform Issues for NSS Features,” on page 70
 Section 7.4, “Cross-Platform Issues for File Access,” on page 71
 Section 7.5, “Cross-Platform Issues for Management Tools,” on page 71
7.1 Cross-Platform Issues for NSS Pool Snapshots
Different pool snapshot technologies are used for NSS pools on NetWare and NSS pools on Linux.
You can create pool snapshots on either platform, but you should not move them to another platform.
Pool snapshots taken on NetWare do not work on Linux, and vice versa.
Consider these guidelines when working with NSS pool snapshots:
 The snapshots taken on a given platform are unusable if you move the pool’s devices cross-platform. Before you move a pool with existing snapshots to a different platform, delete all existing snapshots for the pool.
WARNING: You might not be able to open the original pool on the other platform if you do not
delete the snapshots.
 NSS does not support using pool snapshots for clustered pools.
 Do not use the Pool Snapshot feature for a clustered pool in a mixed-platform cluster.
 You must remove any existing pool snapshots for a clustered pool on NetWare before you cluster migrate the pool cluster resource from a NetWare server to a Linux server in a mixed-platform cluster. (Mixed-platform clusters are supported only during a rolling cluster conversion.)
7.2 Cross-Platform Issues for NSS Volumes
OES 11 SP2 requires a Linux POSIX file system volume for the operating system, such as Ext3. If you
plan to move NSS pools and volumes cross-platform between NetWare and Linux servers, consider
the following guidelines:
 You cannot install the Linux operating system on an NSS volume.
 You cannot install the NetWare operating system on a Linux POSIX file system volume or on an
NSS volume on Linux.
 Use NSS on Linux only for data pools and volumes.
 You should not move an NSS system volume from NetWare to Linux unless you intend to use it
as a data volume (or not at all) while it is mounted on the Linux server.
At install time, OES 11 SP2 sets up a sys: volume on a Linux POSIX file system with the Linux
path of /usr/novell/sys, and creates an NCP volume for it in the /etc/opt/novell/
ncpserv.conf file. The sys: volume contains the same login and public directories that exist on
NetWare. These directories let Novell clients run commands for logging in, mapping drives, and
so on, as well as providing the means for client commands to be run from login scripts.
 NSS volumes that were originally created on NetWare can be moved cross-platform from
NetWare to Linux if both platforms support the same media format.
 If you use shared pools in a cluster, only pools that are originally created on NetWare can be
migrated or failed back from Linux to NetWare. Mixed-platform clusters are supported only for
rolling cluster conversions from NetWare to Linux.
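The sys: volume registration mentioned in the guidelines above can be sketched as a one-line NCP volume entry in /etc/opt/novell/ncpserv.conf. The directive format shown here is an assumption and might differ by release:

```
# /etc/opt/novell/ncpserv.conf (fragment; sketch)
# Maps the NCP volume name SYS to its Linux POSIX path.
VOLUME SYS /usr/novell/sys
```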
7.3 Cross-Platform Issues for NSS Features
The following features of NSS on NetWare are not available for NSS on Linux. Use the native Linux
alternatives where available.
 Section 7.3.1, “Multipath I/O to Devices,” on page 70
 Section 7.3.2, “Removable Media,” on page 70
 Section 7.3.3, “Transaction Tracking System,” on page 70
7.3.1 Multipath I/O to Devices
The Media Manager solution for multipath I/O handling is not available for NSS on Linux. Use the
Linux multipath I/O management tools. You should configure multipath I/O before using NSS
management tools to create NSS software RAIDs, pools, or volumes on the devices. For information,
see Chapter 14, “Managing Multipath I/O to Devices,” on page 161.
7.3.2 Removable Media
Removable media such as CDs, DVDs, CD and DVD image files, and DOS partitions are typically
mounted as file systems native to the Linux platform. Removable media and partitions are mounted
by using Linux POSIX file systems options. For information, see “Other Supported File Systems”
(http://www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/
data/sec_filesystems_add.html) in the SUSE Linux Enterprise Server 11 SP3 Storage Administration
Guide (http://www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/
stor_admin/data/bookinfo.html).
7.3.3 Transaction Tracking System
The NSS Transaction Tracking System (TTS) is not available for NSS on Linux.
7.4 Cross-Platform Issues for File Access
Users of the NSS volume must be Linux-enabled with Linux User Management if you want to give
users access via any native Linux protocol or any Linux service or utility, such as Samba, FTP, or SSH.
You must also LUM-enable the Linux service or utility. For information, see Section 5.5, “Access
Control for NSS,” on page 53.
7.5 Cross-Platform Issues for Management Tools
 Section 7.5.1, “Storage-Related Plug-Ins for Novell iManager 2.7,” on page 71
 Section 7.5.2, “Interoperability of Protocols for the iManager Server and Target Server,” on
page 71
 Section 7.5.3, “Management Capabilities for Software RAIDs,” on page 71
7.5.1 Storage-Related Plug-Ins for Novell iManager 2.7
The following storage-related plug-ins for OES 2 and later require Novell iManager 2.7 (or later):
Storage-Related Plug-In                                        NSS on Linux   NSS on NetWare
Archive and Versioning (Archive and Version Services)          Yes            Yes
Clustering (Novell Cluster Services)                           Yes            Yes
Distributed File Services (Novell Distributed File Services)   Yes            Yes
Novell AFP                                                     Yes            Yes (OES 2 SP1 and later)
Novell CIFS                                                    Yes            Yes (OES 2 SP1 and later)
Novell NFS                                                     No             Yes
Files and Folders                                              Yes            Yes
Storage (NSS file system)                                      Yes            Yes
For more information about storage-related plug-ins, see Section 9.1.1, “Understanding Storage-Related Plug-Ins,” on page 78.
7.5.2 Interoperability of Protocols for the iManager Server and Target Server
Table 9-3 in “Protocols for iManager Communications” on page 82 provides information about the
protocols needed to use iManager to manage storage in a heterogeneous environment.
7.5.3 Management Capabilities for Software RAIDs
NSSMU supports creating nested RAID 0+1 and 5+1 devices.
8 Cluster-Enabling Shared NSS Devices and Pools with Novell Cluster Services
Shared Novell Storage Services devices and pools can be used in a cluster environment by using
Novell Cluster Services on your Novell Open Enterprise Server 11 (OES 11) SP2 server. The NSS
software is not clustered and must be installed and running on every server in the cluster.
 Section 8.1, “Cluster-Enabling NSS Pools and Volumes,” on page 73
 Section 8.2, “Guidelines for Cluster-Enabling NSS,” on page 73
8.1 Cluster-Enabling NSS Pools and Volumes
For information about installing Novell Cluster Services and cluster-enabling shared NSS devices
and pools in the Novell Cluster Services clusters, see “Configuring and Managing Cluster Resources
for Shared NSS Pools and Volumes” in the OES 11 SP2: Novell Cluster Services for Linux Administration
Guide.
For OES 2 SP3 to OES 11 SP2 upgrades, see “Requirements and Guidelines for Upgrading Clusters
from OES 2 SP3” in the OES 11 SP2: Novell Cluster Services for Linux Administration Guide.
For NetWare 6.5 SP8 to OES 11 SP2, see “Managing File Systems in Mixed-Mode Clusters” in the OES
11 SP2: Novell Cluster Services NetWare to Linux Conversion Guide.
8.2 Guidelines for Cluster-Enabling NSS
Novell Cluster Services must already be installed and configured on the server. Table 8-1 provides
references for cluster-related tasks for NSS.
Table 8-1 Clustering Guidelines for NSS

Shared device
  Description: Enable the Shareable for Clustering parameter to support high-availability server clusters with Novell Cluster Services.
  Reference: Section 10.6, “Sharing Devices,” on page 113

Shared pools
  Description: Enable the pool for clustering when you create the pool. Devices contributing space to the pool must already be marked as shareable in order to be able to create a shared pool. Unshared pools can be created on shared devices. Pools created on NetWare can fail over to a Linux node in a mixed-node cluster, but only pools that were originally created on NetWare can fail back from Linux to NetWare.
  Reference: Section 15.2, “Creating a Pool,” on page 164

Multiple Server Activation Prevention (MSAP) for pools
  Description: MSAP prevents some accidental activations of a pool on more than one server at a time. MSAP is enabled by default.
  Reference: Section 15.13, “Preventing Pools from Activating on Multiple Servers,” on page 175

Pool snapshot
  Description: On Linux, NSS does not support using pool snapshots for clustered pools. You must remove any existing pool snapshots for a clustered pool on NetWare before you cluster migrate the pool cluster resource from a NetWare server to a Linux server during a rolling cluster conversion.
  WARNING: You might not be able to open the original pool on Linux if you do not delete the snapshots before you attempt to cluster migrate the pool cluster resource from NetWare to Linux.
  Reference: “Cross-Platform Issues for NSS Pool Snapshots” on page 69

Shared volumes
  Description: You must create at least one shared volume in a cluster-enabled pool. We recommend using only one volume per pool in a cluster. Typically, all volumes are created when you initially set up the cluster resource and before you need to cluster migrate or fail over the resource to other servers in the cluster. The Server, Pool, Volume, Cluster Resource, and Cluster objects are recommended to be in the same context (such as ou=ncs,o=novell). If the objects are in different contexts, you might receive an eDirectory error when you attempt to modify the pool; create or modify the volumes, home directories, Distributed File Services junctions, or any other elements that are managed using eDirectory objects. To resolve the problem, you must cluster migrate the pool cluster resource back to the node where the pool was created in order to perform those management tasks.
  Reference: Section 18.2.4, “Guidelines for NSS Volumes in a Mixed-Node Cluster,” on page 215

Shared encrypted volume
  Description: When shared pools contain encrypted volumes, you must provide the encryption password the first time that a volume is mounted after a reboot. Thereafter, the nodes in the cluster share the key.
  Reference: “Sharing Encrypted NSS Volumes in a Cluster” on page 242

Shadow volume pair using Dynamic Storage Technology
  Description: When a shared pool contains a volume that is part of a shadow volume pair, the other volume in the shadow pair can reside on the same pool or in a different pool on the same server. If it is on a different pool, both pools must be managed by the same cluster resource. In a cluster, shadow volumes can reside on and fail over only to nodes running OES 2 and OES 11 SP2.
  References: OES 11 SP2: Dynamic Storage Technology Administration Guide; “Using Encrypted Volumes in a Server Cluster” on page 244
Cluster-Enabling an Existing NSS Pool and Its Volumes
Before you attempt to cluster-enable an existing pool and its volumes, the pool should be deactivated
and its volumes should be dismounted.
Comment out (or remove) the volume’s entry in the /etc/fstab file. The load and unload scripts that are created when you cluster-enable the pool are responsible for mounting and dismounting the volume after the pool is cluster-enabled. If you leave /etc/fstab as-is, the server continues to try to mount the NSS volume on reboot, but it does not succeed. For information about cluster-enabling an existing pool and its volumes, see “Cluster-Enabling an Existing NSS Pool and Its Volumes” in the Novell Cluster Services for Linux Administration Guide.
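A sketch of that fstab edit follows. The FSTAB path, the VOL1 volume name, and the mount options are demo values; on a real server, edit /etc/fstab as root after making a backup:

```shell
# Sketch: comment out an NSS volume's /etc/fstab entry before
# cluster-enabling its pool. FSTAB points at a demo copy here.
FSTAB="${FSTAB:-/tmp/fstab.demo}"

# Create a minimal demo fstab with a hypothetical NSS volume entry.
cat > "$FSTAB" <<'EOF'
/dev/sda2 swap swap defaults 0 0
/dev/sda3 / ext3 acl,user_xattr 1 1
VOL1 /media/nss/VOL1 nssvol noauto,rw,name=VOL1 0 0
EOF

# Prefix the NSS volume's line with '#' so it is not mounted at boot;
# the cluster resource's load script mounts the volume instead.
sed -i '/\/media\/nss\/VOL1/s/^/#/' "$FSTAB"
```

The system entries (swap and root) are left untouched; only the NSS volume line is disabled.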
9 Management Tools for NSS
This section identifies the various tools for managing the Novell Storage Services file system.
 Section 9.1, “Novell iManager and Storage-Related Plug-Ins,” on page 77
 Section 9.2, “NSS Management Utility (NSSMU) Quick Reference,” on page 92
 Section 9.3, “NSS Commands and Utilities,” on page 96
 Section 9.4, “Novell NetStorage,” on page 97
 Section 9.5, “Novell Remote Manager,” on page 99
 Section 9.6, “Novell Client,” on page 101
 Section 9.7, “ConsoleOne,” on page 102
 Section 9.8, “Virtual File Services, APIs, and Scripts,” on page 103
 Section 9.9, “Novell Linux Volume Manager (NLVM),” on page 103
NOTE: NSS also supports the use of third-party tools on both kernels for advanced data protection
and management, virus scanning, and traditional archive and backup solutions.
9.1 Novell iManager and Storage-Related Plug-Ins
Novell iManager is a Web browser-based tool used for configuring, managing, and administering
NetIQ eDirectory objects on your network. The Storage plug-in is the primary tool used to manage
NSS devices, software RAIDs, pools, and volumes.
Novell iManager gives you the ability to assign specific tasks or responsibilities to user accounts and
to present the user with only the tools (with the accompanying rights) necessary to perform those sets
of tasks.
NOTE: The storage-related plug-ins do not support Mobile iManager.
This section describes the following:
 Section 9.1.1, “Understanding Storage-Related Plug-Ins,” on page 78
 Section 9.1.2, “Prerequisites for Using the Storage-Related Plug-Ins,” on page 81
 Section 9.1.3, “Downloading and Installing Plug-In Updates,” on page 83
 Section 9.1.4, “Accessing Novell iManager,” on page 83
 Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83
 Section 9.1.6, “Selecting a Server to Manage,” on page 84
 Section 9.1.7, “Storage Plug-In Quick Reference,” on page 84
 Section 9.1.8, “Files and Folders Plug-In Quick Reference,” on page 90
9.1.1 Understanding Storage-Related Plug-Ins
The storage-related plug-ins share common management code, so you must manage them together when installing or updating any of the plug-ins.
 “Overview of the Storage-Related Plug-Ins for iManager” on page 78
 “Archive Versioning Management” on page 79
 “Cluster Services Management” on page 79
 “Distributed File Services Management” on page 79
 “Files and Folders Management” on page 79
 “File Protocols Management (AFP and CIFS)” on page 80
 “Samba Management” on page 80
 “NSS Management” on page 80
 “Storage Management” on page 80
 “Files for Storage-Related Plug-Ins” on page 80
Overview of the Storage-Related Plug-Ins for iManager
Table 9-1 identifies the storage-related plug-ins for Novell iManager 2.7.4 in OES 2 SP2 and later.
Table 9-1 Storage-Related Plug-Ins for iManager

Storage-Related Plug-In                NPM File          Role in iManager            Use to Manage
Archive Versioning Management          arkmgmt.npm       Archive Versioning          Novell Archive and Version Services
Cluster Services Management            ncsmgmt.npm       Clusters                    Novell Cluster Services
Distributed File Services Management   dfsmgmt.npm       Distributed File Services   Novell Distributed File Services
Files and Folders Management           fileman.npm       Files and Folders           Novell Files and Folders
AFP Management                         afpmgmt.npm       File Protocols > AFP        Novell AFP for Linux; Novell AFP for NetWare
CIFS Management                        cifsmgmt.npm      File Protocols > CIFS       Novell CIFS for Linux; Novell CIFS for NetWare
Samba Management                       sambamgmt.npm     File Protocols > Samba      Novell Samba
NSS Management                         nssmgmt.npm       Storage                     Novell Storage Services
Storage Management                     storagemgmt.npm   No role; required when using any combination of storage-related plug-ins. Contains common code for all storage-related plug-ins.
IMPORTANT: The storage-related plug-ins share common code in the storagemgmt.npm file. If you use more than one of these plug-ins, you should install, update, or remove them all at the same time to make sure the common code works for all plug-ins. If you remove only one of the plug-ins, the common code is removed and the remaining installed plug-ins break.
Archive Versioning Management
The Archive Versioning Management (arkmgmt.npm) file contains the Archive Versioning plug-in for
Novell Archive and Version Services. This plug-in requires the NSS Storage Management plug-in and
the Storage Management plug-in.
For information about using this plug-in, see the OES 11 SP2: Novell Archive and Version Services
Administration Guide.
Cluster Services Management
The Cluster Services Management (ncsmgmt.npm) file contains the Clustering plug-in for managing
Novell Cluster Services. This plug-in requires the Storage Management plug-in. The NSS Storage
Management plug-in is required for cluster-enabling NSS pools and volumes.
For information about using this plug-in, see OES 11 SP2: Novell Cluster Services for Linux
Administration Guide.
Distributed File Services Management
Use the Distributed File Services plug-in to manage Novell DFS for NSS volumes. The DFS plug-in
also requires the NSS Storage Management and Storage Management plug-ins.
For information about using the DFS plug-in, see OES 11 SP2: Novell Distributed File Services
Administration Guide for Linux.
Files and Folders Management
Provides a file tree view for managing and browsing files and NetIQ eDirectory objects.
When NCP Server is installed, this plug-in supports the Novell Trustee Model of access control for
Novell Storage Services (NSS) volumes on Linux and NetWare and for NetWare Core Protocol
volumes on Linux POSIX file systems. Use it to manage the file system trustees, trustee rights, and
inherited rights filters for files and folders on NSS and NCP volumes. On Linux, install the NCP
Server and Dynamic Storage Technology service. On NetWare, the NCP Server service is
automatically installed.
For file systems that have Volume objects in eDirectory, this plug-in allows you to upload, download,
and delete files and to create and delete directories. For NSS volumes, it provides additional features
that allow you to manage file system attributes, to salvage and purge deleted files and folders, and to
manage directory quotas.
File Protocols Management (AFP and CIFS)
Beginning in OES 2 SP2, the File Protocols plug-in for the Native File Access Protocols for NetWare
services has been replaced by two plug-ins: Novell AFP (afpmgmt.npm) and Novell CIFS
(cifsmgmt.npm). These plug-ins support AFP and CIFS services for NSS volumes on both Linux and
NetWare.
The AFP and CIFS plug-ins also require the NSS Storage Management (nssmgmt.npm) and Storage
Management (storagemgmt.npm) plug-ins.
IMPORTANT: Make sure to uninstall the existing File Protocols plug-in, NSS plug-in, and Storage
Management plug-in before you upgrade to these new plug-ins.
For information about managing these services, see the following guides:
 OES 11 SP2: Novell AFP for Linux Administration Guide
 OES 11 SP2: Novell CIFS for Linux Administration Guide
Samba Management
Provides a Web interface for configuring your Samba server to provide file and print services to
clients that support the Microsoft SMB and CIFS protocols. This plug-in supports CIFS access to files
on NSS volumes, NCP volumes, and Linux POSIX volumes.
NSS Management
The Novell Storage Services Management (nssmgmt.npm) plug-in allows you to manage NSS services
(devices, software RAIDs, pools, and volumes). Information about using this plug-in is the focus of
this guide.
Storage Management
The Storage Management (storagemgmt.npm) file contains common code that is shared by the
storage-related plug-ins. If you use more than one of these storage-related plug-ins, you should
install, update, or remove the storagemgmt.npm file and all installed storage-related .npm files at the
same time.
IMPORTANT: Removing any one of the installed storage-related plug-ins also removes the common
code file (storagemgmt.npm), which breaks the remaining installed storage-related plug-ins.
Files for Storage-Related Plug-Ins
The module files (see Table 9-1, “Storage-Related Plug-Ins for iManager,” on page 78) are located in
the /var/opt/novell/iManager/nps/portal/modules/ directory.
The Java JAR files are located in the /var/opt/novell/iManager/nps/WEB-INF/lib/ directory.
The Tomcat TLD files are located in the /var/opt/novell/iManager/nps/WEB-INF/ directory.
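As a quick sanity check, you can list which storage-related plug-in files are present in these directories. This is only a sketch: the helper function name is ours, while the paths are the iManager 2.7 default locations cited above.

```shell
#!/bin/sh
# List plug-in artifacts (*.npm modules, *.jar libraries, *.tld descriptors)
# found directly in a given iManager directory. Helper name is illustrative.
list_plugin_files() {
    dir="$1"; pattern="$2"
    # Print matching regular files, or nothing if the directory is absent.
    [ -d "$dir" ] && find "$dir" -maxdepth 1 -type f -name "$pattern"
}

# Default iManager 2.7 locations from this section:
list_plugin_files /var/opt/novell/iManager/nps/portal/modules '*.npm'
list_plugin_files /var/opt/novell/iManager/nps/WEB-INF/lib '*.jar'
list_plugin_files /var/opt/novell/iManager/nps/WEB-INF '*.tld'
```

On a server without iManager installed, the calls simply print nothing.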
Table 9-2 Java and Tomcat Files for Storage-Related Plug-Ins

Novell AFP: Java files afpGadgets.jar, afpManageLib.jar, afpTags.jar; Tomcat file afp.tld
Archive and Version Services: Java files arkGadgets.jar, arkTags.jar, arkManageLib.jar; Tomcat file ark.tld
Novell CIFS: Java files cifsGadgets.jar, cifsManageLib.jar; Tomcat file cifs.tld
Novell Cluster Services: Java files ncsGadgets.jar, ncsManageLib.jar, ncs.jar, ncsTags.jar; Tomcat file ncs.tld
Novell Distributed File Services: Java files dfsGadgets.jar, dfsManageLib.jar, nasGadgets.jar; Tomcat file dfs.tld
Files and Folders: Java file fileman.jar
Novell Samba: Tomcat file samba.tld
Novell Storage Services: Java files nssGadgets.jar, nssManageLib.jar
Storage Management (common to all): Java files nssAdminClient.jar, nssGadgetLib.jar, nssTags.jar; Tomcat file nss.tld
9.1.2
Prerequisites for Using the Storage-Related Plug-Ins
The requirements in this section apply to the storage-related plug-ins for iManager 2.7 that are
described in “Understanding Storage-Related Plug-Ins” on page 78.
 “Web Browser Language Setting” on page 81
 “Web Browser Character Encoding Setting” on page 82
 “Protocols for iManager Communications” on page 82
Web Browser Language Setting
The iManager plug-in might not operate properly if the highest priority Language setting for your
Web browser is set to a language other than one of iManager’s supported languages. To avoid
problems, in your Web browser, click Tools > Options > Languages, then set the first language
preference in the list to a supported language.
Web Browser Character Encoding Setting
Supported language codes are Unicode (UTF-8) compliant. To avoid display problems, make sure the
Character Encoding setting for the browser is set to Unicode (UTF-8) or ISO 8859-1 (Western, Western
European, West European).
In a Mozilla browser, click View > Character Encoding, then select the supported character encoding
setting.
In an Internet Explorer browser, click View > Encoding, then select the supported character encoding
setting.
Protocols for iManager Communications
The storage-related plug-ins can be used to manage OES 11 SP2 servers. Different communications
protocols are required for connecting the various platforms.
Table 9-3 provides information about the protocols needed to use iManager to manage storage in a
heterogeneous environment. A protocol annotated with an asterisk (*) is the default and is
configured automatically on the servers. The protocols that you use must be loaded and running on
both the iManager server and the target server that you want to manage.
For further clarification about WBEM, CIFS, and NCP, see the discussion that follows Table 9-3.
Table 9-3 Interoperability of Protocols Used to Connect the iManager Server and Target Servers

The protocols are listed per target platform; an asterisk (*) indicates the default.

iManager server on OES 1 Linux and later:
  To OES 1 Linux and later: WBEM, CIFS
  To OES 1 SP1 NetWare, NetWare 6.5 SP4, and later: WBEM, CIFS
  To OES 1 NetWare and NetWare 6.5 SP3: WBEM (Start WBEM), CIFS
  To NetWare 6.5 SP2: CIFS (Field Patch 2B)

iManager server on OES 1 SP1 NetWare, NetWare 6.5 SP4, and later:
  To OES 1 Linux and later: WBEM
  To OES 1 SP1 NetWare, NetWare 6.5 SP4, and later: *WBEM, NCP, CIFS
  To OES 1 NetWare and NetWare 6.5 SP3: WBEM (Start WBEM), NCP, CIFS
  To NetWare 6.5 SP2: NCP, CIFS (Field Patch 2B)

iManager server on OES 1 NetWare and NetWare 6.5 SP3:
  To OES 1 Linux and later: WBEM
  To OES 1 SP1 NetWare, NetWare 6.5 SP4, and later: *WBEM, NCP, CIFS
  To OES 1 NetWare and NetWare 6.5 SP3: WBEM (Start WBEM), NCP, CIFS
  To NetWare 6.5 SP2: NCP, CIFS (Field Patch 2B)

iManager server on NetWare 6.5 SP2:
  To OES 1 Linux and later: Not available
  To OES 1 SP1 NetWare, NetWare 6.5 SP4, and later: NCP
  To OES 1 NetWare and NetWare 6.5 SP3: NCP
  To NetWare 6.5 SP2: NCP, CIFS (Field Patch 2B)
Where WBEM is the default protocol, WBEM is loaded and runs automatically when you start the
server. Otherwise, you must start WBEM to use the protocol.
For OES 11 SP2, the storage-related plug-ins for iManager require CIMOM connections for tasks that
transmit sensitive information (such as a username and password) between iManager and the
_admin volume on the OES 11 SP2 server that you are managing. Typically, CIMOM is running, so
this should be the normal condition when using the server. CIMOM connections use Secure HTTP
(HTTPS) for transferring data, and this ensures that sensitive data is not exposed.
If CIMOM is not currently running when you click OK or Finish for the task that sends the sensitive
information, you get an error message explaining that the connection is not secure and that CIMOM
must be running before you can perform the task.
IMPORTANT: If you receive file protocol errors, it might be because WBEM is not running.
To check the status of SFCB:
1 As root in a console shell, enter
rcsfcb status
To start SFCB:
1 As root in a console shell, enter
rcsfcb start
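The two steps above can be combined so that SFCB is started only when the status check fails. The wrapper function below is our own sketch, not a Novell utility; on an OES server you would call it (as root) with the rcsfcb init script.

```shell
#!/bin/sh
# Start a service via its init script only if it is not already running.
# Wrapper name is ours; on OES the script of interest is rcsfcb.
ensure_running() {
    svc="$1"
    if "$svc" status >/dev/null 2>&1; then
        echo "$svc: already running"
    else
        "$svc" start
    fi
}

# On an OES server, as root: ensure_running rcsfcb
```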
9.1.3
Downloading and Installing Plug-In Updates
For information, see the iManager 2.7.x website (https://www.netiq.com/documentation/
imanager27/).
9.1.4
Accessing Novell iManager
1 Launch a Web browser.
2 Click File > Open, then enter
https://server-IP-address/nps/iManager.html
The URL is case sensitive. Replace server-IP-address with the actual server DNS name or IP
address. For example:
https://192.168.1.1/nps/iManager.html
The iManager Login page opens.
3 Use your administrator username and password to log in to the eDirectory tree that contains the
server you want to manage.
In Novell iManager, you can access only the roles and tasks you are authorized to manage. For
full access to all available Novell iManager features, you must log in as Supervisor of the tree.
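Because the URL is case sensitive, it can help to build it programmatically before opening it in a browser. The helper below is hypothetical, for illustration only:

```shell
#!/bin/sh
# Build the case-sensitive iManager URL for a given host.
# The function name is ours; the path /nps/iManager.html is from this section.
imanager_url() {
    # $1 = server DNS name or IP address
    printf 'https://%s/nps/iManager.html\n' "$1"
}

# Example: imanager_url 192.168.1.1
# You could then probe reachability with: curl -k -I "$(imanager_url 192.168.1.1)"
```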
9.1.5
Accessing Roles and Tasks in iManager
1 Access iManager, then log in to the eDirectory tree where the server you want to manage resides.
For information, see Section 9.1.4, “Accessing Novell iManager,” on page 83.
2 In Roles and Tasks, click the Storage role to expand its main tasks.
As you work in the storage-related plug-ins, use the navigation links at the top of the page,
referred to as “breadcrumbs,” to return to pages you recently visited, or use the links in Roles and
Tasks. If you use the Refresh and Back features of your Web browser to navigate, iManager returns
you to the initial page you encountered after login.
3 To activate the options on the selected page, select a server to manage.
For information, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
9.1.6
Selecting a Server to Manage
Before you can access the management options on a selected task page, you must select a server to
manage that is in the same NetIQ eDirectory tree where you are currently logged in.
1 Use one of the following methods to select a server in the tree where you are logged in:
 Type the NetIQ eDirectory distinguished server name for the server you want to manage,
then press Tab or click somewhere on the page outside of the Server field to enter your
selection. For example:
svr1.company
 Click the Search icon to open the eDirectory Object Selector. Browse or search the list to
locate the server you want to manage, then click the server name.
 Click the Object History icon to select a server you have recently managed.
2 Wait for iManager to retrieve information about that server and display the appropriate
information on the task page you are in.
It might take several seconds to retrieve the information, depending on the size and complexity
of your storage solution.
9.1.7
Storage Plug-In Quick Reference
The Storage role comprises the following key tasks:
 “Pools” on page 85
 “Volumes” on page 86
 “User Quotas” on page 87
 “Partitions” on page 87
 “Software RAIDs” on page 88
 “Devices” on page 89
Pools
You can create and manage storage pools to efficiently use all free space. You can also enable the Pool
Snapshot feature to preserve point-in-time views of data pools and to support data backup and
recovery. Snapshots are not supported for clustered pools.
Table 9-4 Pool Management Tasks
Task
Description
Reference
Pools
Displays a list of all pools on the
selected server.
“Viewing Pools on a Server” on
page 171
Details
Displays information about a selected
pool.
“Viewing Pool Details” on page 172
New
Creates a new pool on the selected
server.
“Creating a Pool” on page 164
Delete
Deletes a selected pool and all of its
volumes and their data.
“Deleting a Pool” on page 171
Rename
Renames a selected pool.
“Renaming a Pool” on page 170
Activate
Mounts and activates a selected
deactive or unmounted pool.
“Activating and Deactivating Pools” on
page 168
Deactivate
Deactivates a selected active pool.
“Activating and Deactivating Pools” on
page 168
Increase Size
Allows you to select one or more
partitions from available devices in
order to expand the size of a pool.
“Increasing the Size of a Pool” on
page 169
Snapshot
Opens the Pool Snapshots page
where you can create and manage
pool snapshots.
“Managing NSS Pool Snapshots” on
page 193
This option is disabled for online
snapshot pools, because you cannot
create a snapshot of a snapshot.
Update eDirectory
Updates (replaces) the eDirectory object for a selected pool. Use only if a
storage object is not recognized or has been lost.
“Updating eDirectory Pool Objects” on page 179
Deleted Volume
Displays a list of deleted volumes in a
pool, and allows you to salvage or
purge them. You can also pause and
resume the autopurging of deleted
volumes.
“Viewing, Salvaging, or Purging
Deleted NSS Volumes in a Pool” on
page 301
Offline
For a pool snapshot that is online as
an active pool, takes it offline. This
does not delete the pool snapshot.
“Viewing and Managing an Online
Pool Snapshot” on page 206
Partitions
Displays a list of the partitions
comprising the pool’s storage space.
“Viewing Partition Information for a
Pool” on page 173
Volumes
Lists all volumes on a selected pool.
“Viewing Volume Information for a
Pool” on page 174
Devices
Displays a list of the devices that
contribute space to a selected pool.
“Viewing Device Information for a
Pool” on page 174
Volumes
You can create and manage NSS volumes, including their key attributes.
Table 9-5 Volume Management Tasks
Task
Description
Reference
Volumes
Displays a list of all volumes on the
selected server.
“Managing NSS Volumes” on
page 209
Details
Displays information about a selected
volume.
“Viewing the Details of an NSS
Volume” on page 219
New
Creates a new unencrypted volume.
“Creating Unencrypted NSS Volumes”
on page 216
To create an encrypted NSS volume,
use NSSMU.
Delete
Deletes a selected volume and all of
its contents.
“Deleting an NSS Volume” on
page 235
Rename
Renames a selected volume.
“Renaming an NSS Volume” on
page 226
Activate
Mounts and activates a deactive or
unmounted volume.
“Activating and Deactivating an NSS
Volume” on page 228
Deactivate
Deactivates an active volume.
“Activating and Deactivating an NSS
Volume” on page 228
Mount
Mounts an unmounted volume. A
volume must be mounted to view its
details.
“Mounting and Dismounting an NSS
Volume” on page 228
Dismount
Dismounts a mounted volume.
“Mounting and Dismounting an NSS
Volume” on page 228
Move
Moves a selected NSS volume for the
purpose of reorganizing and
redistributing storage on the same
server (or to other servers) in
response to changing business needs.
For requirements, guidelines, and
procedures for moving volumes, see
“Using DFS to Move NSS Volumes” in
the OES 11 SP2: Novell Distributed
File Services Administration Guide for
Linux.
Split
Splits a selected NSS volume for the
purpose of reorganizing and
redistributing storage on the same
server (or to other servers) in
response to changing business needs.
For requirements, guidelines, and
procedures for splitting volumes, see
“Using DFS to Split NSS Volumes” in
the OES 11 SP2: Novell Distributed
File Services Administration Guide for
Linux.
Properties
Displays a list of volume attributes, and allows you to set the attributes and
name space to use for a volume. It also displays usage statistics for a volume.
“Viewing Properties of an NSS Volume” on page 220
User Quotas
Displays user quotas and space consumed for users of the volume.
Administrators can view and manage user quotas. Users can view their own
user space quotas.
“Configuring a User Space Quota” on page 291
Offline
Takes a selected snapshot volume
offline, where it remains active in the
background.
“Viewing and Managing an Online
Pool Snapshot” on page 206
Update eDirectory
Updates (replaces) the eDirectory
object. Use only if a storage object is
not recognized or has been lost.
“Updating eDirectory Volume Objects”
on page 219
User Quotas
The User Quotas task (see Table 9-6) allows managers to view and manage user space restrictions. It
can be specified as an iManager role-based task for administrators. An individual user can log in to
iManager under his or her own username to view his or her own user space restrictions.
Table 9-6 User Quotas Task
Task
Description
Reference
User Quotas
For the Administrator user, displays
quotas and allows the administrator
user to manage user quotas for all
users.
“Configuring a User Space Quota” on
page 291
For the user, displays the user’s own
space restrictions.
Partitions
NSS creates and deletes partitions for you when you work with software RAIDS and pools in
iManager and NSSMU. To delete partitions manually, use the NSSMU Partitions page.
Table 9-7 Partitions Tasks
Task
Description
Reference
Partitions
Displays a list of all partitions on a
selected server.
“Viewing a List of Partitions” on
page 131
Details
Displays information about a selected
partition.
“Viewing Details for a Partition” on
page 132
Edit (NetWare)
Adds a label for a selected partition.
“Labeling a Partition” in the NW 6.5
SP8: NSS File System Administration
Guide
Software RAIDs
You can create and manage software RAID 0, 1, and 5 devices to improve storage performance and
reliability. You can use NSSMU to create a software RAID 0+1 or 5+1 device.
Table 9-8 Software RAID Management Tasks
Task
Description
Reference
Software RAIDs
Displays a list of software RAID
devices on the selected server.
“Viewing a List of Software RAID
Devices on a Server” on page 143
Details
Displays the details of a selected
software RAID device.
“Viewing Details of a Software RAID
Device” on page 144
New
Creates a new software RAID 0, 1, or
5 device for the selected server.
“Creating Software RAID Devices with
iManager” on page 145
Rename
Renames a selected software RAID
device.
“Renaming a Software RAID Device”
on page 152
Increase Size
Expands an existing software RAID
device by adding a partition to the
RAID (up to the limit for that type of
RAID). If there are no devices
available, the button is disabled.
“Increasing the Size of a Software
RAID Device” on page 152
Each partition you add must reside on
a different device. You can add
partitions that match the shared state
of current member devices. They must
be all local or all shared; you cannot
mix them.
Restripe
Completes a restriping process for a
RAID 0 or RAID 5 device that has
been paused.
“Restriping a Software RAID” on
page 154
Delete
Deletes the selected software RAID
device and removes the RAID
relationship between member
partitions and the underlying storage
structures. All data on the member
partitions is lost.
“Deleting a Software RAID Device” on
page 157
Pools
Lists pools on a selected software RAID device.
“Viewing Pools on a Software RAID
Device” on page 158
Partitions
Lists details about partitions (member
segments) in the RAID. In some
cases, you can also delete a partition
to repair a RAID.
“Viewing Partitions on a Software
RAID Device” on page 158
Devices
You can configure, mount, and maintain a wide selection of storage devices, including direct-attached
storage devices, network-attached storage devices, networked storage devices in a Fibre
Channel or iSCSI storage area network (SAN), and hardware device arrays.
Table 9-9 Device Management Tasks
Task
Description
Reference
Devices
Displays a list of all local and external
devices available on the selected
server.
“Viewing a List of Devices on a Server”
on page 109
Details
Displays information about a selected
device.
“Viewing Details for a Device” on
page 110
Initialize Disk
Initializes a selected device by erasing its partition table, effectively destroying
all of its data. If devices are present but not showing up for creating pools
and volumes, you should initialize the disk.
“Initializing a Disk” on page 112
Multipath (NetWare)
For network configurations with multiple paths between network devices and
your NetWare server, opens the Multipath page where you can set the primary
path and path failover priorities for fault tolerance of connections between host
bus adapters and storage devices. You can also bring paths up and down.
“Managing Multipath I/O to Devices (NetWare)” in the NW 6.5 SP8: NSS File
System Administration Guide
Set Default Path (NetWare)
Sets the connection to a selected device to its user-defined default primary path.
“Setting the Primary Path for a Device to Its Default Path” in the NW 6.5 SP8:
NSS File System Administration Guide
Reset Registry (NetWare)
Resets the multipath priority settings for a selected device in the server
registry to its user-defined defaults.
“Resetting the Server Registry with Default Priority Settings for a Device”
in the NW 6.5 SP8: NSS File System Administration Guide
Shareable for Clustering
Enables device sharing to support high-availability server clusters.
“Sharing Devices” on page 113
Pools
Displays a list of the pools on a device.
“Viewing Pools on a Device” on page 116
Management Tools for NSS
89
Partitions
Displays information about partitions that are configured on a device.
“Viewing Partitions on a Device” on page 115
9.1.8
Files and Folders Plug-In Quick Reference
The Files and Folders plug-in for iManager 2.7.x provides the Files and Folders role for Linux and
NetWare. It is also integrated in iManager as the View Objects option in the iManager toolbar. File
browsing in iManager is available for file systems that have a Volume object defined in eDirectory,
such as for NSS volumes on Linux and NetWare and for NCP volumes on Linux.
The Files and Folders Manager NPM file (filemanager.npm) is automatically installed in iManager.
For information about manually installing NPM files for iManager, see the iManager 2.7.x website
(https://www.netiq.com/documentation/imanager27/).
Click the Files and Folders role to select tasks first, then search for the file or folder you want to
manage. Click the View Objects icon to view the Tree, Browse, and Search view of a server’s eDirectory
objects in the left pane. In the Tree view, click a Volume object to see the hierarchical file system tree
view of the volume's folders and files. Click the plus (+) or minus (-) icon next to a directory name to
expand or collapse the view of its subdirectories. Locate the file or folder you want to manage, then
specify the action you want to perform for it.
The Files and Folders plug-in for Novell iManager 2.7.x provides the tasks described in this section.
All of the tasks and actions that are available under the Files and Folders role are also available from
the View Objects tree view.
 “Delete” on page 90
 “Deleted Files” on page 90
 “Download” on page 91
 “New Folder” on page 91
 “Properties” on page 91
 “Upload” on page 92
Delete
Deletes a file or folder on an NSS volume or an NCP volume (NCP share on a Linux POSIX file
system). For information, see Section 25.2, “Deleting a File or Folder on an NSS Volume,” on
page 318.
Deleted Files
Salvages or purges deleted files on an NSS volume. Salvage and purge of deleted files and directories
is available only for NSS volumes where the volume’s Salvage attribute is enabled. Other NSS
settings determine how long deleted files and directories are available.
For information about configuring salvage and purge behavior for NSS volumes, see Chapter 23,
“Salvaging and Purging Deleted Volumes, Directories, and Files,” on page 295.
For non-NSS volumes, the Deleted Files report is empty (no deleted files).
Download
Downloads a selected file from an NSS volume or NCP volume to a specified location on your local
drive or mapped network drive. For information, see Section 25.4, “Downloading Files from an NSS
Volume,” on page 320.
New Folder
Creates a folder on an NSS volume or NCP volume. For information, see Section 25.1, “Creating a
Folder on an NSS Volume,” on page 317.
Properties
Adds, removes, or modifies file system trustees, trustee rights, inherited rights filters, and file system
attributes for files and folders on NSS volumes and NCP volumes. See Table 9-10 for a complete list of
tasks you can perform from the Properties page.
Table 9-10 Properties Tasks
Properties Tab
Task Description
Information
Displays or modifies a directory quota for the selected folder. Directory quotas
management is available only for NSS volumes where the volume’s Directory
Quotas attribute is enabled. For information, see Section 22.3, “Managing
Directory Quotas,” on page 284.
Displays information about a selected file or folder, such as:
 Current size
 Time stamps for when the file was created, modified, accessed, and
archived
Displays or modifies the owner of a selected file or folder.
Displays or modifies the file system attributes for a file or folder. For information,
see Section 20.1.3, “Configuring File or Folder Attributes,” on page 248.
Rights
Displays, adds, or removes file system trustees for a selected file or directory.
Displays, grants, or revokes file system trustee rights for trustees of the
selected file or directory.
Displays or modifies the inherited rights filter for a selected file or directory.
For information, see Section 20.1, “Configuring File System Trustees, Trustee
Rights, Inherited Rights Filters, and Attributes,” on page 247.
Inherited Rights
Displays or modifies the inherited rights filters at every level of the path for a
selected file or directory. For information, see “Configuring the Inherited Rights
Filter for a File or Directory” on page 252.
Displays the effective rights for the selected file or directory. For information,
see Section 20.1.5, “Viewing Effective Rights for a Trustee,” on page 253.
Upload
Uploads a specified file from your local drive or a mapped network drive to a specified location on an
NSS volume or NCP volume. For information, see Section 25.3, “Uploading Files to an NSS Volume,”
on page 319.
9.2
NSS Management Utility (NSSMU) Quick Reference
The Novell Storage Services Management Utility (NSSMU) is a console-based utility for managing
NSS storage media on a server. You can use NSSMU at any time as an alternative to the browser-based
iManager Storage plug-in.
NSSMU is installed when you install NSS. The Linux install creates symlinks in the
/opt/novell/nss/sbin folder for common NSS utilities, including NSSMU. Symlinks allow the path to NSSMU to
become part of the root user’s path, which allows you to run nssmu from a terminal console as the
root user.
Table 9-11 identifies key functions available in NSSMU. This quick reference is also available in the
nssmu(8) man page. To access the man page, enter the following at a terminal console prompt:
man 8 nssmu
Table 9-11 Summary of Management Options in NSSMU
Management Options
Description
Devices
Use this option to initialize and maintain
physical storage devices and software
RAID devices available to this server.
Use the Software RAID Devices option
to create, repair, or delete RAIDs.
F3 = Initialize device (Do not initialize your system device.)
F5 = Refresh display
F6 = Share (shareable/not shareable for clustering)
Enter = Show partitions
Esc = Previous menu
Initialize the selected device by erasing
its partition table, effectively destroying
all of its data. If devices are present but
not showing up for creating pools and
volumes, you should initialize the disk.
When you initialize a device, you can
select the DOS or the GUID Partition
Table (GPT) partitioning scheme for a
given device.
The DOS partitioning scheme supports
devices up to 2 TB in size. It allows up to
four partitions on a device.
The GPT partitioning scheme supports
device sizes up to 2^64 sectors (that is,
up to 8388608 petabytes (PB) based on
the 512-byte sector size). It allows up to
128 partitions per disk. Each of its disk
partitions is a logical device that is
identified by a unique 128-bit (16-byte)
GUID.
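The GPT maximum works out as follows: 2^64 sectors of 2^9 (512) bytes each is 2^73 bytes, and dividing by 2^50 bytes per petabyte (binary units) gives 2^23 = 8388608 PB. A quick shell check of that arithmetic:

```shell
#!/bin/sh
# 2^64 sectors * 2^9 bytes/sector = 2^73 bytes.
# 2^73 bytes / 2^50 bytes/PB = 2^(73-50) = 2^23 PB.
pb=$(( 1 << 23 ))
echo "$pb"   # 8388608
```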
Partitions
Use this option to display details about
partitions. All types of partitions are
displayed, including those for Linux file
systems.
Ins = Create an NSS partition (disabled)
Del = Delete an NSS partition
F3 = Mirror partition (that contains an existing pool)
F5 = Refresh details of the partition
F6 = Label
Enter = Show volumes
Esc = Previous menu
The Create option is disabled. NSS
partitions are automatically created for
you as you define NSS pools or software
RAIDs.
You can delete a single partition at a time
when repairing a failed software RAID
partition. To delete all partitions for a
software RAID, you should delete the
RAID itself from the Software RAIDs
page; otherwise, the RAID is not cleanly
deleted.
The Mirror option lets you specify 1 to 3
partitions to mirror an existing partition
that contains an NSS pool. Effectively,
you are creating a RAID1 mirror device
for the pool. Each segment of the
defined RAID is a complete mirror of the
original pool and is the same size as the
original partition. After you mirror the
partition, manage the RAID from the
Software RAIDs page.
If you widen the NSSMU display screen,
it widens the Partitions list panel and
allows you to see the partitions' full
names.
Pools
Use this option to create, delete,
rename, and expand NSS storage pools
to efficiently use all free space in the
available devices.
Ins = Create a pool
Del = Delete a pool
F3 = Expand a pool (by adding space)
F4 = Update NDS/eDirectory
F5 = Refresh the pool list
F6 = Rename a pool
F7 = Activate/deactivate a pool
s = Display pool segments, segment size, and devices
F8 = More (list more options)
F9 = Show deleted volumes (then salvage, purge, or pause/
resume autopurging)
M = Move pool
Enter = Show volumes for a pool
Esc = Previous menu
Space = Refresh details of the selected pool
After you create a pool, you can expand
it by adding free space from the same or
different device to increase its size.
Select from the available free space to
allocate it to the pool. Each device can
contribute a different amount of space to
the pool. Devices that contribute space
must be in the same share state as the
pool, that is, Shared or Not Shared. You
can increase the size of a pool, but you
cannot reduce it.
The Move Pool option moves an NSS
pool from one location to another on the
same system. The pool remains active
during this process. All the segments in
the pool are consolidated and moved to
the specified devices. If the specified
device is larger than the original device,
the pool is automatically expanded when
the move job completes.
If a clustered pool is moved and the
cluster resource is migrated, the move
job resumes on the other node. In a
cluster setup, the exact status of the pool
move can be seen only on the node where
the move job is in progress.
For more information, see Section 15.12,
“Moving a Pool,” on page 174.
Volumes
Use this option to create, delete,
rename, activate/deactivate, and mount/
dismount NSS volumes and to set their
attributes.
Ins = Create a new volume
Del = Delete a volume
F2 = Rename mount point for the volume (new path with
volume name)
F3 = Rename volume
F4 = Update NDS/eDirectory
F5 = Refresh details of the volume
F6 = View compression statistics
F7 = Dismount/mount a volume. If it is encrypted, the volume
prompts for a password on the first mount after a system
boot or reboot.
F8 = More (list more options)
F9 = Name Space - choose Long (default), UNIX, DOS, or
Macintosh
Enter = View volume properties
Esc = Previous menu
Linux Volumes
Ins = Create
Del = Delete
F3 = Rename
F5 = Refresh details of the Linux volumes
F7 = Mount or Dismount
Esc = Previous menu
RAID Devices
Ins = Create a software RAID
Del = Delete a software RAID device
F3 = Expand a RAID device (add partitions)
F4 = Rename a RAID device
F5 = Refresh details of the software RAID device
F6 = Restripe (to restripe or resume restriping for paused
RAID 0 or RAID 5)
OR
F6 = Remirror (to remirror or resume remirroring for paused
RAID 1)
Enter = Show segments (list member partitions for selected
device)
Esc = Previous menu
F8 = More (list more options)
Space = Status refresh
To store data in encrypted format,
specify a password when you create the
volume. This enables the Encryption
attribute. The encryption setting persists
for the life of the volume. The encryption
password can be 2 to 16 standard ASCII
characters, with a suggested minimum
of 6. The password generates a 128-bit
NICI key for encryption. On system
reboot, specify the password when you
activate the volume for the first time.
After a system reboot, you can mount an
encrypted volume for the first time only
from NSSMU. Provide the password
when prompted. Until you have provided
the password for each encrypted volume,
you cannot mount multiple encrypted
volumes at a time.
Use this option to create and manage
the Linux volumes. For more
information, see “Managing Linux
Volumes with NSSMU” in the OES 11
SP2: Linux POSIX Volume
Administration Guide.
Use this option to create and manage
NSS software RAID devices. A software
RAID device emulates a hardware RAID
device. RAID devices combine
partitioned space on multiple physical
devices into a single virtual device that
you manage like any device. Each
member device contributes an equal
amount of space and only a single
partition to the RAID.
Pressing F6 either restripes or remirrors,
depending on the type of the selected
RAID device.
NOTE: If a RAID 1 device is already
synchronized, pressing F6 does not show
any remirroring progress.
Snapshot
Use this option to create, delete, mount,
and dismount pool snapshots for NSS
pools.
Ins = Create a pool snapshot
Del = Delete a pool snapshot
F5 = Refresh display
F7 = Mount or dismount the pool snapshot as an active pool.
The snapshot functions continue whether the snapshot is
mounted or dismounted.
Esc = Previous menu
Snapshots are stored on a separate
partition that you specify, not another
pool. After it is created, the partition for
the snapshot pool cannot be expanded.
The default unit for setting and displaying a volume quota or pool size is gigabytes (GB). However,
the displayed unit scales with the storage size:

Pool / Volume Quota Size: Displayed Unit
<= 1023 KB: KB
>= 1024 KB and <= 1023 MB: MB
>= 1024 MB and <= 1023 GB: GB
>= 1024 GB: TB
NOTE:
 The pool size is not displayed in KB as the minimum allowed pool size should be at least 12 MB.
 When the volume quota is entered without specifying any units, the volume quota size is
assumed to be in GB. For example, specifying a value of 1 will be considered as 1 GB, 2 as 2 GB
and 0.5 as 512 MB. Specifying the value of 'ALL' will create a pool consuming the entire disk
space or LUN size.
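As an illustrative sketch (not an NSS tool), the displayed-unit rule in the table above can be expressed as a shell function; the thresholds are taken directly from the table:

```shell
# Sketch of the displayed-unit rule from the table above.
# Input is a size in KB; output is the unit the size would be displayed in.
display_unit() {
    kb=$1
    if [ "$kb" -le 1023 ]; then
        echo KB
    elif [ "$kb" -le $((1023 * 1024)) ]; then          # up to 1023 MB
        echo MB
    elif [ "$kb" -le $((1023 * 1024 * 1024)) ]; then   # up to 1023 GB
        echo GB
    else
        echo TB
    fi
}

display_unit 512                    # a 512 KB quota is shown in KB
display_unit $((2 * 1024 * 1024))   # 2 GB expressed in KB is shown in GB
```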
9.3
NSS Commands and Utilities
Command line instructions and utilities are available to control most NSS functions.
 Section 9.3.1, “Command Consoles,” on page 96
 Section 9.3.2, “NSS Commands,” on page 97
 Section 9.3.3, “NSS Utilities,” on page 97
9.3.1
Command Consoles
NSS commands and utilities are issued from command line interfaces that are referred to as consoles
in this guide. All NSS commands and utilities are issued while logged in as the root user or a user
with equivalent privileges.
 “Linux Terminal Console” on page 97
 “NSS Console” on page 97
96
OES 11 SP2: NSS File System Administration Guide for Linux
Linux Terminal Console
NSS utilities for Linux are issued at the command prompt of a Linux terminal console.
If you are not running a graphical interface on the server, the terminal console is simply the
command line prompt displayed when you log in to the server.
If you are using a graphical interface, you can open a terminal console by using one of these methods:
 Right-click on the Linux desktop, then select Open Terminal from the menu.
 From the Linux desktop, click the Computer menu, then select Terminal (Command Line Terminal)
from the Applications menu.
NSS Console
The NSS Console (NSSCON, nsscon(8)) utility for Linux provides a command line interface in a
console environment familiar to NetWare users. Use it to issue NSS commands and to monitor NSS
activity through console messages. For more information, see Section B.4, “nsscon,” on page 413.
To start NSSCON, enter the following at a terminal console prompt:
nsscon
9.3.2
NSS Commands
To view a list of NSS commands and options, enter the following command at the NSSCON prompt:
nss /help
For information about NSS commands, see Appendix A, “NSS Commands,” on page 369.
9.3.3
NSS Utilities
The most well-known NSS utilities are the NSS Management Utility (nssmu) for managing storage
and the NSS Console (nsscon) utility for issuing NSS commands. NSS provides other utilities to
support more complex command line management tasks for NSS pools and volumes.
For information about NSS utilities, see Appendix B, “NSS Utilities,” on page 405.
9.4
Novell NetStorage
Novell NetStorage provides a Web-based interface to access directories and files on your NSS
volumes on NetWare. You can also manage file system trustees, file system trustee rights, and
directory and file attributes for the NSS file system on NetWare.
 Section 9.4.1, “Prerequisites,” on page 98
 Section 9.4.2, “Accessing NetStorage,” on page 98
 Section 9.4.3, “Configuring File System Trustees, Trustee Rights, and Attributes,” on page 98
 Section 9.4.4, “Purging and Salvaging Deleted Files,” on page 99
 Section 9.4.5, “Browsing Directories and Files,” on page 99
 Section 9.4.6, “Additional Information,” on page 99
9.4.1
Prerequisites
For NSS, users must be Linux-enabled with Linux User Management in order to use NetStorage. For
information about installing and configuring Linux User Management and enabling users and
groups for Linux, see the OES 11 SP2: Novell Linux User Management Administration Guide.
9.4.2
Accessing NetStorage
In iManager
From OES 2 onwards, NetStorage is accessible from within iManager.
1 Log in to iManager in the eDirectory tree of the servers that you want to manage.
For information, see Section 9.1.4, “Accessing Novell iManager,” on page 83.
2 In Roles and Tasks, select NetStorage.
Direct URL
To avoid conflicts, the date and time on the workstation being used to access NetStorage should be
reasonably close (within a few hours) to the date and time on the server running NetStorage.
1 Launch your Web browser and open it to the following location:
http://192.168.1.1/oneNet/NetStorage
Replace 192.168.1.1 with the actual DNS name or IP address of your NetStorage server or the IP
address for Apache-based services. If Apache-based services use a port other than 80, you must
also specify that port number with the URL.
For example, if the port number is 51080, the URL would be in the form
http://192.168.1.1:51080/oneNet/NetStorage
2 Log in with your administrator username and password to manage file system access for
directories and files on NSS volumes.
NetStorage uses NetIQ eDirectory for authentication. You can also log in as any username with rights equivalent to the administrator's. This limitation does not apply if you have created a Storage Location object using SSH (Secure Shell).
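The URL pattern from step 1 can be assembled from its parts; SERVER and PORT below are the placeholder values used in the example above, not values from your deployment:

```shell
# Assemble the NetStorage URL from the example (placeholder address and port;
# substitute your server's DNS name or IP address and the Apache port).
SERVER=192.168.1.1
PORT=51080
URL="http://$SERVER:$PORT/oneNet/NetStorage"
echo "$URL"
```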
9.4.3
Configuring File System Trustees, Trustee Rights, and Attributes
Using NetStorage, you can set file system trustees, trustee rights, and attributes for directories and
files on NSS volumes on your Linux or NetWare servers by using the NetWare Info tab and NetWare
Rights tab in the Properties dialog box.
IMPORTANT: The NetWare label refers to the NetWare Core Protocol (NCP), which is used for trustee management. Use this option for NSS volumes and NCP volumes on Linux.
For information about file system trustees, trustee rights, and attributes for directories and files on
NSS Volumes, see the OES 11 SP2: File Systems Management Guide.
Directory or File Attributes
1 In NetStorage, select the file or directory, then click the NetWare Info tab to view or modify NSS
directory or file attributes.
NSS File System Trustee Rights
1 In NetStorage, select the file or directory, then click the NetWare Rights tab to view or modify
NSS file system trustee rights.
9.4.4
Purging and Salvaging Deleted Files
Using NetStorage, you can purge and possibly undelete (salvage) NSS files that were previously
deleted. For information, see Section 23.6.1, “Using NetStorage,” on page 304.
9.4.5
Browsing Directories and Files
Administrators and users can use NetStorage to browse directories and files in an NSS volume.
9.4.6
Additional Information
For information, see:
 OES 11 SP2: File Systems Management Guide
 OES 11 SP2: NetStorage Administration Guide for Linux
9.5
Novell Remote Manager
Novell Remote Manager (NRM) is a browser-based management utility for monitoring server health,
changing the configuration of your server, or performing diagnostic and debugging tasks.
NRM provides the NCP Server management plug-in that allows you to create shadow volumes using
NSS volumes. You can also use it to manage NCP connections to the NSS volumes.
 Section 9.5.1, “Prerequisites for Using Novell Remote Manager,” on page 99
 Section 9.5.2, “Novell Remote Manager,” on page 100
 Section 9.5.3, “Accessing Novell Remote Manager,” on page 101
 Section 9.5.4, “Starting, Stopping, or Restarting Novell Remote Manager,” on page 101
9.5.1
Prerequisites for Using Novell Remote Manager
 “Prerequisites for Remote Administration” on page 99
 “Prerequisites for Administrator User Access” on page 100
Prerequisites for Remote Administration
Your configuration must satisfy the following prerequisites:
 Make sure SSL 3.0 (where available) or SSL 2.0 is enabled in your Web browser.
Novell Remote Manager requires an SSL connection between your Web browser and the target
server where it is running. You must enable SSL services for your Web browser; otherwise, the
browser displays an error when it tries to display the Novell Remote Manager Web pages.
 Ports 8008 (insecure) and 8009 (secure) are the default ports used for accessing Novell Remote
Manager. If you change the port number, make sure you specify the same value for the port
number when you log in.
Prerequisites for Administrator User Access
You can log into Novell Remote Manager as the root user or equivalent for the OES server you are
managing.
You can alternately log in to Novell Remote Manager with your eDirectory credentials if you first
enable Linux User Management (LUM) in your eDirectory tree and install and configure LUM on the
target server. The Administrator user or equivalent must be Linux-enabled and at least one of the
following conditions must be met:
 The Administrator user (or equivalent user) must be associated to the eDirectory group that has
the Supervisor right for the Entry Rights property for the UNIX Workstation object in eDirectory.
 The Administrator user (or equivalent user) must have the Supervisor right for the Entry Rights
property to the NCP object that represents the Linux server in the eDirectory tree.
To see if a user is Linux-enabled, go to iManager, select the User role, then select the user to see if the
following is true:
 The user has a Linux Profile tab on the Modify User page in iManager.
 The user’s eDirectory object is associated with the UNIX Workstation object that represents the
Linux server.
For information about configuring Linux User Management and enabling users for Linux, see the
OES 11 SP2: Novell Linux User Management Administration Guide.
9.5.2
Novell Remote Manager
Novell Remote Manager allows you to browse NSS volumes. It requires that the NCP Server and
NCP Server plug-in for Novell Remote Manager be installed and running.
Tasks
The NCP Server plug-in supports the following tasks:
 Managing connections to NSS volumes and viewing open files for a connection.
For information, see “Managing Connections for NCP Volumes and NSS Volumes” in the OES
11 SP2: NCP Server for Linux Administration Guide.
 Creating or managing shadow volumes with NSS volumes as the primary and secondary
storage areas.
For information, see the OES 11 SP2: Dynamic Storage Technology Administration Guide.
Novell Remote Manager does not support the following tasks for NSS:
 Configuring directory quotas
 Salvaging and purging deleted files and directories
 Configuring file system trustees and attributes for directories and files
 Creating and managing partitions, pools, and volumes
Additional Information
For detailed information about Novell Remote Manager, see the OES 11 SP2: Novell Remote Manager
Administration Guide.
9.5.3
Accessing Novell Remote Manager
1 From your Web browser, enter one of the following:
http://server-ip-address:8008
https://server-ip-address:8009
Replace server-ip-address with the IP address of the server you want to manage. If you have
Domain Name Services (DNS) installed on your network for server name-to-IP address
resolution, you can optionally use the server’s DNS name instead of the IP address.
2 Determine the authenticity of the SSL certificate, then accept it if the certificate is valid.
3 When the Login page appears, type the username and password of the root user for that server,
or type the username and password of the Administrator user (or equivalent user) who is an
eDirectory user and who has been Linux-enabled.
4 Click OK to log in to the target server and initiate your SSL session.
The management interface opens in your Web browser. After logging in, your SSL session for
Novell Remote Manager remains open until you close all your browser windows at that
workstation.
9.5.4
Starting, Stopping, or Restarting Novell Remote Manager
Novell Remote Manager is installed and runs by default. If it hangs, you can use the /etc/init.d/
novell-httpstkd script to get status or to stop, start, or restart httpstkd. For the latest information
about httpstkd, see “Starting or Stopping HTTPSTKD” in the OES 11 SP2: Novell Remote Manager
Administration Guide.
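As a minimal sketch, the check-then-restart cycle looks like this; the rcnovell-httpstkd commands are the ones listed in this section, and the existence guard keeps the sketch harmless on systems without the wrapper:

```shell
# Sketch: restart httpstkd only when its status check fails.
# Run as root on the OES server; skipped entirely where the wrapper is absent.
svc=rcnovell-httpstkd
if command -v "$svc" >/dev/null 2>&1; then
    "$svc" status >/dev/null 2>&1 || "$svc" restart
fi
```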
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, enter the command for the task you need to perform:
Task       Command
Status     rcnovell-httpstkd status
Start      rcnovell-httpstkd start
Stop       rcnovell-httpstkd stop
Restart    rcnovell-httpstkd restart
9.6
Novell Client
 Section 9.6.1, “Novell Client,” on page 102
 Section 9.6.2, “Novell Client for Windows XP/2003 and Vista,” on page 102
9.6.1
Novell Client
The Novell Client software allows users of Linux workstations to access and use all of the services
available on servers running NetIQ eDirectory. The Novell Client brings the full power, ease of use,
manageability, and security of eDirectory to Linux workstations. The Novell Client for Linux fully
supports NetWare, OES, and eDirectory services and utilities on a Linux workstation, including
security, file, and print services through Novell iPrint.
For more information, see the Novell Client Linux Administration Guide (http://www.novell.com/
documentation/linux_client/).
9.6.2
Novell Client for Windows XP/2003 and Vista
In combination with NCP Server, the Novell Client for Windows XP/2003 and the Novell Client for
Vista support the following:
 Management of file system trustees, trustee rights, and inherited rights filters for directories and
files on NSS volumes
 Management of attributes for directories and files on NSS volumes
 Purge and salvage of deleted files on NSS volumes, if the volume is configured to support it by
enabling the Salvage Files attribute for a volume
 Drive mapping for NSS volumes
 Login scripts for automatic drive mapping on login
For more information, see the Novell Client 4.91 SP5 for Windows XP/2003 Installation and
Administration Guide and the Novell Client 2 SP1 for Windows Administration Guide.
9.7
ConsoleOne
ConsoleOne is a Java application that runs on the server or a workstation. You can use ConsoleOne to
manage trustees and attributes for directories and files in your NSS file system. For information
about trustees and file and directory attributes for NSS, see the OES 11 SP2: File Systems Management
Guide.
Loading ConsoleOne
To use ConsoleOne on a client workstation, you need to load ConsoleOne 1.3 or later, then copy the
NSS .jar files from your server to your client workstation.
1 Copy these three NSS .jar files to your local lib directory:
public\mgmt\consoleone\1.3\lib\nssadmin.jar
public\mgmt\consoleone\1.3\lib\nssc1lib.jar
public\mgmt\consoleone\1.3\lib\nssjavalib.jar
2 Copy these two .jar files to your local resources directory:
public\mgmt\consoleone\1.3\resources\nssadminres.jar
public\mgmt\consoleone\1.3\resources\nssc1libres.jar
3 Copy the following .jar file to your local snap-ins directory:
public\mgmt\consoleone\1.3\snapins\nssadminreg.jar
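The three copy steps above can be scripted as a sketch. SRC and DEST below are assumptions (the server's public directory mounted locally, and the local ConsoleOne root), not paths given by the guide:

```shell
# Sketch of steps 1-3: copy the NSS .jar files from the server's public share
# (SRC) into the local ConsoleOne directories (DEST). Both paths are
# placeholders; adjust them to your environment.
SRC=${SRC:-/mnt/server/public/mgmt/consoleone/1.3}
DEST=${DEST:-$HOME/consoleone}

for sub in lib resources snapins; do
    mkdir -p "$DEST/$sub"
done

for f in lib/nssadmin.jar lib/nssc1lib.jar lib/nssjavalib.jar \
         resources/nssadminres.jar resources/nssc1libres.jar \
         snapins/nssadminreg.jar; do
    if [ -f "$SRC/$f" ]; then
        cp "$SRC/$f" "$DEST/${f%/*}/"   # keep the lib/resources/snapins layout
    fi
done
```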
Enabling the ConsoleOne Media Snap-In for Managing DFS
1 Open ConsoleOne.
2 Open the tree you want to work with.
3 Right-click the server object you want to manage.
4 Click Properties.
This opens the ConsoleOne interface, where you can access the Media snap-in for configuring
DFS.
5 Select the Media snap-in.
Additional Information
For more information, see the ConsoleOne 1.3.x User Guide.
9.8
Virtual File Services, APIs, and Scripts
Virtual File Services (VFS) provides methods that allow you to manage services such as NSS by using
standard file system functions. Using VFS and a scripting or GUI-based interface, you can view the
status and statistics for your system and change the system parameters.
NSS provides a special administration volume, known as the _admin volume, that exists on each
server. This volume uses no disk space and is created at startup time. Using VFS and the services
provided by files that are created on the _admin volume, you can potentially control all server
management functions.
For more information and instructions, see NDK: Virtual File Services (http://developer.novell.com/
wiki/index.php/Virtual_File_Services_for_NetWare) in the Novell Developers Kit (NDK)
documentation.
9.9
Novell Linux Volume Manager (NLVM)
The Novell Linux Volume Manager (NLVM) provides management of Novell Storage Services (NSS)
storage objects in Novell Open Enterprise Server (OES) 11 Support Pack 2 (SP2). The command line
interface (CLI) commands can be used in a Linux console or in a script. The NSS management tools
use the NLVM library of APIs to create and manage NSS storage objects. NLVM also provides
options to create Linux POSIX file systems, such as BtrFS, Ext2, Ext3, ReiserFS, and XFS.
For more information, see the OES 11 SP2: NLVM Reference.
10
Managing Devices
This section describes how to manage devices where you want to create or manage NSS storage
objects.
 Section 10.1, “Understanding Devices,” on page 105
 Section 10.2, “Viewing a List of Devices on a Server,” on page 109
 Section 10.3, “Viewing Details for a Device,” on page 110
 Section 10.4, “Scanning for Devices,” on page 111
 Section 10.5, “Initializing a Disk,” on page 112
 Section 10.6, “Sharing Devices,” on page 113
 Section 10.7, “Viewing Partitions on a Device,” on page 115
 Section 10.8, “Viewing Pools on a Device,” on page 116
 Section 10.9, “Enabling Write-Through Cache Management on SCSI Devices and RAID
Controllers,” on page 117
 Section 10.10, “What’s Next,” on page 118
10.1
Understanding Devices
A block storage device is the physical, logical, or virtual storage media available to a server. A device
can be directly attached to the server or connected via storage networking protocols such as Fibre
Channel and iSCSI.
 Section 10.1.1, “Device Size,” on page 105
 Section 10.1.2, “Device Types,” on page 106
 Section 10.1.3, “Device Details,” on page 108
10.1.1
Device Size
For OES 11 SP2, NSS recognizes device sizes up to 2^64 sectors (that is, up to 8388608 petabytes (PB) based on a 512-byte sector size).
NSS allows you to initialize devices to use either the DOS partition table format or the GPT (GUID
Partition Table) format for a given device. When you create pools, you can combine space from
devices that are formatted using any method.
The DOS partition table scheme supports devices up to 2 TB in size. It allows up to four partitions on a device.
The GPT partition table scheme supports device sizes up to 2^64 sectors (that is, up to 8388608 petabytes (PB) based on the 512-byte sector size). It allows up to 128 partitions per disk. Each disk partition is a logical device that is identified by a unique 128-bit (16-byte) GUID.
Managing Devices
105
IMPORTANT: Pools created on GPT-initialized devices are not backward compatible with versions of OES earlier than OES 11.
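The 8388608 PB figure follows directly from shell arithmetic (a worked check, not an NSS command; 1 PB = 2^50 bytes):

```shell
# 2^64 sectors x 512 bytes/sector = 2^(64+9) = 2^73 bytes.
# Dividing by 1 PB (2^50 bytes) gives 2^(73-50) = 2^23 PB:
echo $(( 1 << (64 + 9 - 50) ))   # prints 8388608
```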
Different manufacturers report device sizes differently. The actual device size varies with the
hardware design and the applications and software drivers that manage the device. Many vendors
report sizes using a definition where 1 TB = 10^12 bytes = 1,000,000,000,000 bytes. Space can also be
consumed by metadata that is added to manage the device. The location on the device where the
metadata is stored can also vary by hardware manufacturer and software vendor. After you format
the drive, yet another size might be reported. Third-party product documentation might state the
maximum size limits of devices it supports before or after making accommodations for any
management data or space lost to formatting. The size of devices you ultimately carve out for use
with NSS depends on all these factors.
IMPORTANT: Make sure to refer to the documentation of the device manufacturer, application
vendor, and software driver vendor for other limitations on the device size.
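The gap between the two terabyte definitions is easy to quantify (a worked check; the decimal definition is the vendor one quoted above, the binary one is 2^40 bytes):

```shell
# Decimal (vendor) terabyte vs. binary terabyte, in bytes:
decimal_tb=1000000000000     # 10^12, as many vendors define 1 TB
binary_tb=$(( 1 << 40 ))     # 2^40 = 1,099,511,627,776
echo $(( binary_tb - decimal_tb ))   # prints 99511627776 (roughly 9% of 2^40)
```

This difference is one reason the same drive is reported with different sizes by different tools.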
10.1.2
Device Types
The following are examples of common types of devices:
 “Server Disks” on page 106
 “Direct-Attached Storage Devices” on page 106
 “LUN Devices” on page 106
 “iSCSI Devices” on page 107
 “RAID Devices” on page 107
 “Multipath Devices” on page 108
 “Removable Media” on page 108
 “Virtual Disks” on page 108
Server Disks
Server disks include physical disks on the server or logical disks carved from the server disk.
Direct-Attached Storage Devices
Physical or logical disks can be directly attached to the server as individual devices or in a storage
array.
LUN Devices
A LUN (logical unit number) can be either a physical or a logical disk drive. Refer to the iSCSI SAN or
Fibre Channel SAN documentation for information about creating and managing LUNs for your
SAN implementation.
A metaLUN is a controller-managed group of multiple LUNs or of multiple hardware RAIDs that are
striped or concatenated together to be presented as a single LUN device to the server. Refer to the
hardware manufacturer’s documentation for information about creating metaLUNs.
iSCSI Devices
An iSCSI device is a remote target disk or tape drive on an iSCSI disk server that is made available
across an IP network by iSCSI initiator software running on the server. After connecting to the disk
server, you can view the devices in the Devices list and add NSS pools and volumes as you would
with any device.
For information about managing and using iSCSI devices on OES 11 SP2, see “Mass Storage over IP
Networks—iSCSI” in the SLES 11 SP3 Storage Administration Guide. See also the Linux iSCSI Project.
RAID Devices
A RAID (redundant array of independent disks) is a logical device that combines space from
multiple devices by using special hardware, software, or both. Data is striped or replicated across all
member devices to improve data reliability, increase I/O performance, or provide device fault
tolerance. All RAID types require configuration using a RAID management tool made for the specific
hardware or software used in the RAID.
 “Hardware RAID Devices” on page 107
 “Controller RAID Devices” on page 107
 “Software RAID Devices” on page 107
Hardware RAID Devices
In a hardware RAID, the RAID functionality and management are in firmware within the storage
cabinet. Refer to the hardware manufacturer’s documentation for information about creating
hardware RAIDs.
Controller RAID Devices
Controller RAID devices are also known as BIOS RAIDs, fakeRAIDs, hostRAID, and quasi-hardware
RAIDs.
In controller RAIDs, the functionality and management are in the HBA or controller BIOS/firmware.
If the controller does not contain an on-board CPU resource to use for RAID management, the
controller RAID consumes server CPU resources to manage the RAID.
Refer to the hardware manufacturer’s documentation for information about configuring Controller
RAID devices. For information about using Controller RAIDs with OES 11 SP2, see TID 3626577:
BIOS RAID Support in the Novell Support Knowledgebase.
Software RAID Devices
Software RAIDs are controlled by special software in the server's OS, such as the HBA driver, or by an upper-level module such as NSS. Software RAIDs consume server CPU resources to manage the RAID.
For information about creating and managing NSS software RAIDs, see Chapter 13, “Managing NSS
Software RAID Devices,” on page 137.
You can optionally use Linux tools to create and manage Linux software RAIDs. Linux software
RAIDs must be initialized with either a DOS or GPT partition format in order to be used by NSS.
Multipath Devices
If there are multiple connection paths between a device’s hardware controller and the server, each
path presents a given device to the server as a separate device. You must use a multipath
management tool to resolve the multiple apparent devices to a single multipath device. Use the
multipath device UUID or alias when you are creating NSS pools and volumes. Multipath tools also
provide automatic path management for path failover, failback, and reconfiguration.
Use the Linux multipath I/O tools to create the multipath device.
Removable Media
Removable media devices include CDs, DVDs, or CD/DVD image files. Removable media are
mounted as Linux POSIX file systems. Use Linux native tools to manage removable media.
Virtual Disks
In a Xen virtual environment, you use the Virtual Machine Manager in YaST to allocate storage
devices from the host to the virtual machine. The devices that you want to use for the NSS file system on the guest machine must be smaller than 2 TB (where 1 TB = 2^40 bytes) if the guest operating system is OES 2 SP3 or earlier. Devices of 2 TB or larger are supported from OES 11 onwards. For information about storage considerations in a virtualization environment, see Chapter 6, “Using NSS in a
Virtualization Environment,” on page 61.
10.1.3
Device Details
The following table describes the type of information available for each device by viewing the Device
Details page.
Table 10-1 Explanation of Device Details

Name
    The device name assigned by the device manager.
Major Number / Minor Number
    The device identity in major:minor format. Major and minor numbers are associated with the device special files in the /dev directory and are used by the operating system to determine the actual driver and device to be accessed by the user-level request for the special device file.
Partitioning Type
    DOS - The device has the DOS partitioning scheme.
    GPT - The device has the GPT partitioning scheme.
    LVM2 volume - The device has no partition table because there is an LVM volume at the beginning of the disk. This is the case when a clustered Linux volume is created.
    CSM Unknown - The device is not recognized by NSS.
Shareable for Clustering
    The attribute of a device that indicates whether the selected device can be shared by multiple computers in a cluster solution.
Capacity
    The total available storage space of the selected device.
Used Space
    The amount of space on the device that is currently in use by partitions, including Novell partitions for NSS as well as native Linux partitions.
Free Space
    The total amount of space on the device that is currently not in use.
Pools
    The drop-down list shows all pools that exist on this device. To view a pool’s details or to manage a pool, select the pool from the list, then click the View Details icon to go to the Pools page for that pool.
Number of Pools
    The total number of pools that use this device.
Partitions
    The drop-down list shows all partitions that exist on this device. To view a partition’s details, select the partition from the list, then click the View Details icon to go to the Partition Information page for that partition.
Mirror Status
    For a RAID 1 device, this field shows its status:
     In Sync: The mirror group is fully synchronized.
     Partial Sync: The mirror group is only partially synchronized.
     Not Mirrored: The device is not mirrored (only one partition).
10.2
Viewing a List of Devices on a Server
1 In iManager, click Storage > Devices.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
Depending on the number of devices, it can take a few seconds to display the list of devices.
Avoid clicking again in the page until it refreshes and displays the Devices list.
3 Select a device to view its details.
For an overview of the subtasks available from this page, see “Devices” on page 89.
10.3
Viewing Details for a Device
The Details field in the Devices page displays information about each device in the Devices list.
To view a device’s details:
1 In iManager, click Storage > Devices.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
When the page refreshes the devices are listed in the Devices list. Depending on the number of
devices, this can take several seconds. Wait for the page to load before moving to the next step.
3 Select a device in the Devices list to view its details.
The page must refresh to display the details, which might take several seconds.
10.4
Scanning for Devices
If you add more disks to the SAN, you can use the rescan-scsi-bus.sh script to scan for the new
devices without rebooting. For information, see “Scanning for New Devices without Rebooting”
(http://www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/
data/scandev.html) in the SLES 11 SP3: Storage Administration Guide (http://www.suse.com/
documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/data/bookinfo.html).
You can also use the following procedure to scan the devices and make them available without
rebooting the system:
1 On the storage subsystem, use the vendor’s tools to allocate the devices and update its access
control settings to allow the Linux system access to the new storage. Refer to the vendor's
documentation for details.
2 On the Linux system, use the HBA driver commands to scan the SAN to discover the new
devices. The exact commands depend on the driver.
For example, for a QLogic 2300 HBA, the command is
echo scsi-qlascan >/proc/scsi/qla2xxx/<host number>
At this point, the newly added device is not known to the higher layers of the Linux kernel's
SCSI subsystem and is not yet usable.
3 Scan all targets for a host to make its new device known to the middle layer of the Linux kernel's
SCSI subsystem. At a terminal console prompt, enter
echo "- - -" >/sys/class/scsi_host/host<hostnumber>/scan
4 If the devices have multiple paths, run the Multipath tool to recognize the devices for Device
Mapper Multipath I/O (DM-MPIO) configuration. At a terminal console prompt, enter
multipath
For information about configuring multipathing for devices on your Linux server, see Managing
Multipath I/O for Devices (http://www.suse.com/documentation/sles11/stor_admin/?page=/
documentation/sles11/stor_admin/data/multipathing.html) in the SLES 11: Storage
Administration Guide (http://www.suse.com/documentation/sles11/stor_admin/?page=/
documentation/sles11/stor_admin/data/bookinfo.html).
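Steps 2 through 4 can be collected into one guarded sketch. The host number and the qla2xxx path are assumptions carried over from the QLogic example; the guards keep the sketch inert on systems without that hardware:

```shell
# Consolidated sketch of steps 2-4 (run as root). HOST is a placeholder for
# your HBA's host number; the qla2xxx path applies only to QLogic drivers.
HOST=${HOST:-0}

# Step 2: driver-level SAN scan (QLogic example from the text).
if [ -w "/proc/scsi/qla2xxx/$HOST" ]; then
    echo scsi-qlascan > "/proc/scsi/qla2xxx/$HOST"
fi

# Step 3: make the new devices known to the kernel's SCSI mid-layer.
if [ -w "/sys/class/scsi_host/host$HOST/scan" ]; then
    echo "- - -" > "/sys/class/scsi_host/host$HOST/scan"
fi

# Step 4: rebuild DM-MPIO maps if the multipath tools are installed
# (errors are ignored in this sketch).
if command -v multipath >/dev/null 2>&1; then
    multipath || true
fi
```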
10.4.1
Scanning for Devices using iManager
The system typically recognizes all devices on reboot or after you create them. If you add devices to a
server and the system does not automatically detect them, you might need to scan for devices.
IMPORTANT: This feature is available only for servers running OES 11 or later.
1 In iManager, click Storage > Scan for Devices.
2 Select a server to manage.
The browser displays the Scan Devices page.
IMPORTANT: After iManager connects to the server, it scans for devices and displays them in
the Devices list. The scan can take several seconds, depending on the number of adapters and
disks on your systems. Click Cancel at any time to back out of the process.
3 View the list of devices.
NOTE: You can only view the list. There are no actions to make from this page.
10.5
Initializing a Disk
If you can see a device listed in the Devices list, but the device is not available for creating pools and
volumes, you probably need to initialize the disk.
On the Devices page, the Initialize Disk option initializes the selected device and completely removes
all the partitions it contains. All the data stored on the device is lost. If the device contains a partition
of an NSS pool, a Traditional volume, or a software RAID device, the Initialize process also deletes
data on all of the partitions of the entire pool, volume, or device, even if they reside on separate
devices.
WARNING: Do not initialize the device that contains a system volume (such as /boot, swap, and /
(root)). Initializing the system volume destroys the operating system and all the data in it.
This option is disabled (dimmed) if the selected device contains a software RAID 1 (mirrored) device.
It can also be disabled if there is no space available based on each partition’s size, or if you already
have the maximum number of partitions allocated in a software RAID device.
NOTE: You can also use the ncsinit utility to initialize a device and set it to a shared state. For more information, refer to Section B.14, “ncsinit,” on page 430.
To initialize a disk from iManager:
1 In iManager, click Storage > Devices.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage to view the Devices list.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of devices appears in the Devices list.
3 In the Devices list, select the device that you want to initialize (such as sdd).
NOTE: You can select multiple devices for initialization.
4 Select MBR Type (Master Boot Record) from the following options:
 DOS: The DOS partition table scheme supports devices up to 2 TB in size.
 GPT: The GPT partition table scheme supports device sizes up to 2^64 sectors (that is, up to
8388608 petabytes (PB) based on the 512-byte sector size). It allows up to 128 partitions per
disk. Each of its disk partitions is a logical device that is identified by a unique 128-bit (16-byte) GUID. Select the GPT option to create devices greater than 2 TB in size.
IMPORTANT: Pools created on GPT-initialized devices are not backward compatible with
versions earlier than OES 11.
5 Click Initialize Disk.
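The size limits quoted for the two partitioning schemes follow directly from their sector-address widths (32-bit for DOS/MBR, 64-bit for GPT) and the 512-byte sector size. A quick arithmetic check:

```shell
# MBR: 2^32 sectors * 512 bytes, expressed in TB (2^40 bytes)
mbr_tb=$(awk 'BEGIN{printf "%d", 2^32 * 512 / 2^40}')
# GPT: 2^64 sectors * 512 bytes, expressed in PB (2^50 bytes)
gpt_pb=$(awk 'BEGIN{printf "%d", 2^64 * 512 / 2^50}')
echo "MBR limit: ${mbr_tb} TB, GPT limit: ${gpt_pb} PB"
```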
To initialize a disk from NSSMU:
1 In NSSMU, click Devices.
A list of devices appears in the Devices list.
2 In the Devices list, select a device.
NOTE: You can select multiple devices for initialization.
3 Press F3 to Initialize Disk.
4 Select Partitioning Scheme from the following options:
 DOS: The DOS partition table scheme supports devices up to 2 TB in size.
 GPT: The GPT partition table scheme supports device sizes up to 2^64 sectors (that is, up to
8388608 petabytes (PB) based on the 512-byte sector size). It allows up to 128 partitions per
disk. Each of its disk partitions is a logical device that is identified by a unique 128-bit (16-byte) GUID. Select the GPT option to create devices of any size.
If you encounter any error, refer to the log messages at /var/log/messages. The messages are
prefixed with NSSMU.
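On OES 11, initialization can also be scripted from the console with the nlvm utility. The following is a hedged sketch: the nlvm init syntax and its format= option are assumed here and should be verified with nlvm help on your server. The command is only printed, because running it destroys all data on the device.

```shell
DEVICE=sdd        # illustrative device name
SCHEME=gpt        # or msdos for the DOS scheme
# Build (but do not run) the assumed nlvm initialization command.
cmd="nlvm init $DEVICE format=$SCHEME"
echo "$cmd"       # review, then run as root to wipe and initialize the device
```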
10.6
Sharing Devices
Devices that are shared in a Novell Cluster Services Cluster must be marked as shareable for
clustering. This includes devices that you plan to use for shared software RAIDs, shared pools, and
the cluster SBD (split-brain detector) partition. Unsharing a device fails if the device contains a pool
(or any segment of a pool) that is cluster-enabled with Novell Cluster Services.
 Section 10.6.1, “Understanding Sharing,” on page 114
 Section 10.6.2, “Planning for Device Sharing,” on page 114
 Section 10.6.3, “Configuring the Device’s Share State,” on page 114
Managing Devices
113
10.6.1
Understanding Sharing
Storage devices that exist in a storage area network (SAN) can be shared by multiple servers in a
cluster using Novell Cluster Services. For information about clustering, see the OES 11 SP2: Novell
Cluster Services for Linux Administration Guide.
IMPORTANT: The system hardware does not automatically determine whether disk drives come up
as Shareable for Clustering or Not Shareable for Clustering. You must manually set this value for each
device, according to the configuration of your storage system.
Making a device shareable enables device sharing for those devices in high-availability clusters that
you want to be part of a shared-disk storage solution. If the Shareable for Clustering option is enabled
(selected), the selected storage device can be shared by multiple computers in a cluster.
WARNING: Marking a device as shareable for clustering sets all of the pools on this device to
shareable. If any of these pools span multiple devices, you must make sure that each device is set to
the same share state as this one, or the device can become unusable.
If a device is a member of a software RAID device, marking the device as shareable for clustering
automatically sets all the other member devices of the RAID as shareable for clustering.
10.6.2
Planning for Device Sharing
By default, devices are not shared. Use the following guidelines when planning whether to share
devices:
 The device that contains the operating system cannot be marked as Shareable for Clustering.
 You cannot mix space from shared and unshared devices to create a pool. If a pool spans
multiple storage devices, all of the member devices in that pool must be marked as Shareable for
Clustering, or all marked as Not Shareable for Clustering.
 You cannot mix space from shared and unshared devices to create a software RAID. All devices
that contribute space to the RAID must be marked as Shareable for Clustering, or all marked as
Not Shareable for Clustering.
 Do not mark a device as Shareable for Clustering if it is not capable of being shared, such as when
the device contributes space to the system pool (sys), to an unshared software RAID, or to an
unshared pool.
10.6.3
Configuring the Device’s Share State
To configure the device’s share state from iManager:
1 In iManager, click Storage > Devices.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select the server that you want to manage to view a list of its devices.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Devices list, select a device to view information about it.
4 Depending on the current state of the device, do one of the following:
 To set a device’s share state to On, select the Shareable for Clustering check box, then click
Apply or click OK.
 To set a device’s share state to Off, deselect the Shareable for Clustering check box, then click
Apply or click OK.
If you click Apply, iManager saves the change and remains on the device page. If you click OK,
iManager saves the change and takes you to the main Storage page. If you do not click Apply or
OK, the change is not implemented.
To configure the device’s share state from NSSMU:
1 In NSSMU, click Devices.
A list of devices appears in the Devices list.
2 In the Devices list, highlight a device, then press F5 to select it.
3 Press F6 to share the device.
NOTE: If the selected devices are already shared, you are prompted to unshare them.
If you encounter any error, refer to the log messages at /var/log/messages. The messages are
prefixed with NSSMU.
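A command-line equivalent is sketched below. The nlvm share and nlvm unshare command names are assumed from the OES console utilities and should be verified with nlvm help; the commands are only printed here because changing the share state affects every pool on the device.

```shell
DEVICE=sdd                          # illustrative device name
# Build (but do not run) the assumed share-state commands.
share_cmd="nlvm share $DEVICE"      # mark Shareable for Clustering
unshare_cmd="nlvm unshare $DEVICE"  # mark Not Shareable for Clustering
echo "$share_cmd"
echo "$unshare_cmd"
```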
10.7
Viewing Partitions on a Device
NSS abstracts all partition creation and deletion in iManager; there are no actions to perform on
partitions. For information about partitions, see “Managing Partitions” on page 129.
1 In iManager, click Storage > Devices.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of devices appears in the Devices list.
3 In the Devices list, select a device.
4 In the Details area, click the arrow on the Partitions drop-down list to expand it.
5 Select a partition, then click View Details.
This opens the Partitions page. It displays a list of all the partitions that currently exist on the
selected device.
6 To view details about a partition, select the check box next to it in the Partitions list, then click
Details to view its details.
10.8
Viewing Pools on a Device
1 In iManager, click Storage > Devices.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select the server that you want to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Devices list, select a device.
4 In the Details area, click the arrow on the Pools drop-down list to expand it.
5 Select a pool, then click View Details.
This opens the Pools page where you can view the details of the pool and manage it.
For information about pool management, see “Managing NSS Pools” on page 163.
10.9
Enabling Write-Through Cache Management on SCSI
Devices and RAID Controllers
Any journal-based file system (including NSS) requires that writes be committed to disk in order to
prevent corruption in the event of a power failure.
Using write-back cache management can improve performance by allowing data to be held in cache
rather than being written to disk. However, write-back cache management introduces the risk of
losing data if the power fails. Many array controllers have an on-board battery backup, which can
reduce the risk of data loss when using write-back, but it might not eliminate the risk. It is up to you
to determine if the power backup is sufficient for power loss scenarios in your production
environment.
If your system does not have sufficient power loss protection, we require using Write-Through cache
management for SCSI devices to minimize the risk of losing data in the event of a power failure.
Write-Through cache management assures the file system that writes are being committed to disk as
required.
If the server uses a RAID controller, enable Write-Through (disable Write-Back) cache management
when configuring the RAID device by using the controller’s BIOS setup routine or configuration
utility.
To enable Write-Through cache management for local devices:
1 Log in to the server as root.
2 If the scsi-config utility is not already installed, install it using the xscsi RPM.
The xscsi RPM, which contains the scsi-config command, is not installed by default.
2a In YaST, open the Various Linux Tools section.
2b Install the xscsi RPM package, then close YaST.
The xscsi package installs the scsi-config utility at /usr/bin/scsi-config.
3 Enable Write-Through (disable Write-Back) cache management for each SCSI device by
performing the following for each device where you plan to use NSS volumes:
3a At a terminal console prompt, enter
scsi-config
3b In the window that opens, select the drive that you want to manage, then click Continue.
3c Click Cache Control Page.
3d Enable Write-Through cache management mode by deselecting the Write cache enabled check
box.
Write-Through cache management is the default, so the Write cache enabled check box should
already be deselected. If the check box is selected, Write-Back cache management is
enabled; deselect the box to disable it.
3e Click Quit > Save Changes.
3f When prompted to confirm the change, click Go Ahead and Save > Quit.
3g To verify the setting, at a terminal console prompt, enter
scsiinfo -c /dev/sdx
Replace /dev/sdx with the device you are checking.
A value of 0 for Write Cache means that the drive is in Write-Through cache management
mode.
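The verification step can be scripted. The sample text below is an assumed, representative fragment of scsiinfo -c output (not captured from a real device); in practice you would pipe the real command's output into the same awk filter.

```shell
# Representative (assumed) fragment of `scsiinfo -c /dev/sdX` output:
sample='Data from Caching Page
Write Cache    0
Read Cache     1'
# Extract the Write Cache flag; 0 means Write-Through mode.
wc_flag=$(printf '%s\n' "$sample" | awk '/^Write Cache/ {print $3}')
echo "Write Cache = $wc_flag"
```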
10.10
What’s Next
If your server provides multiple I/O paths between the server and its storage devices, configure the
primary path and path priorities for I/O failover for each device. For information, see Chapter 14,
“Managing Multipath I/O to Devices,” on page 161. When you are done, continue with creating
software RAIDs and pools.
To configure software RAID devices, see Chapter 13, “Managing NSS Software RAID Devices,” on
page 137.
To create pools of storage on the devices, see Chapter 15, “Managing NSS Pools,” on page 163.
11
Migrating NSS Devices to OES 11 SP2
This section describes the steps involved in migrating Novell Storage Services devices to Novell
Open Enterprise Server 11 SP2 servers.
IMPORTANT: For a general discussion of migration issues in OES 11 SP2, see “Migrating Existing
Servers and Data” in the OES 11 SP2: Planning and Implementation Guide.
 Section 11.1, “Guidelines for Moving Devices from NetWare 6.5 SP8 to OES 11 SP2,” on page 119
 Section 11.2, “Moving Non-Clustered Devices From NetWare 6.5 SP8 Servers to OES 11 SP2,” on
page 120
 Section 11.3, “Moving Non-Clustered Devices From NetWare 6.5 SP8 to OES 11 SP2,” on
page 125
 Section 11.4, “Moving Clustered Devices with NSS Volumes to OES 11 SP2,” on page 128
11.1
Guidelines for Moving Devices from NetWare 6.5 SP8 to
OES 11 SP2
You can move devices containing NSS volumes between NetWare 6.5 SP8 servers and OES 11 SP2
servers. When you move an unshared device to a different server, you must decommission its
volumes in eDirectory for the current server, then recommission them for the new server.
For shared NSS pools and volumes, Novell Cluster Services provides this service automatically
during a rolling cluster conversion from NetWare 6.5 SP8 to OES 11 SP2. For information about
converting shared pool cluster resources and service resources, see the OES 11 SP2: Novell Cluster
Services NetWare to Linux Conversion Guide.
 Section 11.1.1, “Media Format,” on page 119
 Section 11.1.2, “Pool Snapshots,” on page 120
 Section 11.1.3, “Cross-Platform Issues,” on page 120
11.1.1
Media Format
The NSS media upgrade for enhanced hard links support is available for the following operating
platforms (and later versions):
 Novell Open Enterprise Server 11
 Novell Open Enterprise Server 2 Linux and NetWare
 Novell Open Enterprise Server 1 SP1 NetWare
 NetWare 6.5 SP4
The media upgrade is not available on OES 1 Linux, NetWare 6.5 SP3, and earlier versions. You
cannot move a device that has been upgraded to the new media format to a platform that does not
support it. For information about the media format upgrade, see Chapter 4, “Upgrading the NSS
Media Format,” on page 41.
11.1.2
Pool Snapshots
Different pool snapshot technologies are used for NSS pools on NetWare and NSS pools on Linux.
You can create pool snapshots on either platform, but the snapshots are unusable if you move the
devices cross-platform and are invalid if you move the volume back.
WARNING: Before moving a device cross-platform, make sure to delete any existing pool snapshots
for all pools on it. If you do not delete the snapshots before moving the device, you might not be able
to see the original pools on the Linux platform.
11.1.3
Cross-Platform Issues
For information about differences to expect when using NSS, see Chapter 7, “Cross-Platform Issues
for NSS,” on page 69.
11.2
Moving Non-Clustered Devices From NetWare 6.5 SP8
Servers to OES 11 SP2
This section describes how to move devices cross-platform from a NetWare 6.5 SP8 server to an OES
11 SP2 server. NSS supports moves of devices containing NSS volumes between any servers that
support a compatible media format. For information, see Section 11.1, “Guidelines for Moving
Devices from NetWare 6.5 SP8 to OES 11 SP2,” on page 119.
IMPORTANT: Similar handling is necessary when moving devices with NSS pools between any two
OES servers (NetWare to Linux, NetWare to NetWare, or Linux to Linux).
To preserve the NSS pool and volumes on the device when you move it, you must modify the
volumes’ Storage objects in eDirectory. You decommission the volume by removing its related
Storage object from eDirectory for the original server. You recommission the volume by creating a
new Storage object in eDirectory for the destination server. When moving clustered devices cross-platform for a cluster conversion from NetWare to OES 11 SP2, Novell Cluster Services automatically
manages the Storage object updates to eDirectory.
NOTE: The decommission and recommission terminology is used only to illustrate the process; it does
not represent a particular technology or tool.
 Section 11.2.1, “Prerequisites,” on page 121
 Section 11.2.2, “Setting Up File Access For Users on the OES 11 SP2 Server,” on page 121
 Section 11.2.3, “Decommissioning Each NSS Pool and Its Volumes on the Original Server,” on
page 122
 Section 11.2.4, “Recommissioning Each NSS Pool and Its Volumes on the Destination Server,” on
page 123
 Section 11.2.5, “Using Scripts to Decommission and Recommission NSS Volumes,” on page 124
11.2.1
Prerequisites
The prerequisites in this section apply to moving multiple devices from a NetWare 6.5 SP8 server to
an OES 11 SP2 server.
IMPORTANT: When moving a non-clustered device, you must also move any other devices that
contribute segments to the NSS pools on the device you are moving.
Compatibility Issues for Using NSS Volumes Cross-Platform
Before you begin, make sure you understand Section 7.2, “Cross-Platform Issues for NSS
Volumes,” on page 69.
Original NetWare Server
You can move NetWare 6.5 SP4 (or later) NSS media to an OES 11 SP2 server if the operating platform
can support the NSS media format. NetWare 6.5 SP3, OES 1 SP2 Linux, and earlier servers do not
support the new media format.
For information, see Section 4.1, “Guidelines for Upgrading the Media Format of NSS Volumes,” on
page 41.
Destination OES 11 SP2 Server
 NSS and other needed OES 11 SP2 services must be installed on the OES 11 SP2 server where
you want to move the NSS volume. For information, see Section 3.2, “Installing and Configuring
NSS,” on page 34.
11.2.2
Setting Up File Access For Users on the OES 11 SP2 Server
Before or after you move an NSS volume from NetWare to Linux, you need to set up file access for
users on the OES 11 SP2 server.
Set Up Users in eDirectory
The original server and the destination server can be in the same or different eDirectory trees.
If the destination server is in the same tree as the original server, the file system trustees and trustee
rights continue to work after the move.
If the destination server is in a different tree, use eDirectory to enable or reassign affected users for
access in the destination tree. For information, see the eDirectory website (https://www.netiq.com/
documentation/edir88/).
Set Up Protocols and Services
To provide access for users on the OES 11 SP2 server, do one or more of the following, depending on
your network environment:
 NCP Server and Services: Install and configure NCP Server to allow the users to access the
volume with the Novell Client or other NCP services. For information, see the OES 11 SP2: NCP
Server for Linux Administration Guide.
 Novell AFP: Install and configure Novell AFP to allow the users to access the volume with the
Apple Filing Protocol. For information, see the OES 11 SP2: Novell AFP for Linux Administration
Guide.
 Novell CIFS: Install and configure Novell CIFS to allow the users to access the volume with
CIFS. For information, see the OES 11 SP2: Novell CIFS for Linux Administration Guide.
 Linux Protocols and Services: Install and configure other protocols, such as Novell Samba or
Linux NFS, to allow the users to access the volume with the non-NCP protocols. Using these
Linux services requires that the users be Linux enabled to execute Linux commands and services
on the volume.
For information about installing Novell Samba, see OES 11 SP2: Novell Samba Administration
Guide.
For information about configuring Linux NFSv3, see Section 18.16, “Exporting and Importing
NSS Volumes for NFS Access,” on page 230.
For information about enabling users and the Linux service with Linux User Management
(LUM), see the OES 11 SP2: Novell Linux User Management Administration Guide.
For guidelines about users and access, see Section 5.5, “Access Control for NSS,” on page 53.
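For NFS access, an /etc/exports entry might look like the following. The path, client specification, and fsid value are illustrative only; NSS volumes normally mount under /media/nss, and Section 18.16 documents the supported export options.

```
# /etc/exports — illustrative entry for an NSS volume named VOL1
/media/nss/VOL1   *(rw,sync,no_root_squash,fsid=742)
```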
11.2.3
Decommissioning Each NSS Pool and Its Volumes on the Original
Server
For each NSS pool, decommission the pool and its volumes from the original server.
1 If you use native Linux protocols or Linux services (such as SSH and FTP) for user access on the
destination OES 11 SP2 server, you must Linux-enable the current users of the volumes before
you move the devices.
IMPORTANT: If you do not use native Linux protocols or services for user access, this step is
not necessary.
Use one of the following methods to Linux-enable users of the volumes on the device:
 To enable multiple users at once, use the nambulkadd command.
User IDs are automatically refreshed after the enabling process ends.
 To enable a single user at a time, use iManager.
For information, see the OES 11 SP2: Novell Linux User Management Administration Guide.
2 Deactivate the pool on the device.
2a In iManager, click Roles and Tasks.
2b Click Storage > Pools.
2c Browse to select the original server where the NSS pool resides.
2d Select the pool you want to decommission, then click Deactivate.
3 Remove the eDirectory Storage objects for the NSS pool and each of its volumes.
3a In iManager, click Roles and Tasks.
3b Click eDirectory Administration > Delete Object.
3c Specify the name and context of the object or objects you want to delete.
3d Click OK.
4 Repeat Step 2 and Step 3 for each pool on the devices you plan to move.
5 If you are using DFS in the tree where the original server is located, run the vldb repair
command.
On the primary VLDB server, at the command prompt, enter
vldb repair
This removes a GUID entry from the VLDB for each of the decommissioned volumes.
6 Remove or reallocate the devices from the original server. Depending on your storage
configuration, this might require a server shutdown.
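The console portions of the sequence above can be summarized as follows. The pool name is illustrative, the eDirectory object removal still has to be done in iManager as described in Step 3, and the commands are printed rather than executed.

```shell
POOL=POOL1                            # illustrative pool name
# Build (but do not run) the console commands for one pool.
deact="nss /pooldeactivate=$POOL"     # Step 2: deactivate the pool
repair="vldb repair"                  # Step 5: prune VLDB entries (DFS trees only)
echo "$deact"
echo "$repair"
```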
11.2.4
Recommissioning Each NSS Pool and Its Volumes on the Destination
Server
For each NSS pool, recommission the pool and its volumes on the destination server.
1 Relocate or reassign the devices to the destination server.
2 Reboot the destination server to mount the devices.
3 If a pool on the devices you moved is not automatically activated, activate the pool.
3a In iManager, click Roles and Tasks.
3b Click Storage > Pools.
3c Browse to select the destination server.
3d Select the pool, then click Activate.
4 Create the eDirectory Storage objects for the NSS pool and each of its volumes.
4a In iManager, click Roles and Tasks.
4b Click Storage > Pools.
4c Browse to select the destination server.
4d Select the pool, then click Update eDirectory.
4e In the lower right, select View Volume Details to view all volumes on the selected pool.
iManager opens to the Volumes page with the server and pool preselected.
4f For each volume in the selected pool, select the volume, then click Update eDirectory.
4g Repeat Step 4d through Step 4f for each NSS pool and its volumes.
5 Allow the eDirectory tree to stabilize.
This can take several minutes.
6 Run the vldb repair command.
At the server command prompt on the primary VLDB server, enter
vldb repair
This adds a GUID entry to the VLDB for each of the recommissioned volumes.
11.2.5
Using Scripts to Decommission and Recommission NSS Volumes
Scripts are available to automate the process of decommissioning and recommissioning NSS volumes
that are not cluster-enabled. See the Decommissioning Script and Recommissioning Script for moving
NSS devices cross-platform (http://www.novell.com/documentation/oes/script/decom_recom.zip).
The decom_recom.zip file contains two Perl scripts:
 decom.pl: The decommissioning script deactivates the specified pool, removes eDirectory
Storage objects for a specified NSS pool and each of its volumes on the original server, then it
repairs the VLDB, if it exists, to remove the volumes’ information from the VLDB. You provide
the pool name, and the script automatically gets the list of volumes on the pool.
 recom.pl: The recommissioning script activates the specified pool, creates eDirectory Storage
objects for a specified NSS pool and each of its volumes on the destination server, then it repairs
the VLDB, if it exists, to add the volumes’ information to the VLDB. You provide the pool name,
and the script automatically gets the list of volumes on the pool.
The scripts support moving NSS volumes on NetWare 6.5 SP8 to OES 11 SP2. You can modify the
scripts to move volumes between any two non-clustered OES servers:
 NetWare to Linux
 Linux to NetWare
 NetWare to NetWare
 Linux to Linux
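A hypothetical invocation of the two scripts is shown below. The argument syntax is assumed (the description above says only that you provide the pool name), so check each script's usage message before running; the pool name is illustrative, and the commands are printed rather than executed.

```shell
POOL=POOL1                  # illustrative pool name
# On the original server:
echo perl decom.pl "$POOL"
# Later, on the destination server:
echo perl recom.pl "$POOL"
```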
Decommissioning NSS Pools on the Original Server with decom.pl
For each NSS pool, decommission the pool and its volumes from the original server.
1 If you use non-NCP protocols or Linux services for user access on the destination OES 11 SP2
server, you must Linux-enable the current users of the volumes before you move the devices.
IMPORTANT: If you use only NCP Server and NCP services for user access, this step is not
necessary.
Use one of the following methods to Linux-enable users of the volumes on the device:
 To enable multiple users at once, use the nambulkadd command.
User IDs are automatically refreshed after the enabling process ends.
 To enable a single user at a time, use iManager.
For information, see the OES 11 SP2: Novell Linux User Management Administration Guide.
2 For each NSS pool on the device you are moving, run the decom.pl script and specify the name
of the pool to decommission.
3 Remove or reallocate the devices from the original server. Depending on your storage
configuration, this might require a server shutdown.
Recommissioning NSS Pools on the Destination Server with recom.pl
For each NSS pool, recommission the pool and its volumes on the destination server.
1 Relocate or reassign the devices to the destination server.
2 Reboot the destination server to mount the devices.
3 For each NSS pool on the device you moved, run the recom.pl script and specify the name of
the pool to recommission.
11.3
Moving Non-Clustered Devices From NetWare 6.5 SP8 to
OES 11 SP2
NSS supports moving the devices containing NSS volumes from a NetWare 6.5 SP8 server to an OES
11 SP2 server. For information, see Section 11.1, “Guidelines for Moving Devices from NetWare 6.5
SP8 to OES 11 SP2,” on page 119.
To preserve the NSS pool and volumes on the device when you move it, you must modify the
volumes’ Storage objects in eDirectory. You “decommission” the volume by removing its related
Storage object from eDirectory for the original server. You “recommission” the volume by creating a
new Storage object in eDirectory for the destination server. When moving clustered devices cross-platform, while converting the cluster from NetWare to Linux, Novell Cluster Services automatically
manages the Storage object updates to eDirectory.
NOTE: The “decommission” and “recommission” terminology is used only to illustrate the process;
it does not represent a particular technology or tool.
IMPORTANT: Similar handling is necessary when moving devices with NSS pools between any two
OES servers (NetWare to Linux, NetWare to NetWare, or Linux to Linux).
This section describes how to move devices cross-platform from a NetWare 6.5 SP8 server to an OES
11 SP2 server:
 Section 11.3.1, “Prerequisites,” on page 125
 Section 11.3.2, “Setting Up File Access For Users on the OES 11 SP2 Server,” on page 126
 Section 11.3.3, “Decommissioning Each NSS Pool and Its Volumes on the Original Server,” on
page 127
 Section 11.3.4, “Recommissioning Each NSS Pool and Its Volumes on the OES Server,” on
page 127
11.3.1
Prerequisites
The prerequisites in this section apply to moving multiple devices from a NetWare 6.5 SP8 server to
an OES 11 SP2 server.
IMPORTANT: When moving a non-clustered device, you must also move any other devices that
contribute segments to the NSS pools on the device you are moving.
Compatibility Issues for Using NSS Volumes Cross-Platform
Before you begin, make sure you understand Section 7.2, “Cross-Platform Issues for NSS Volumes,”
on page 69.
11.3.2
Setting Up File Access For Users on the OES 11 SP2 Server
Before or after you move an NSS volume from NetWare 6.5 SP8 to OES 11 SP2, you need to set up file
access for users on the OES 11 SP2 server.
Set Up Users in eDirectory
The original server and the destination server can be in the same or different eDirectory trees.
If the destination server is in the same tree as the original server, the file system trustees and trustee
rights continue to work after the move.
If the destination server is in a different tree, use eDirectory to enable or reassign affected users for
access in the destination tree. For information, see the eDirectory website (https://www.netiq.com/
documentation/edir88/).
Set Up Protocols and Services
To provide access for users on the OES 11 SP2 server, do one or more of the following, depending on
your network environment:
 NCP Server and Services: Install and configure NCP Server to allow the users to access the
volume with the Novell Client or other NCP services. For information, see the OES 11 SP2: NCP
Server for Linux Administration Guide.
 Novell AFP: Install and configure Novell AFP to allow the users to access the volume with the
Apple Filing Protocol. For information, see the OES 11 SP2: Novell AFP for Linux Administration
Guide.
 Novell CIFS: Install and configure Novell CIFS to allow the users to access the volume with
CIFS. For information, see the OES 11 SP2: Novell CIFS for Linux Administration Guide.
 Linux Protocols and Services: Install and configure other protocols, such as Novell Samba or
Linux NFS, to allow the users to access the volume with the non-NCP protocols. Using these
Linux services requires that the users be Linux enabled to execute Linux commands and services
on the volume.
For information about installing Novell Samba, see OES 11 SP2: Novell Samba Administration
Guide.
For information about configuring Linux NFSv3, see Section 18.16, “Exporting and Importing
NSS Volumes for NFS Access,” on page 230.
For information about enabling users and the Linux service with Linux User Management
(LUM), see the OES 11 SP2: Novell Linux User Management Administration Guide.
For guidelines about users and access, see Section 5.5, “Access Control for NSS,” on page 53.
11.3.3
Decommissioning Each NSS Pool and Its Volumes on the Original
Server
For each NSS pool, decommission the pool and its volumes from the original server.
1 On the NetWare 6.5 SP8 server, if the device you want to move contains any mirrored partitions,
you must first break the mirror (remove all but one segment) before moving the device.
2 If you use non-NCP protocols or Linux services for user access on the destination OES 11 SP2
server, you must Linux-enable the current users of the volumes before you move the devices.
IMPORTANT: If you use only NCP Server and NCP services for user access, this step is not
necessary.
Use one of the following methods to Linux-enable users of the volumes on the device:
 To enable multiple users at once, use the nambulkadd command.
User IDs are automatically refreshed after the enabling process ends.
 To enable a single user at a time, use iManager.
For information, see the OES 11 SP2: Novell Linux User Management Administration Guide.
3 Deactivate the pool. Issue the following command at the server console:
nss /pooldeactivate=pool
4 Remove the eDirectory Storage objects for the NSS pool and each of its volumes.
4a In iManager, click Roles and Tasks.
4b Click eDirectory Administration > Delete Object.
4c Specify the name and context of the object or objects you want to delete.
4d Click OK.
5 Repeat Step 3 and Step 4 for each pool on the devices you plan to move.
6 If you are using DFS in the tree where the original server is located, run the vldb repair
command.
At the command prompt on the primary VLDB server, enter
vldb repair
The VLDB Repair removes a GUID entry from the VLDB for each of the decommissioned
volumes.
7 Remove or reallocate the devices from the original server. Depending on your storage
configuration, this might require a server shutdown.
11.3.4
Recommissioning Each NSS Pool and Its Volumes on the OES Server
For each NSS pool, recommission the pool and its volumes on the destination server.
1 Relocate or reassign the devices to the destination server.
2 Reboot the destination server to mount the devices.
3 If a pool on the devices you moved is not automatically activated, activate the pool. Issue the
following command at the Linux terminal console:
nss /poolactivate=pool
4 Create the eDirectory Storage objects for the NSS pool and each of its volumes.
4a In iManager, click Roles and Tasks.
4b Click Storage > Pools.
4c Browse to select the destination server.
4d Select the pool, then click Update eDirectory.
4e In the lower right, select View Volume Details to view all volumes on the selected pool.
iManager opens to the Volumes page with the server and pool preselected.
4f For each volume in the selected pool, select the volume, then click Update eDirectory.
4g Repeat Step 4d through Step 4f for each NSS pool and its volumes.
5 Allow the eDirectory tree to stabilize.
This can take several minutes.
6 Run the vldb repair command.
On the primary VLDB server, at the command prompt, enter
vldb repair
The VLDB Repair adds a GUID entry for each of the recommissioned volumes to the VLDB.
7 Make sure the devices are up and working as expected on the Linux System.
11.4 Moving Clustered Devices with NSS Volumes to OES 11 SP2
For information about converting clusters from NetWare 6.5 SP8 to OES 11 SP2, see the OES 11 SP2:
Novell Cluster Services NetWare to Linux Conversion Guide.
12 Managing Partitions
Novell Storage Services automatically manages the partitions used under software RAID devices and NSS pools, whether you create the RAIDs and pools in the Storage plug-in for iManager or in the NSS Management Utility (NSSMU).
 Section 12.1, “Understanding Partitions,” on page 129
 Section 12.2, “Viewing a List of Partitions,” on page 131
 Section 12.3, “Viewing Details for a Partition,” on page 132
 Section 12.4, “Deleting an NSS Partition,” on page 133
 Section 12.5, “Adding Other Types of File System Partitions,” on page 134
12.1 Understanding Partitions
 Section 12.1.1, “NSS Partitions,” on page 129
 Section 12.1.2, “Understanding Types of Partitions,” on page 129
 Section 12.1.3, “Understanding Partition Details,” on page 130
12.1.1 NSS Partitions
A partition is a logical division of a physical hard drive. NSS abstracts all partition creation and
deletion in iManager and NSSMU through the Pools page and the Software RAIDs page. When you
create NSS pools or NSS software RAID devices, NSS automatically creates the NSS partitions on the
devices you specify. You can view and label these NSS partitions from the Partitions page.
Partitions are automatically managed by NSS whenever you create pools. You do not manage NSS
partitions directly.
On OES 11 SP2, the Novell Linux Volume Manager (NLVM) tool is used to manage devices and
partitions.
12.1.2 Understanding Types of Partitions
The following table describes the variety of partition types for an OES 11 SP2 system:
The Partitions function in iManager and NSSMU is intended simply as a reporting tool so that you
can see the types of partitions that are being virtualized by higher-level storage entities such as pools
or software RAID devices. Generally, you cannot create or modify partitions with iManager or
NSSMU tools. NSS partitions are created for you automatically when you create a pool. There might
be multiple NSS partitions that are aggregated and managed underneath the single pool of space.
Similarly, the tools automatically create a Virtual Device when you create a software RAID device.
Table 12-1 Explanation of Partition Types

Cluster Service: A partition used to monitor cluster connectivity and services; it appears only on shared devices in the cluster.
DOS: A conventional DOS partition.
GPT: A partition that uses the GUID Partition Table (GPT) scheme.
Ext3: The partition type for the Linux Extended File System 3.
iSCSI: A partition in a target disk server in an iSCSI storage area network; it appears as an iSCSI device to file servers with iSCSI initiator software.
NSS: The primary partition type for NSS file systems.
Reiser: The partition type for Linux Reiser file systems.
System Configuration: A vendor-specific partition for maintaining metadata about the server configuration.
Unknown: A partition type that is unknown to the current operating system.
Virtual Device: A partition that serves as a segment in a software RAID 0 or RAID 5 device.

12.1.3 Understanding Partition Details
You can view the following information about partitions:
Table 12-2 Explanation of Partition Details

Partition ID: The partition name assigned by the device manager.
Partition Name: The physical descriptive name of the partition, which corresponds to the device’s physical descriptive name, followed by the type of partition it is.
Type: The abbreviated name of the partition type.
Status: Specifies whether a partition is In Use or Available.
Label: The partition name assigned by the administrator.
Starting Offset: The amount of space on the disk that precedes the beginning of the selected partition.
Size: The storage capacity of this partition.
Device Name: The physical descriptive name of the device where the partition exists. For software RAIDs, the description might include RAID 0, RAID 1, or RAID 5.
Device ID: The device name assigned by the device manager.
Pool Name: For NSS partitions, specifies the name of the pool that uses the partition.
12.2 Viewing a List of Partitions
The Partitions page in the Storage plug-in to iManager and in NSSMU is a reporting tool that allows
you to view a list of partitions. You can view the types of partitions that are being virtualized by
higher-level storage entities such as NSS pools or software RAID devices.
In iManager, you can use the Partitions page, or access information about partitions through a related
task.
 Section 12.2.1, “Viewing Partitions on the Server,” on page 131
 Section 12.2.2, “Viewing Partitions on a Device,” on page 131
 Section 12.2.3, “Viewing Partitions in a Software RAID Device,” on page 132
 Section 12.2.4, “Viewing Partitions in an NSS Pool,” on page 132
12.2.1 Viewing Partitions on the Server
The Partitions page in iManager reports the partitions that it finds on a selected server, and displays
information about them.
1 In iManager, click Storage > Partitions.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select the server that you want to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 View the list of partitions on the device.
For information about the partition types, see Section 12.1, “Understanding Partitions,” on
page 129.
12.2.2 Viewing Partitions on a Device
1 In iManager, click Storage > Devices.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select the server that you want to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Devices list, select a device.
4 In the Details area, click the arrow on the Partitions drop-down list to expand it.
5 Select a partition, then click the View Details icon.
The Partitions page displays a list of all the partitions that currently exist on the selected device.
12.2.3 Viewing Partitions in a Software RAID Device
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select the server that you want to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Software RAIDs list, select a device.
Wait for the page to refresh before continuing.
4 In the Details area, click the arrow on the Partitions drop-down list to expand it.
5 To view details about partitions, click the View Partition Details icon.
This opens the Partitions page. It displays a list of all the partitions that currently exist on the
selected device.
6 Select a partition from the Partitions list, then click Details to view more information.
12.2.4 Viewing Partitions in an NSS Pool
Although NSS abstracts the partitions underlying the pool structure, you can view information about
those partitions.
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select the server that you want to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select the pool you want to manage.
Wait for the page to refresh and display the details. The pool must be active to see partition
details.
4 If the pool is deactive, make sure the pool is selected, then click Activate.
After the page refreshes, the Partitions drop-down list is available.
5 Click on the arrow next to the Partitions drop-down list to expand the list.
6 To view details about the partitions, click the View Partition Details icon.
A Partitions page opens where you can view details about the pool’s partitions.
7 Select a partition from the Partitions list, then click Details to view more information.
12.3 Viewing Details for a Partition
1 In iManager, go to the Storage > Partitions page, then select a server.
2 In the list of partitions, select a partition.
3 Click Details to view its Partition Details page.
For information about the details, see Section 12.1.3, “Understanding Partition Details,” on
page 130.
4 Click the View Details icon next to the Device Name to go to the Devices page.
The device is preselected and its details are displayed automatically on the right.
5 Click the View Details icon next to the Pool Name to go to the Pools page.
The pool is preselected and its details are displayed automatically on the right.
12.4 Deleting an NSS Partition
Deleting a partition results in data loss. We recommend that you delete the storage structure that uses
the partition instead of deleting the partition itself.
 Section 12.4.1, “Deleting Partitions in a Pool,” on page 133
 Section 12.4.2, “Deleting Partitions in an NSS Software RAID Device,” on page 133
12.4.1 Deleting Partitions in a Pool
You cannot shrink the size of a pool by deleting its partitions. Use the Pools page in iManager or
NSSMU to delete the pool. For information, see Section 15.6, “Deleting a Pool,” on page 171.
12.4.2 Deleting Partitions in an NSS Software RAID Device
You can delete all of the partitions in a software RAID device by deleting the software RAID. For
information, see Section 13.16, “Deleting a Software RAID Device,” on page 157.
For NSS software RAIDs that provide fault-tolerance, there is limited support for deleting individual
partitions used in the RAID.
RAID 0 (delete a partition: No): Delete the RAID to remove its partitions.
RAID 1 (mirror) (delete a partition: Yes): Delete the RAID, or delete all but one partition.
RAID 5 (delete a partition: Yes): Delete the RAID, or delete only one failed partition at a time to repair and restore the RAID. See also Section 13.15, “Replacing a Failed Segment in a Software RAID,” on page 155.
For NSS software RAID devices, use the Software RAIDs page in iManager or NSSMU to access and
delete its partitions.
 “Using iManager to Delete a Partition in a Software RAID” on page 134
 “Using NSSMU to Delete a Partition in a Software RAID” on page 134
Using iManager to Delete a Partition in a Software RAID
The Storage plug-in to iManager does not allow you to delete partitions from the Partitions page
because it is designed to abstract all partition management. To delete a single partition in a software
RAID 1 or RAID 5 device:
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select the server that you want to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Software RAIDs list, select the software RAID 1 or RAID 5 device you want to manage.
Wait for the page to refresh and display the details.
4 Click the View Partition Details icon to open the Partitions page for the selected software RAID.
5 Select a partition, then click Delete to delete the partition and its contents.
Using NSSMU to Delete a Partition in a Software RAID
NSSMU allows you to delete any NSS partition directly from the Partitions page.
1 In NSSMU, select Partitions.
2 Select the partition you want to delete.
3 Click Delete to delete the partition and its contents, then click OK to confirm deletion.
12.5 Adding Other Types of File System Partitions
NSS and Linux POSIX file systems can coexist on a device. To create NSS and Linux POSIX partitions on the same device, the first partition must be a Linux partition. In this mixed mode, you can have a maximum of 15 partitions. If you are using DOS-partitioned disks, we recommend that you create no more than four partitions.
If the first partition on your device is an NSS partition, you cannot create a Linux POSIX file system partition on the same device.
Use NSSMU or the iManager Storage plug-in to manage NSS pools and volumes on OES 11 SP2. Use NSSMU, NLVM, or iManager to create Linux POSIX file systems on the device.
Do not use YaST or LVM2 management tools to create or manage NSS pools and volumes. During the
partition creation process, the YaST partitioner modifies the /etc/fstab configuration file to
incorrectly identify NSS pools as Ext3 partitions instead of NSS partitions. This can make your
system unbootable.
For example, a line like this is added to the /etc/fstab file for each NSS pool:
/dev/pool/poolname /nss/.pools/poolname ext3 defaults 1 2
where poolname is the name of the pool.
Instead of ext3, the partition type should be nss:
/dev/pool/poolname /nss/.pools/poolname nss defaults 1 2
To work around this problem when using the YaST partitioner, after you create a Linux POSIX file
system and before you reboot your system, edit the /etc/fstab file to remove or comment out the
lines that identify NSS partitions as Ext3 partitions.
To recover your system if you reboot your server before editing the /etc/fstab file:
1 Boot your OES 11 SP2 server in Single User mode.
2 Mount the root (/) file system with the remount and read/write (rw) options by entering the
following at a terminal console prompt:
mount -n -o remount,rw /
3 Edit the /etc/fstab file to do the following, then save the file:
 If a line wrongly identifies an NSS partition as an Ext3 partition, correct the entry by
changing ext3 to nss.
 If a line wrongly identifies an NSS partition as an Ext3 partition, and if the entry duplicates
a correct entry for the NSS partition, then remove or comment out the line that wrongly
identifies the NSS partition as an Ext3 partition.
4 Reboot the server to apply the changes.
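Assuming the entries follow the /dev/pool/poolname pattern shown above, the edit in Step 3 can be sketched with sed. This is illustrative only: MYPOOL and the sample file path are placeholders, and you would point the script at /etc/fstab (after backing it up) to fix the real file.

```shell
#!/bin/sh
# Sketch: rewrite NSS pool entries that YaST mislabeled as ext3.
# Demonstrated on a sample copy; set FSTAB=/etc/fstab to fix the real file.
FSTAB=/tmp/fstab.sample
printf '/dev/pool/MYPOOL /nss/.pools/MYPOOL ext3 defaults 1 2\n' > "$FSTAB"
# Change the file system type field from ext3 to nss for NSS pool lines only.
sed -i 's|^\(/dev/pool/[^[:space:]]*[[:space:]]*/nss/\.pools/[^[:space:]]*[[:space:]]*\)ext3|\1nss|' "$FSTAB"
cat "$FSTAB"
```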
13 Managing NSS Software RAID Devices
RAID devices help provide data fault tolerance for storage devices. In some RAID configurations, the
read/write performance is also improved. The Novell Storage Services (NSS) File System supports
software RAIDs 0, 1, 5, 0+1, and 5+1.
This section describes the following:
 Section 13.1, “Understanding Software RAID Devices,” on page 137
 Section 13.2, “Planning for a Software RAID Device,” on page 139
 Section 13.3, “Viewing a List of Software RAID Devices on a Server,” on page 143
 Section 13.4, “Viewing Details of a Software RAID Device,” on page 144
 Section 13.5, “Creating Software RAID Devices with iManager,” on page 145
 Section 13.6, “Creating Software RAID Devices with NSSMU,” on page 148
 Section 13.7, “Mirroring an Existing Pool with NSSMU,” on page 149
 Section 13.8, “Recovering a Mirror where All Elements Report ‘Not in Sync’ Using NSSMU,” on
page 150
 Section 13.9, “Creating a RAID 1 Mirror to Duplicate Data,” on page 150
 Section 13.10, “Creating a Software RAID 0+1 with NSSMU,” on page 151
 Section 13.11, “Creating a Software RAID 5+1 with NSSMU,” on page 151
 Section 13.12, “Renaming a Software RAID Device,” on page 152
 Section 13.13, “Increasing the Size of a Software RAID Device,” on page 152
 Section 13.14, “Restriping a Software RAID,” on page 154
 Section 13.15, “Replacing a Failed Segment in a Software RAID,” on page 155
 Section 13.16, “Deleting a Software RAID Device,” on page 157
 Section 13.17, “Viewing Pools on a Software RAID Device,” on page 158
 Section 13.18, “Viewing Partitions on a Software RAID Device,” on page 158
 Section 13.19, “Deleting Partitions on a Software RAID Device,” on page 159
 Section 13.20, “Managing Software RAID Devices with NSSMU,” on page 159
13.1 Understanding Software RAID Devices
A software RAID is a configuration for storage devices that emulates a hardware RAID device. A
software RAID combines partitioned space from multiple physical devices into a single virtual device
that you manage like any device. Each member device contributes an equal amount of space to the
RAID. You can create partitions, pools, and volumes on a RAID device, just as you would with any
physical storage device. Unlike hardware RAID devices, software RAIDs use standard host adapters
and do not require any special RAID hardware.
The following table describes the software RAID devices supported by NSS:
Table 13-1 RAID Characteristics

RAID 0 (2 to 14 segments): Data striping.
Advantages: Improves I/O performance for both reads and writes, which occur concurrently in parallel to its member devices.
Disadvantages: Does not provide data redundancy for data fault tolerance. If a single disk fails, the data cannot be recovered. You must re-create the RAID 0 and restore its volumes from a backup copy before you can use it again.

RAID 1 (2 to 4 segments): Data mirroring.
Advantages: Provides full data redundancy for failover and instant recovery. Improves read performance. Equivalent write performance is possible with a duplex connection, which provides a separate channel for each member disk.
Disadvantages: To achieve the best I/O performance, it requires separate channels for each member disk; otherwise, write performance decreases slightly. Each mirror must be on a separate device; mirrors can share no disks in common. Can be a member of only one pool.

RAID 5 (3 to 14 segments): Data striping with parity.
Advantages: Provides limited data recovery for one member disk at a time. If a single drive in the RAID fails, its volumes and pools remain active, but with degraded performance because the RAID must use parity to reconstruct the missing data. You must remove the failed segment, replace the disk, add the new segment, and restripe the data to reconstruct the data on the replacement drive. Improves read performance if all drives are present and working properly. If a drive fails, read performance is reduced because of parity reads and data reconstruction.
Disadvantages: Read responses are the same only if data happens to be in cache when called; otherwise, read performance is slightly reduced by parity checking. I/O performance for writes is reduced because it takes time to calculate and write parity to disk. The more writes to the drive, the greater the burden on the CPU. If multiple disks fail, the data cannot be recovered. You must re-create the RAID 5 and restore its volumes from a backup copy before you can use it again.

RAID 0+1 (2 to 4 RAID 0 devices): Mirroring RAID 0 devices.
Advantages: Provides full data redundancy for failover and instant recovery. Improves I/O performance for both reads and writes, but is slower than an unmirrored RAID 0 device.
Disadvantages: Requires separate channels for each member disk to achieve the best I/O performance. RAID 0 devices that you mirror can share no disks in common. If a single disk fails, you must re-create the RAID 0 and remirror the entire device; the data is restored through mirroring.

RAID 5+1 (2 to 4 RAID 5 devices): Mirroring RAID 5 devices.
Advantages: Provides full data redundancy for failover and instant recovery. If a single data disk fails, the RAID 5 device remains up and mirrored; its performance is degraded until you replace the failed disk. It can handle multiple disk failures, depending on the number of failures and which disks fail.
Disadvantages: Requires separate channels for each member disk to achieve the best I/O performance. RAID 5 devices that you mirror can share no disks in common. If multiple data disks fail concurrently on the same segment, you must remove the damaged segment from the mirror, re-create the RAID 5, and then mirror the RAID 5. The data synchronizes with the mirrors.

13.2 Planning for a Software RAID Device
Before you create your software RAID device, you must evaluate your storage requirements and
determine which RAID solution best fits your performance and fault tolerance needs.
 Section 13.2.1, “General Guidelines for Software RAID Devices,” on page 139
 Section 13.2.2, “Guidelines for Software RAID 1 Devices,” on page 140
 Section 13.2.3, “Drive Restrictions for NSS Software RAID 0, 1 and 5 Devices,” on page 140
 Section 13.2.4, “Choosing a Software RAID Solution,” on page 141
 Section 13.2.5, “Determining the Partition Size,” on page 142
 Section 13.2.6, “Determining the Number of Partitions,” on page 142
 Section 13.2.7, “Determining the Stripe Size for RAID 0 and RAID 5,” on page 143
13.2.1 General Guidelines for Software RAID Devices
To set up a RAID device, you allocate free space from any of your physical storage devices. NSS
transparently presents the allocated free space as virtual partitions that represent NSS-managed
physical partition areas on the participating drives. These partitions are the basic elements of a
software RAID device. How you allocate each of the partitions to pools depends on the nature of the
pools (shared or not shared for clustering) and the type of RAID device it is.
As an example, the RAID 0 device, illustrated in the following figure, consists of three RAID
partitions. It stripes data across three physical drives. The stripes are written and read in the order
designated.
Figure 13-1 Striping Data on a Software RAID 0 Device

Disk 1 partition (pool segment): Stripe 1, Stripe 4, Stripe 7
Disk 2 partition (pool segment): Stripe 2, Stripe 5, Stripe 8
Disk 3 partition (pool segment): Stripe 3, Stripe 6, Stripe 9
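The round-robin placement in Figure 13-1 amounts to a simple calculation: stripe i lands on disk ((i - 1) mod N) + 1 for an N-disk RAID 0. A sketch:

```shell
#!/bin/sh
# Round-robin stripe placement for the 3-disk RAID 0 shown in Figure 13-1.
disks=3
for stripe in 1 2 3 4 5 6 7 8 9; do
    echo "Stripe $stripe -> Disk $(( (stripe - 1) % disks + 1 ))"
done
```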
Consider the following general guidelines when creating a software RAID device:
 Each partition in the RAID configuration should come from a different device. NSS lets you
obtain RAID partitions from the same device, but this severely impedes the performance of your
file system.
 Do not use space from a drive that contains your system partition (such as the root (/) or /boot
partitions).
 You can use any combination of IDE or SCSI devices in a software RAID device. Make sure these
devices have similar performance characteristics; otherwise, your performance might decrease.
 In a clustered solution using Novell Cluster Services, for software RAIDs on shared disks:
 You can have only one pool associated with that RAID device.
 You must create an NSS pool and volume on that RAID device from the same server node
before the pool can be migrated to other nodes in the cluster.
13.2.2 Guidelines for Software RAID 1 Devices
The following is a list of requirements for mirroring partitions with software RAID 1 devices:
 Mirrored partitions must have the same partition type: NSS partitions to NSS partitions and
Traditional partitions to Traditional partitions.
 Mirrored partitions should be set up on devices that have similar performance thresholds.
 You can mirror only partitions, each on its own device. If a storage pool spans multiple devices, each of the individual partitions that make up that pool can be mirrored independently. All of the pool’s partitions must be mirrored in order for the data in that pool to be fault tolerant.
 You cannot combine mirror groups (existing groups with multiple mirrored partitions). A
mirror group is expanded by adding a partition from your free space but not by adding an
existing mirror group to the current group.
 All of the devices that participate in a mirror group must either all be marked as shared or all be marked as not shared for clustering.
 Avoid setting up multiple mirror groups on a single device. Such a configuration heavily degrades the performance of the file system.
 To mirror software RAID 0 devices, the member devices must have no drives in common.
13.2.3 Drive Restrictions for NSS Software RAID 0, 1 and 5 Devices
When you create or expand an NSS software RAID device, do not use space from the drive that
contains your boot partition or system partition. In a worst-case scenario, you might need to
reinitialize a drive if the partition in the RAID failed.
WARNING: Reinitializing a drive destroys its contents.
13.2.4 Choosing a Software RAID Solution
When choosing a software RAID solution, determine whether you need to address file system
performance, data fault tolerance, or both. The following table highlights the key data fault tolerance,
performance, and configuration issues associated with each RAID type.
NOTE: Using iSCSI devices on the iSCSI initiator server to create NSS software RAID 5 devices can cause poor performance. If you want RAID 5 protection, create the RAID 5 on the target server and present that RAID device to the initiator as a single iSCSI device.
Table 13-2 RAID Performance Characteristics

NSS support: All RAID types: Yes, for data volumes only.

Data fault tolerance: RAID 0: No. RAID 1: Redundancy. RAID 5: Parity. RAID 0+1: Redundancy. RAID 5+1: Redundancy and parity.

Read I/O performance: RAID 0: Best improved (parallel reads). RAID 1: Improved if there are parallel channels to each mirror. RAID 5: Improved if all segments are present and working properly. RAID 0+1: Improved, with the RAID 0 read advantage. RAID 5+1: Improved if there are parallel channels to each mirror, with the RAID 5 read advantage.

Write I/O performance: RAID 0: Best improved (parallel writes). RAID 1: Same if there are parallel channels; otherwise, slightly decreased. RAID 5: Somewhat decreased by the parity calculation. RAID 0+1: Slightly improved, depending on the channel configuration. RAID 5+1: Somewhat decreased, depending on the channel configuration and parity calculations.

Valid names: All RAID types: 2 to 15 characters (NSSMU); 2 to 58 characters (iManager). For more information on naming guidelines, see Section 5.4, “Naming NSS Storage Objects,” on page 50.

Number of segments: RAID 0: 2 to 14. RAID 1: 2 to 4. RAID 5: 3 to 14. RAID 0+1: Mirrors 2 to 4 software RAID 0 devices. RAID 5+1: Mirrors 2 to 4 software RAID 5 devices.

Maximum RAID size (total for combined segments): All RAID types: 2 TB.

Maximum segment size: RAID 0: 1 TB, if 2 segments. RAID 1: 2 TB for each mirror. RAID 5: 0.66 TB, if 3 segments. RAID 0+1: 2 TB for each mirror. RAID 5+1: 0.66 TB, if 3 segments.

Minimum segment size: All RAID types: 12 MB.
NOTE: The Maximum segment size and Maximum RAID size values apply only to DOS partitioned devices. If all devices are using GPT, the size limits are removed. Because pools are currently limited to 8 TB, RAID 1 sizes for pool objects are also limited to 8 TB.
13.2.5 Determining the Partition Size
The space that a member device contributes to a software RAID is called a partition or segment. Each physical device should contribute only one partition to the RAID; otherwise, it negates the benefits of the RAID. All member partitions in a software RAID device must be the same size.
If any one device that you use to create a software RAID is partitioned using the DOS partition table scheme, a RAID segment size can be only up to 2 TB (where 1 TB = 2^40 bytes). This is because all segments of a RAID must be the same size. If all devices are using GPT, a RAID segment size can be more than 2 TB. Because pools are currently limited to 8 TB, RAID 1 sizes for pool objects are also limited to 8 TB.
The capacity of the RAID device depends on the RAID type and the number of member partitions:
 RAID 0: Capacity equals the number of partitions times the partition size.
 RAID 1: Capacity equals one partition size.
 RAID 5: Capacity equals the number of partitions minus one, times the partition size.
 RAID 0+1: Capacity equals one partition size of space taken from the RAID 0; it is not limited to
the partition size of partitions in the RAID 0 itself.
 RAID 5+1: Capacity equals one partition size of space taken from the RAID 5; it is not limited to
the partition size of partitions in the RAID 5 itself.
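As a worked example of the capacity formulas above, assume four segments of 100 GB each (the numbers are illustrative only):

```shell
#!/bin/sh
# Usable capacity for n equal segments of size seg (in GB), per the formulas above.
seg=100
n=4
echo "RAID 0: $(( n * seg )) GB"         # striping uses every segment
echo "RAID 1: $(( seg )) GB"             # mirroring keeps one segment of data
echo "RAID 5: $(( (n - 1) * seg )) GB"   # one segment's worth holds parity
```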
13.2.6 Determining the Number of Partitions
Each software RAID device comprises multiple partitions. You must specify at least the minimum
number of partitions to create the type of RAID you choose. The maximum number of partitions is
limited by the maximum number supported by that RAID type and the maximum device size that
can be seen by NSS and Traditional file systems.
After you set up the software RAID device, you can increase its size by adding segments. In
iManager, click Storage > Software RAIDs > Increase Size, and then add segments up to the maximum
number of segments for each type of RAID.
You cannot remove segments in a RAID device to decrease its size. In general, to reduce the size of a
RAID device: Back up its data, delete the RAID, re-create the RAID with a smaller segment size or
fewer segments, and then restore its data from the backup copy.
For some RAIDs, you can replace a failed partition by removing the segment from the RAID,
replacing the failed disk, and then adding a segment to the RAID to replace the failed one. For
information, see Section 13.15, “Replacing a Failed Segment in a Software RAID,” on page 155.
13.2.7 Determining the Stripe Size for RAID 0 and RAID 5
In RAID 0 and RAID 5 configurations, NSS writes data to each member device in turn. The maximum
amount of data (in KB) committed to each write to a partition is called a stripe. Striping is unrelated
to file block sizes that you might set on your storage device.
Set the stripe size in increments of powers of two, between 4 KB and 256 KB (4, 16, 32, 64, 128, 256).
The default stripe size is 64 KB.
To maximize performance of the RAID, set the stripe size to correspond with your typical data write
requirements. In general, use smaller stripe sizes for data servers and medium-to-large sizes for file
servers. For most implementations, 64 KB provides the best performance.
13.3 Viewing a List of Software RAID Devices on a Server
Use the Software RAIDs task in the iManager Storage plug-in to create and manage your software
RAID devices.
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server in the NetIQ eDirectory tree where you are logged in.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of software RAID devices appears in the Software RAIDs list. Depending on the number of
devices, this can take a few seconds. Avoid clicking again in the page until it refreshes and
displays the Software RAIDs list.
3 Select a RAID device from the Software RAIDs list to view its details.
For an overview of the subtasks available from this page, see “Software RAIDs” on page 88.
13.4 Viewing Details of a Software RAID Device
You can view the following information about a selected software RAID device:
Table 13-3 Explanation of Details for a Software RAID Device

Name: The administrator-specified descriptive name for the RAID. In iManager, if you do not specify a name for the device at create time, a name is autogenerated in the format RAID <type> Device <number>.

Major Number, Minor Number: The device identity in major:minor format. Major and minor numbers are associated with the device special files in the /dev directory and are used by the operating system to determine the actual driver and device to be accessed by the user-level request for the special device file.

GUID: The Global Unique Identifier number that NSS assigns to the storage object. This number is necessary so your file system can locate the specific device.

Share State: Shareable for Clustering or Not Shareable for Clustering. The share state can be modified on the Devices page. If you assign partitions to a software RAID device, the devices for those member partitions must either all be marked as Shareable for Clustering or all be marked as Not Shareable for Clustering.

Capacity: The total storage capacity of the device that is reserved for data storage. For RAID 0, the storage capacity is equal to the sum of its partitions. For RAID 1, the storage capacity is equal to a single partition size; the duplicate partitions are mirrors. For RAID 5, the storage capacity is equal to the sum of its partitions minus one partition for parity.

Used Space: The amount of space on the device that is currently in use by NSS partitions.

Free Space: The total amount of space on the device that is currently not in use.

Pools: The drop-down list shows all pools that exist on this device. To view a pool’s details or to manage a pool, select the pool from the list, then click the View Details icon to go to the Pools page for that pool.
NOTE: A shared RAID 0, RAID 1, or RAID 5 has only one pool associated with it.

Number of Pools: The total number of pools that use this device.
NOTE: A shared RAID 0, RAID 1, or RAID 5 has only one pool associated with it.

Type: The type of RAID (such as RAID 0, RAID 1, RAID 5).

Stripe Size: The maximum size (in KB) of a data write, as configured for a RAID 0 or RAID 5 device.

Partition Size: The size (in MB) of partitioned space per drive.

Partitions: Lists the member partitions of the selected software RAID device in a drop-down list. To view a partition’s details, select the partition in the drop-down list, then click the View Details icon.

Number of Partitions: The total number of partitions in the selected software RAID device.

Status: Shows the status of a RAID 1 (mirrored) device’s partitions:
- In Sync: The mirror group is fully synchronized.
- Out of Sync: The mirror group is only partially synchronized; the percentage of mirroring in progress is displayed.
- Not Mirrored: The device is not mirrored (only one partition).
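The capacity rules described in Table 13-3 can be expressed as a small helper. This is an illustrative sketch only, not an NSS interface; it assumes equal-sized member partitions, which NSS software RAIDs require.

```python
# Illustrative sketch of the usable-capacity rules in Table 13-3.
# Assumes equal-sized member partitions; not an NSS API.

def raid_capacity_mb(raid_type, partition_mb, num_partitions):
    """Usable capacity (MB) of an NSS software RAID.

    RAID 0: sum of partitions; RAID 1: one partition (the rest are mirrors);
    RAID 5: sum of partitions minus one partition's worth for parity.
    """
    if raid_type == 0:
        return partition_mb * num_partitions
    if raid_type == 1:
        return partition_mb
    if raid_type == 5:
        return partition_mb * (num_partitions - 1)
    raise ValueError("unsupported RAID type")

print(raid_capacity_mb(0, 1000, 4))  # 4000
print(raid_capacity_mb(1, 1000, 3))  # 1000
print(raid_capacity_mb(5, 1000, 4))  # 3000
```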
Viewing Details in iManager
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server in the eDirectory tree where you are logged in.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
This opens the Software RAIDs Management page.
The Software RAIDs list displays the virtual RAID devices on the selected server. The list might include any RAID 0, RAID 1, or RAID 5 devices that you created. Hardware RAID devices are not listed.
3 Select a virtual storage device in the Software RAIDs list to view information about that device,
then wait for the page to refresh.
Viewing Details in NSSMU
1 In NSSMU, select Software RAIDs.
2 Select the RAID device you want to manage and wait for the information to be displayed.
13.5 Creating Software RAID Devices with iManager
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server in the eDirectory tree where you are logged in.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 Click New.
This opens a Wizard that steps you through the process.
4 Device Type and Name: Specify the type of RAID device you want to create, type a name for the
RAID device, then click Next.
If you leave the Name field blank, NSS creates a unique name for the device in the form of RAID
<type> Device <sequential_number>. For information about choosing names, see Section 5.4,
“Naming NSS Storage Objects,” on page 50.
This opens the Devices and Space page.
5 Devices and Space: Select devices and the amount of space to use from each, then click Next.
5a In the Partition Size field, type the amount of space in MB to use from each physical device.
NSS identifies devices that have enough free space to meet the partition-size requirements
and are eligible for inclusion in the RAID. For information, see “Determining the Partition
Size” on page 142.
If the amount you specify exceeds the amount of free space available on a minimum
number of physical devices, the RAID creation fails and returns an error message.
5b Select the check box next to each of the storage devices you want to obtain space from.
You can obtain space from multiple devices. Select only devices that have enough space
available to meet your needs. Each segment must be more than 12 MB. If any segment uses
space from a device that is partitioned with the DOS partition table scheme, then the
maximum segment size must be less than 2 TB. If all devices are using GPT, a RAID
segment size can be more than 2 TB. Because pools are currently limited to 8 TB, RAID 1
sizes for pool objects are also limited to 8 TB.
If a device’s available space is smaller than the specified partition size, it is disabled
(dimmed) so that you cannot select it.
IMPORTANT: Unallocated partitions (that is, partitions that are not mirrored and do not contain pools or other file systems) are deleted in order to present the unused space as free space for use by the RAID. This action causes no data loss.
A single physical device can offer multiple free space areas in the list. After you select a
device, all other free space on that device is disabled for that RAID. Each device should
contribute only one partition to the RAID; otherwise, it defeats the purpose of improved
performance and data protection that a RAID affords.
If the server has both local and shared devices, the partitions of a RAID can reside only on
shared devices or only on local devices. If you select the check box next to a local storage
device, the shared devices are dimmed so that you cannot select them. If you select the box
next to a shared storage device, the local devices are dimmed.
Only devices that have free space appear in the list. If no devices are listed, there is no space
available to create a software RAID device. Cancel the Wizard, add more devices to the
server or free up space on existing devices, then return to the Software RAIDs page to create
a RAID device.
5c If this is a RAID 0 or 5, click Next to go to the Stripe Size page. Otherwise, skip ahead to
Step 7.
This opens the Stripe Size page.
6 Stripe Size: If this is a RAID 0 or 5 device, set the stripe size. For information, see “Determining
the Stripe Size for RAID 0 and RAID 5” on page 143.
7 Click Finish. (Or click Cancel at any time to back out of the process.)
NSS creates the software RAID device, then opens to the Software RAIDs task. Your newly
created RAID device appears in the Software RAID Devices list. The name you provided for the
RAID appears in the Description field. This is the device name displayed when the RAID is
listed in the NSSMU Software RAIDs page.
After you create the RAID, manage it as you would a physical device in terms of using it for pools
and volumes. For information on configuring an NSS file system on your RAID, see Chapter 15,
“Managing NSS Pools,” on page 163.
13.6 Creating Software RAID Devices with NSSMU
1 In NSSMU, select RAID Devices from the NSSMU main menu.
2 Press Insert (Ins) to create a new device.
3 Select the RAID type (0, 1, or 5), then press Enter.
4 (Conditional) If this is a RAID 0 or 5 device, specify the stripe size, then press Enter.
The default stripe size is 64 KB, which typically provides the best performance for devices with
NSS volumes.
5 Use the arrow keys to select the partitions that you want to contribute space to the RAID.
If no partitions appear, it is an indication that either there are no partitions large enough or there
are no free partitions. Each segment must be more than 12 MB. If any segment uses space from a
device that is partitioned with the DOS partition table scheme, then the maximum segment size
must be less than 2 TB. If all devices are using GPT, a RAID segment size can be more than 2 TB.
Because pools are currently limited to 8 TB, RAID 1 sizes for pool objects are also limited to 8 TB.
IMPORTANT: Unallocated partitions (that is, partitions that are not mirrored and do not contain pools or other file systems) are deleted in order to present the unused space as free space for use by the RAID. This action causes no data loss.
After space is selected from a device, other free space associated with that device might not
appear. This prevents you from adding more than one segment from a single physical device,
and consequently, helps ensure the optimum performance of your file system.
The following table lists the number of segments that you can include in a RAID device:
RAID 0: Minimum of 2 and a maximum of 14 segments
RAID 1: Maximum of 4 segments
RAID 5: Minimum of 3 and a maximum of 14 segments; one segment is used for parity
6 Specify the amount of space to use, then press Enter.
The segment is created and added to the Segments Included in the RAID window.
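The segment-count limits in the table above and the size rules from Step 5 can be combined into one validation routine. This is an illustrative sketch only: the function and its parameter names are hypothetical, not an NSS API, and the RAID 1 minimum of one segment is an assumption based on the single-element mirrors described later in this chapter.

```python
# Illustrative validation of the segment rules described above; not an NSS API.
# RAID 1 minimum of 1 is an assumption (single-element mirrors exist); the
# other limits come from the table above and from Step 5.

SEGMENT_LIMITS = {0: (2, 14), 1: (1, 4), 5: (3, 14)}  # RAID type -> (min, max)

TWO_TB_MB = 2 * 1024 * 1024  # 2 TB expressed in MB

def check_segments(raid_type, segment_sizes_mb, dos_partitioned=False):
    """Raise ValueError if the proposed segments violate the documented rules."""
    lo, hi = SEGMENT_LIMITS[raid_type]
    if not lo <= len(segment_sizes_mb) <= hi:
        raise ValueError(f"RAID {raid_type} takes {lo} to {hi} segments")
    for size in segment_sizes_mb:
        if size <= 12:
            raise ValueError("each segment must be more than 12 MB")
        if dos_partitioned and size >= TWO_TB_MB:
            raise ValueError("a DOS-partitioned device limits a segment to less than 2 TB")
    return True

print(check_segments(0, [500, 500]))       # True
print(check_segments(5, [500, 500, 500]))  # True
```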
13.7 Mirroring an Existing Pool with NSSMU
1 Before you begin, review Section 13.2.2, “Guidelines for Software RAID 1 Devices,” on page 140.
2 In NSSMU, select Partitions from the NSSMU main menu.
3 From the list of existing partitions, select the NSS partition for the pool you want to mirror.
4 Press F3 to create the RAID 1 device and mirror the partition where the pool resides.
5 From the available devices, select up to three additional devices that you want to use as a
segment, then press Enter.
On the page where you select the second partition, press the space bar to select the partition for the second mirror. When the partition is selected, the asterisk next to it remains even if you move up and down in the list. Press F3 to mirror, then confirm the creation of the mirror when prompted.
The space assigned is the same size as the existing partition. The segments must reside on
different devices. If no partitions appear, it is an indication that either there are no partitions
large enough or no free space exists on other devices.
6 To confirm the RAID 1 device, select RAID Devices from the NSSMU Main menu. The RAID 1
device ID appears in the RAID Devices window.
7 Select the RAID device, then view its details to make sure that synchronization has begun.
The remirroring status shows a percentage greater than 0. The device is fully synchronized at 100%. If you are mirroring a shared partition (that is, if it contains a cluster-enabled pool), the synchronization to the mirror does not begin automatically. Continue with Step 8a.
8 If you are mirroring a shared partition (that is, if it contains a cluster-enabled pool), start the
remirroring manually by doing the following:
8a Use one of the following methods to initiate mirroring for the newly created mirror:
- At the server console of a cluster node, enter the following to migrate the cluster resource to another node:
cluster migrate cluster_resource destination_node_name
Migrating the pool causes load scripts to be executed and causes the mirroring to start on the new node.
- At the server console of the cluster node where the pool is currently active, enter
dmsetup message raid_device_name 0 remirror=on
WARNING: Issue this command only on the node where the pool is currently active.
Issuing the command on multiple nodes can corrupt the mirror.
8b Verify that the remirroring has begun by opening NSSMU on the node where the pool is
currently active, open the RAID page, then select the RAID device.
The remirroring status shows a percentage greater than 0. The device is fully synchronized at 100%.
13.8 Recovering a Mirror where All Elements Report ‘Not in Sync’ Using NSSMU
If all elements of a mirrored RAID report a status of “not in sync”, use the following procedure to
recover the mirror.
1 Determine which element you believe to be the in-sync element.
2 Log in to the server as the root user, and open a terminal console.
3 Launch nssmu.
4 From the NSSMU menu, select Software RAIDs.
5 In the list of RAIDs, select the RAID, then press Enter to see its elements.
6 Remove all of the elements from the mirror except the element you want to keep. Select the
element to delete, then press Delete. Repeat this step for each of the elements you want to
remove from the Software RAID. Wait for the segment to be removed before you remove the
next segment.
When you are done, you have a RAID 1 device that consists of the single element you believe to be the in-sync element.
7 Force the single RAID element to be in sync. On the Software RAIDs page, select the RAID device, then press F6 (Restripe).
8 Add elements back into the mirror as desired.
8a On the Software RAIDs page, select the RAID device and press F3 (Expand).
8b From the list of available devices, select the device or devices that you want to add.
8c Press F3 (Accept).
13.9 Creating a RAID 1 Mirror to Duplicate Data
You can create a RAID 1 mirror to duplicate data on a new device, such as to duplicate data on a new
storage array.
1 Set up the RAID 1 mirror between the initial storage element and new storage element.
For information, see Section 13.5, “Creating Software RAID Devices with iManager,” on
page 145.
2 Let the RAID create a duplicate of the data on the mirror.
3 (Optional) If you want to retain the data on the initial storage element and use the element
elsewhere, remove the disk it is on from the server.
WARNING: If you leave the disk attached to the server while deleting the element from the
RAID, its data is destroyed.
4 Use NSSMU or iManager to delete the initial storage element from the mirror, leaving only the
new storage element active on the server as a single-element mirror.
The RAID 1 group remains in Media Manager and uses only 1 KB of memory. The new array
performs normally, without performance degradation and without consuming additional
resources.
WARNING: Leave the RAID 1 group active because deleting the RAID 1 group deletes all of its
member partitions and destroys the data on them.
In iManager and from the command line, the new array reports that it is Not Mirrored. In
NSSMU, the new array reports that it is In-Sync and 100% remirrored, even though there is only
a single element.
13.10 Creating a Software RAID 0+1 with NSSMU
In NSS, you can mirror your software RAID 0 devices to create a nested RAID 0+1 device. Use
NSSMU to mirror the partition used by the pool on a RAID 0 device. The following procedure
describes how to create the RAID 0+1 in NSSMU. You can also create the RAID 0 devices first in
iManager, but you must use NSSMU to be able to select them for a mirror.
1 In NSSMU, create a software RAID 0 device with 2 to 14 segments.
For information, see Section 13.6, “Creating Software RAID Devices with NSSMU,” on page 148.
2 Repeat Step 1 one to three times to create 2 to 4 RAID 0 devices.
The RAID 0 devices you use to create the mirror must have no drives in common. Each drive
you use to create the RAIDs can belong to only one of the RAID 0 devices.
3 In NSSMU, create a pool on one of the RAID 0 devices.
3a In NSSMU, select Pools from the NSSMU main menu.
3b Press Insert (Ins) to create a pool.
3c From the list of available devices, select one of the RAID 0 devices.
3d Assign all of the available space to the pool, then press Enter.
4 Create a RAID 1 device to mirror the pool.
4a In NSSMU, select Partitions from the NSSMU main menu.
4b Select the NSS partition for the pool you want to mirror.
4c Press F3 to create the RAID 1 device and mirror the partition.
4d From the available devices, select up to three remaining RAID 0 devices you created above.
4e Press F3 to initialize and create the RAID 1 (mirror) device.
After the RAID device is created, the device ID appears in the RAID Devices window. This
window is viewed from the RAID Devices NSSMU main menu. The RAID is a RAID 0+1.
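The rule that the mirrored RAID 0 devices must have no drives in common can be checked mechanically. The following sketch shows the idea with a hypothetical mapping of RAID 0 devices to their member drives; the names are examples, not output from any NSS tool.

```python
# Illustrative check that RAID 0 devices selected for a mirror share no drives.
# Device and drive names below are hypothetical examples.
from itertools import combinations

def drives_disjoint(raid_drive_map):
    """Return True if no physical drive appears in more than one RAID 0 device."""
    for (_, drives_a), (_, drives_b) in combinations(raid_drive_map.items(), 2):
        if set(drives_a) & set(drives_b):
            return False
    return True

print(drives_disjoint({"raid0_a": ["sda", "sdb"], "raid0_b": ["sdc", "sdd"]}))  # True
print(drives_disjoint({"raid0_a": ["sda", "sdb"], "raid0_b": ["sdb", "sdc"]}))  # False
```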
13.11 Creating a Software RAID 5+1 with NSSMU
In NSS, you can mirror your software RAID 5 devices for your server to create a nested RAID 5+1
device. Use NSSMU to mirror the partition used by the pool on a RAID 5 device. The following
procedure describes how to create the RAID 5+1 in NSSMU. You can also create the RAID 5 devices
and NSS pool for the RAID 5 devices in the Storage plug-in for iManager, and then switch to NSSMU
to mirror the pool’s partition.
1 In NSSMU, create a software RAID 5 device with 3 to 14 segments.
For information, see Section 13.6, “Creating Software RAID Devices with NSSMU,” on page 148.
2 Repeat Step 1 one to three times to create 2 to 4 RAID 5 devices.
The RAID 5 devices you use to create the mirror must have no drives in common. Each drive
you use to create the RAIDs can belong to only one of the RAID 5 devices.
3 In NSSMU, create a pool on one of the RAID 5 devices.
3a In NSSMU, select Pools from the NSSMU main menu.
3b Press Insert (Ins) to create a pool.
3c From the list of available devices, select one of the RAID 5 devices.
3d Assign all of the available space to the pool, then press Enter.
4 Create a RAID 1 device to mirror the pool.
4a In NSSMU, select Partitions from the NSSMU main menu.
4b Select the NSS partition for the pool you want to mirror.
4c Press F3 to create the RAID 1 device and mirror the partition.
4d From the available devices, select one of the remaining RAID 5 devices you created above.
NOTE: You can select up to three devices.
4e Press F3 to initialize and create the RAID 1 (mirror) device.
After the RAID device is created, the device ID appears in the RAID Devices window. This
window is viewed from the RAID Devices NSSMU main menu. The RAID is a RAID 5+1.
13.12 Renaming a Software RAID Device
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server in the eDirectory tree where you are logged in.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 Click Rename.
This opens a dialog box where you can enter the new name.
4 Type the new name.
Do not leave the field blank when you are renaming because a default RAID name is not
generated for a rename procedure. For information about choosing names, see Section 5.4,
“Naming NSS Storage Objects,” on page 50.
5 Click Finish. (Or click Cancel at any time to back out of the process.)
NSS renames the software RAID device, then opens to the Software RAIDs Page. The details for
the renamed software RAID device are displayed on the page, with the new name in the
Description field.
13.13 Increasing the Size of a Software RAID Device
You can increase the capacity of an existing software RAID 0, 1, or 5 device by adding partitions, up to
the maximum number for the type of RAID. You cannot modify the size of an individual partition
after the device is created.
IMPORTANT: If the software RAID device is shared in a cluster, connect to the node where the
RAID is currently active to manage the RAID and increase the size of the RAID.
To add partitions to an existing software RAID:
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server in the eDirectory tree where you are logged in.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 Make sure that there is no I/O for the volumes on the RAID device by deactivating the volume.
3a Click Volumes.
3b Select the volumes in the RAID device you want to expand.
3c Click Dismount.
4 Select a device in the Software RAID Devices list.
If the device contains the maximum number of partitions, the Increase Size option is dimmed.
You cannot expand the RAID. Do not proceed.
5 Click Increase Size.
This opens the Expand a RAID Wizard to let you choose from available free space on devices
that are not already members in the RAID and that contain enough free space to meet the
RAID’s current partition size.
6 Do one of the following:
- If there are no devices available, you cannot expand the RAID. Click Cancel.
A device must be the same size or larger than the segment size being used in the RAID. You might need to add or initialize a new device, then try again.
- Select the check box next to each of the storage devices you want to obtain space from.
The partition size is predetermined by the existing RAID. The partition you choose must be
the same size as other partitions comprising the device.
Stripe size is fixed at its current value for the duration of the expansion. If you want to
change the stripe size, restripe after the expansion.
You can choose multiple partitions up to the maximum for the type of RAID it is. For
information, see “Determining the Number of Partitions” on page 142 and “Determining
the Partition Size” on page 142.
7 Click Finish.
After you add a partition, the RAID’s data is restriped across both existing and new partitions.
During the restriping, the RAID’s capacity does not include the added partition. While
restriping, the new device is considered a failed device until it is completely resynchronized.
After the restriping is complete, the RAID’s capacity includes the added partition.
While expanding a RAID 5 device, if one of the drives goes down (either one of the existing segments or the newly added segment), the pool deactivates. If you remove any device from a RAID 5 other than the one that was just added for the restripe, NSS treats it as a two-disk error and deactivates the RAID and the pool.
Remirroring and Restriping Temporarily Impacts System Performance
For software RAID 1 devices, the additional mirror begins to collect data immediately. Performance is
slightly impacted, but data is available.
For software RAID 0 or RAID 5 devices, the system automatically begins restriping the disks to
accommodate the partition you just added. This severely impacts performance until the striping is
complete. The capacity of the RAID is not updated until the restriping is done. If the restriping
process is interrupted before it completes, it begins automatically on system reboot.
13.14 Restriping a Software RAID
In general, there are three reasons for restriping software RAID 0 and 5 devices:
- Partition Replacement: If a partition fails, you must replace it. Restriping can recover the data in a single lost partition in a RAID 5 by using parity. However, the data must be restored from a backup tape if a partition fails in a RAID 0.
- RAID Expansion: If you expand a RAID 0 or 5 device, the RAID restripes the data across all members.
- RAID Stripe Size: If you increase or decrease the stripe size of a RAID 0 or 5 device, the RAID restripes the data across all members. This happens infrequently, unless you are measuring performance with different stripe sizes to determine which best fits your operational needs.
If the restriping process is interrupted, the RAID recognizes that when the system reboots, and
automatically continues the restriping process. You can also use iManager to pause and resume a
restriping process.
IMPORTANT: If the RAID is on a shared system and no NSS pool is on it, it will not restart
automatically after a reboot. In this case, the RAID must be enabled using the nlvm raid enable
<raid_name> command. Ensure that the RAID device is enabled on one node only. Carelessly
enabling a RAID device can lead to corruption.
When expanding a RAID 5, if the newly added drive goes down during the restripe, the restriping
continues without the new partition and puts the RAID in a degraded state with one partition
missing. If the same partition comes back online, it finishes the restripe. If the partition has
completely failed, after the degraded restriping is complete, you can add a new replacement
partition, and the RAID restripes to fix it.
To manually resume or pause the Restripe process:
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server in the eDirectory tree where you are logged in.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 Make sure that there is no I/O for the volumes on the RAID device by deactivating the volume.
3a Click Volumes.
3b Select the volumes on the RAID device that you want to restripe.
3c Click Dismount.
4 Select a device in the Software RAIDs list.
Wait for the page to refresh before continuing.
5 Click Restripe.
The restriping process begins or pauses immediately. Restriping severely degrades I/O
performance until the restriping is complete.
13.15 Replacing a Failed Segment in a Software RAID
For some RAIDs, you can replace a failed segment by removing the segment from the RAID,
replacing the failed disk, and then adding a segment to the RAID to replace the failed one. The
following table provides recommended actions for each RAID type.
Table 13-4 Recommended Actions on Segment Failure in a Software RAID

RAID 0 (cannot remove segments): If one or more disks in a RAID 0 fail, all data is lost on that device. You must re-create the RAID and restore its data from a backup copy.

RAID 1 (can remove all but one mirrored segment): The last segment of a RAID 1 device is the primary device that contains the data. You can use the NSSMU Partitions option to remove mirrored segments to free up the related space on those segments, but the data on the last remaining segment is no longer mirrored. If a disk fails in one of the segments, all of its data is lost on that segment. The remaining mirrors continue to operate normally. Remove the segment, replace the failed disk, and then add the segment as an element in the mirror. The data is synchronized over time until it is fully mirrored on the new segment.

RAID 5 (can remove one data segment at a time): You can temporarily remove one segment at a time in a RAID 5 device to replace a failed data disk. For example, you can replace a single failed data disk while the system is still operational. Use the NSSMU Partitions option to remove the failed segment, replace the failed disk, add a segment to the RAID 5, and then restripe the RAID. The parity is used during restriping to restore the missing data. Read and write performance is degraded on the failed segment until the data is recovered because of parity-related reads and calculations. If multiple data segments fail concurrently, all data is lost on that device. You must delete the RAID 5, re-create it with good disks, and then recover its data from a backup copy. For example, if a second segment fails before the restriping is completed for the first drive replacement, this is considered a two-drive failure. You must recover data from a backup copy.

RAID 0+1 (can remove all but one mirrored segment; cannot remove disks from the underlying RAID 0): The last segment of a RAID 0+1 is a RAID 0 device that contains the original data. If one or more disks fail in one of the mirrored segments, all data is lost on that segment. The remaining mirrors continue to operate normally. Remove the failed RAID 0 from the RAID 0+1. Delete the RAID 0, replace the failed disks, re-create the RAID 0, and then add the RAID 0 segment as an element in the mirror. The data is synchronized over time until it is fully mirrored on the repaired RAID 0 segment.

RAID 5+1 (can remove all but one mirrored segment; within each mirrored RAID 5, one data segment can safely be removed at a time): The last segment of a RAID 5+1 is a RAID 5 device that contains the original data. If a single data segment fails in a RAID 5 that is an element in a RAID 5+1, repair the RAID 5 while it is operational, as you would with any RAID 5. If multiple disks in a mirrored segment fail concurrently, all data is lost on that segment. The remaining mirrors continue to operate normally. Remove the failed RAID 5 from the RAID 5+1. Delete the RAID 5, replace the failed disks, re-create the RAID 5, and then add the repaired RAID 5 as a segment in the RAID 5+1. The data is synchronized over time until it is fully mirrored on the repaired RAID 5 segment.
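The parity-based recovery that Table 13-4 describes for RAID 5 rests on XOR parity: the parity block of each stripe is the XOR of that stripe's data blocks, so any single missing block can be rebuilt from the surviving blocks and the parity. The following is a minimal illustration of the principle, not NSS code.

```python
# Minimal illustration of XOR parity, the principle behind RAID 5 recovery.
# Not NSS code; block contents below are arbitrary examples.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*blocks))

# One stripe: three data blocks plus a parity block.
data = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_blocks(data)

# Lose one data block; rebuild it from the survivors and the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```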
A Segment Fails in a RAID 0
If a segment fails in a RAID 0, you must delete the software RAID 0 device, create a new RAID 0
device, then copy your data to the RAID from backup media. For information, see Section 13.16,
“Deleting a Software RAID Device,” on page 157.
A Segment Fails in a RAID 1
1 From the command console, enter nssmu.
2 From the NSSMU main menu, select Software RAIDs.
3 Remove the bad segment.
3a Select the software RAID 1 device that you want to manage.
3b Press Enter to show its member segments. The bad segment should show a status of Bad -
Unavailable Partition.
3c Select the bad segment, then press Delete.
4 Expand the RAID with a replacement segment.
4a Select the software RAID 1 device that you want to manage.
4b Press F3 to increase the size of the RAID.
4c From the list of available devices, select the device you want to use for the new segment.
The segment size defaults to the size of existing partitions in the RAID 1.
4d Select OK twice.
5 The data begins mirroring automatically and continues until the segment is 100% mirrored.
A Single Data Segment Fails in a RAID 5
To replace a single failed data segment in a software RAID 5:
1 From the command console, enter nssmu.
2 From the NSSMU main menu, select Software RAIDs.
3 Remove the bad segment.
3a Select the software RAID 5 device that you want to manage.
3b Press Enter to show its member segments. The bad segment should show a status of Bad -
Unavailable Partition.
3c Select the bad segment, then press Delete.
4 Expand the RAID with a replacement segment.
4a Select the software RAID 5 device that you want to manage.
4b Press F3 to increase the size of the RAID.
4c From the list of available devices, select the device you want to use for the new segment.
The partition size defaults to the size of existing partitions in the RAID 5.
4d Select OK twice.
5 The restriping should begin automatically. If it does not, from the Software RAIDs page, select the
RAID 5 device, then press F6 to restripe.
Multiple Segments Fail in a RAID 5
If two or more segments fail concurrently in a RAID 5 or if the parity partition fails, you must delete
the software RAID 5 device, create a new RAID 5 device, then copy your data to the RAID from
backup media. For information, see Section 13.16, “Deleting a Software RAID Device,” on page 157.
13.16 Deleting a Software RAID Device
Deleting a software RAID device ends the RAID relationship and destroys the NSS file structure on its member partitions. All data is lost. Back up your data or move it to another location before you delete the software RAID device.
IMPORTANT: Deleting a single-segment NSS RAID1 device that is part of an NSS pool from
iManager deletes the pool and all underlying partitions of the pool. To delete such a RAID1 device
and attach the segment directly to the pool, use NSSMU.
NOTE: Only the RAID1 device gets deleted and no data is lost under the following scenarios:
 When the RAID1 device has only one segment and if the device is consumed by a pool, deleting
the RAID1 device deletes only the device. The segment is directly attached to the pool.
 When the RAID1 device has only one segment and if the device is an SBD mirror, deleting the
RAID1 device deletes only the mirror. The mirror's segment becomes the SBD partition.
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server in the eDirectory tree where you are logged in.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 Make sure that there is no I/O for the volumes on the RAID device by deactivating the volume.
3a Click Volumes.
3b Select the volumes in the RAID device you want to delete.
3c Click Dismount.
4 Select a device in the Software RAIDs list.
5 Click Delete.
13.17 Viewing Pools on a Software RAID Device
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of devices appears in the Software RAIDs list.
3 In the Software RAIDs list, select a RAID device.
4 In the Details area, click the arrow on the Pools drop-down list to expand it.
5 Select a pool, then click View Details.
This opens the Pools page where you can view the details of the pool and manage it. See
Section 15.2, “Creating a Pool,” on page 164 for a sample Pools page.
For information about pool management, see “Managing NSS Pools” on page 163.
13.18 Viewing Partitions on a Software RAID Device
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Software RAIDs list, select a device.
Wait for the page to refresh before continuing.
4 In the Details area, click the arrow on the Partitions drop-down list to expand it.
5 To view information about partitions, click the Partitions View Details icon.
This opens the Partitions page. It displays a list of all the partitions that currently exist on the
selected device.
6 Select a partition from the Partitions list, then click Details to view its details.
13.19 Deleting Partitions on a Software RAID Device
You can delete all but one partition of a RAID 1 (mirror) and only one partition at a time for a RAID 5.
To delete a RAID 1 partition, you must delete its RAID.
1 In iManager, click Storage > Software RAIDs.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 Make sure that there is no I/O for the volumes on the RAID device by deactivating the volume.
3a Click Volumes.
3b Select the volumes in the RAID device pool.
3c Click Dismount.
4 In the Software RAIDs list, select a device.
Wait for the page to refresh before continuing.
5 Click the Partitions View Details icon.
This opens the Partition page. It displays a list of all the partitions that currently exist on the
selected device.
6 Select a partition from the Partitions list, click Delete, then click OK.
13.20 Managing Software RAID Devices with NSSMU
NSSMU provides keystrokes that enable you to view, expand, restripe, create, and delete RAID
devices. For more information, see Table 9-11 on page 92.
14 Managing Multipath I/O to Devices
Novell Storage Services for Linux does not provide multipath I/O (MPIO) support. This section
describes how to use Linux tools to configure and manage multipathing for devices, and how to
configure Linux multipathed devices for use with NSS.
 Section 14.1, “Understanding Multipath I/O,” on page 161
 Section 14.2, “NSS Errors When Linux Multipath Is Not Configured,” on page 162
 Section 14.3, “Configuring Multipath,” on page 162
14.1 Understanding Multipath I/O
Multipath I/O software resolves multiple paths to a device into a single device and manages the
traffic flow across the paths transparently for file systems on the devices. NSS on Linux does not
provide an LVM-based software solution for managing multiple paths like the Media Manager
multipath solution on NetWare. Instead, you can use Linux multipath I/O tools to configure and
manage multiple paths for devices where you want to create NSS software RAIDs, pools, and
volumes. You can also use solutions from the storage array vendor or third-party vendor.
Devices have multiple connection paths when you implement hardware configurations such as the
following:
 The server has multiple host bus adapters for connection to external devices.
 The external storage device has multiple interconnects for connection to one or more host bus
adapters.
 A server with multiple host bus adapters is connected to a storage device through intermediate
devices, such as a Fibre Channel SAN switch.
In a Linux host, when there are multiple paths to a storage controller, each path appears as a separate
block device, which results in multiple block devices for a single LUN. The Device Mapper Multipath
utility detects multiple paths with the same LUN WWID, and creates a new multipath device with
that WWID.
For example, a host with two HBAs attached to a storage controller with two ports via a single
unzoned Fibre Channel switch sees four block devices:
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde
Device Mapper Multipath creates a single block device, /dev/mpath/mpath1, that reroutes I/O
through those four underlying block devices.
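You can confirm that multiple paths share a WWID before configuring multipath. The following sketch assumes the example device names above and the SLES 11 location of the scsi_id utility; adjust both for your system.

```shell
# Collect the WWID reported by each path (example device names from the text).
# All paths to the same LUN report the same WWID, which is the key that
# Device Mapper Multipath uses to group them into one multipath device.
report=""
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    if [ -b "$dev" ] && [ -x /lib/udev/scsi_id ]; then
        report="${report}${dev} $(/lib/udev/scsi_id -g -u "$dev")
"
    else
        report="${report}${dev} (not present on this system)
"
    fi
done
printf '%s' "$report"
```

After multipathd is running, the multipath -ll command shows how the grouped paths appear under the resulting multipath device.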
14.2 NSS Errors When Linux Multipath Is Not Configured
If you have not started multipathing before you attempt to configure NSS pools or volumes, NSS
cannot resolve the multiple paths and attempts the command on all the paths. You get the following
error:
Error 21621: zERR_MSAP_POOL_ALREADY_IN_USE.
After you have configured Linux multipathing for a device, the multipath device appears in NSSMU
or iManager.
14.3 Configuring Multipath
For detailed information about configuring and managing multipath I/O for devices, see Managing
Multipath I/O for Devices (http://www.suse.com/documentation/sles11/stor_admin/?page=/
documentation/sles11/stor_admin/data/multipathing.html) in the SLES 11: Storage Administration
Guide (http://www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/
stor_admin/data/bookinfo.html).
For information about using Linux multipath solutions with clustered pool resources on Novell
Cluster Services, see “Multipath I/O Configuration Requirements” in the OES 11 SP2: Novell Cluster
Services for Linux Administration Guide.
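As a starting point, a minimal /etc/multipath.conf for SLES 11 might look like the following sketch. The values shown (user_friendly_names and the blacklist pattern) are common defaults, not array-specific recommendations; consult the SLES Storage Administration Guide and your storage vendor's documentation for settings appropriate to your hardware.

```
# /etc/multipath.conf -- minimal sketch (assumed values; verify for your array)
defaults {
        user_friendly_names yes      # use mpatha, mpathb, ... instead of WWIDs
}
blacklist {
        # skip local, non-SAN device nodes
        devnode "^(ram|loop|fd|md|sr|scd|st)[0-9]*"
}
```

After editing the file, restart the multipath services (on SLES 11, the boot.multipath and multipathd init scripts) so that the changes take effect.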
15 Managing NSS Pools
Novell Storage Services uses storage pools to efficiently acquire and use all free space available on
devices. A pool is an area of storage that consists of space, called a partition, obtained from one or
more of the storage devices available on a server. The amount of space that each storage device
contributes can differ for each member device.
Use the iManager Storage plug-in to configure and manage NSS pools. For information about
iManager, see Section 9.1, “Novell iManager and Storage-Related Plug-Ins,” on page 77.
You can also use the console-based NSS Management Utility to configure and manage NSS pools. For
information, see Section 9.2, “NSS Management Utility (NSSMU) Quick Reference,” on page 92.
For information about installing Novell Cluster Services and cluster-enabling shared NSS pools in the
Novell Cluster Services clusters, see “Configuring and Managing Cluster Resources for Shared NSS
Pools and Volumes” in the OES 11 SP2: Novell Cluster Services for Linux Administration Guide. For OES
2 SP3 to OES 11 SP2 upgrades, see “Requirements and Guidelines for Upgrading Clusters from OES
2 SP3” in the OES 11 SP2: Novell Cluster Services for Linux Administration Guide. For NetWare 6.5 SP8 to
OES 11 SP2, see “Managing File Systems in Mixed-Mode Clusters” in the OES 11 SP2: Novell Cluster
Services NetWare to Linux Conversion Guide.
This section describes how to configure and manage NSS pools by completing the following tasks:
 Section 15.1, “Guidelines for Creating a Pool,” on page 164
 Section 15.2, “Creating a Pool,” on page 164
 Section 15.3, “Activating and Deactivating Pools,” on page 168
 Section 15.4, “Increasing the Size of a Pool,” on page 169
 Section 15.5, “Renaming a Pool,” on page 170
 Section 15.6, “Deleting a Pool,” on page 171
 Section 15.7, “Viewing Pools on a Server,” on page 171
 Section 15.8, “Viewing Pool Details,” on page 172
 Section 15.9, “Viewing Partition Information for a Pool,” on page 173
 Section 15.10, “Viewing Volume Information for a Pool,” on page 174
 Section 15.11, “Viewing Device Information for a Pool,” on page 174
 Section 15.12, “Moving a Pool,” on page 174
 Section 15.13, “Preventing Pools from Activating on Multiple Servers,” on page 175
 Section 15.14, “Updating eDirectory Pool Objects,” on page 179
 Section 15.15, “Updating eDirectory for Shared Pool,” on page 180
 Section 15.16, “What’s Next,” on page 181
15.1 Guidelines for Creating a Pool
Devices must be initialized before any of their space is shown as available for creating a pool;
uninitialized devices show no available space. For instructions, see
Section 10.5, “Initializing a Disk,” on page 112.
Novell NCP Server must be installed, configured, and running. For information, see Section 5.6.1,
“NCP,” on page 58.
Novell CIFS must be installed, configured, and running before you can use the CIFS option when
cluster-enabling an NSS pool. For information, see Section 5.6.3, “Novell CIFS,” on page 58.
Novell AFP must be installed, configured, and running before you can use the AFP option when
cluster-enabling an NSS pool. For information, see Section 5.6.2, “Novell AFP,” on page 58.
15.2 Creating a Pool
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of pools appears in the Pools list.
3 To create a new pool, click New.
The New Pool Wizard opens to guide you through the process.
4 Specify a name for the new storage pool, then click Next.
For guidelines about naming pools, see Section 5.4, “Naming NSS Storage Objects,” on page 50.
In the following example, the device is not shared, so the Cluster Enable on Creation check box is
not displayed.
5 Specify device parameters and the space to use, then click Next.
5a Select the check box next to one or more of the available devices you want to use in the pool.
5b In Used Size, specify the amount of space in megabytes (MB) to add to the pool from each
device you selected, up to the amount of free space available for that device.
To update the Total Pool Size as you enter the device’s Used Size, click anywhere within the
Wizard dialog box. If any entry exceeds a device’s available space, the pool expansion fails
and returns an error message. When expanding a pool, only the devices that have at least 1 MB
of free space are displayed in the list, because NSS pools can be expanded only on an MB
boundary.
The pool can be up to 8 TB.
You can obtain space from one or more of the devices listed. Only devices that have free
space appear in the list. If no devices are listed, it might be because you need to initialize a
recently added device, or it might be that there is no space available on any devices. Cancel
the Wizard, add more devices to the server or free up space on existing devices, then return
to the Pools page to increase the size of this pool.
5c Select Activate on Creation (Mount on Creation for Linux) to activate (mount) the device
automatically after it is created.
This parameter is automatically enabled. Deselect the check box to turn it off.
5d If the selected device is shareable, the Cluster Enable on Creation check box is automatically
selected so the pool can be shared in a cluster configuration. Deselect the check box if you
do not want to cluster-enable this pool for sharing.
5e If the pool is cluster-enabled, click Next to specify its cluster parameters. Otherwise, skip
ahead to Step 7.
6 If the Cluster Enable on Creation check box is selected, an additional page appears that allows you
to specify the cluster information.
Specify the following shared pool clustering parameters:
 NCP Virtual Server Name: The name assigned to the virtual server that represents the
shared pool in the cluster.
When you cluster-enable a pool, a virtual Server object is automatically created in NetIQ
eDirectory and given the name of the Cluster object plus the name of the cluster-enabled
pool. For example, if the cluster name is cluster1 and the cluster-enabled pool name is
pool1, then the default virtual server name will be cluster1_pool1_server. You can edit
the field to change the default virtual server name.
 CIFS Virtual Server Name: The name assigned to the virtual server for handling CIFS
(Common Internet File System) requests. This is the name of the server as it appears in a
Windows system. For CIFS Virtual Server Name, the maximum supported character length
is 15.
NOTE: If the NCP Virtual Server Name is 15 characters or fewer and the CIFS check box is
selected under the Advertising Protocols, the CIFS Virtual Server Name is auto-populated
with the same name as the NCP Virtual Server Name. You can edit the field to
change the CIFS virtual server name.
 IP Address: The IP address that you want to assign the virtual server.
Each cluster-enabled NSS pool requires its own IP address. The IP address is used to
provide access and failover capability to the cluster-enabled pool (virtual server). The IP
address you assign to the pool remains assigned to the pool regardless of which server in
the cluster is accessing the pool.
IMPORTANT: The IP address for the virtual server must be in the same IP subnet as the
server nodes in the cluster where you plan to use it.
To specify an IP address, tab between the different entries; no dot is required in the fields.
For example, if the IP address is 192.168.1.1, type the following:
192 168 1 1
 Advertising Protocols: Protocols that give users native file access to data.
Specify one or more advertising protocols by selecting the check boxes of the protocols you
want to enable for data requests to this shared pool.
NOTE: For OES 2 and earlier, Novell CIFS and Novell AFP are not available. The CIFS and AFP
check boxes can be selected, but because the functionality is not available on those platforms,
selecting the check boxes has no effect.
 NetWare Core Protocol (NCP) is the Novell networking protocol used by the Novell
Client. It is selected by default. Selecting NCP causes commands to be added to the
pool-resource load and unload scripts to activate the NCP protocol on the cluster. This
lets you ensure that the cluster-enabled pool you are creating is highly available to
Novell clients.
 CIFS is the Windows networking protocol. Selecting CIFS causes commands to be
added to the pool-resource load and unload scripts to activate the CIFS protocol on the
cluster. This lets you ensure that the cluster-enabled pool you are creating is highly
available to CIFS/Samba clients.
 Apple Filing Protocol (AFP) is the Macintosh networking protocol. Selecting AFP
causes commands to be added to the pool-resource load and unload scripts to activate
the AFP protocol on the cluster. This lets you ensure that the cluster-enabled pool you
are creating is highly available to AFP clients.
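To illustrate what the protocol selections do, a pool-resource load script for an NCP-enabled pool contains lines like the following sketch. The script is generated for you by Novell Cluster Services; the pool, cluster, volume (VOL1), and IP address values here are examples taken from or modeled on this section, and the generated script for your release may differ.

```
exit_on_error nss /poolact=POOL1
exit_on_error ncpcon mount VOL1=254
exit_on_error add_secondary_ipaddress 192.168.1.1
exit_on_error ncpcon bind --ncpservername=CLUSTER1_POOL1_SERVER --ipaddress=192.168.1.1
```

Selecting CIFS or AFP adds analogous lines that start and stop those services when the resource is brought online or offline.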
7 Click Finish.
For NSS, the create time might take longer than expected. Typically, the pool creation takes less
than a minute, and the volume creation takes less than 10 seconds. However, if you have a large
tree or the server does not hold an eDirectory replica, the create time can take up to 3 minutes.
8 Create a volume on the pool. For information, see Section 18.3, “Creating Unencrypted NSS
Volumes,” on page 216 or Section 19.3, “Creating an Encrypted Volume,” on page 242.
15.3 Activating and Deactivating Pools
You might need to temporarily restrict user access to a pool. Instead of bringing down the server,
you only need to deactivate the specific pool.
The Activate option on the Pools page makes the selected pools and all the volumes in them available
for user access. The Deactivate option on the Pools page makes the selected pools and all the volumes in
them temporarily unavailable to users. It does not destroy volumes in the pools, nor does it destroy
the data contained in the volumes.
To change the state of a pool:
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select the pool that you want to activate or deactivate.
Wait for the page to refresh. It displays the pool’s details and enables management options. The
State field shows whether the pool is Active or Deactive.
4 To change the state of the pool:
 To activate the pool, click Actions > Activate.
 To deactivate the pool, click Actions > Deactivate.
15.4 Increasing the Size of a Pool
Using the Increase Size option on the Pools page expands the storage capacity of a selected pool by
adding new partitions. You can increase the size of your storage pools, but you cannot reduce their
size.
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select the pool that you want to expand.
Wait for the page to refresh. It displays the pool’s details and enables management options.
4 Click Increase Size.
This opens an Expand a Pool Wizard that guides you through the process of adding partitions
from available devices.
5 Select the devices you want to use and the amount of space to use from each device.
In the Used Space field, type the amount of space in megabytes (MB) to add, up to the amount of
free space available for that device. If any entry exceeds a device’s available space, the pool
expansion fails and returns an error message.
Software RAID 1 (mirrored) devices can contain only one pool per device. If you select a RAID 1
device to add a partition to your pool, NSS automatically allocates all of the available space to
the pool.
The Total Pool Size is the sum of the partitions you define plus the current pool size. Initially, the
Total Pool Size field displays the current size of the pool. To update the Total Pool Size as you enter
values in the Used Size field, click anywhere within the Wizard dialog box.
You can obtain space from one or more of the devices listed. Only devices that have free space
appear in the list. If no devices are listed, there is no space available to increase the size of the
pool. Cancel the Wizard, add more devices to the server or free up space on existing devices,
then return to the Pools page to increase the size of this pool.
6 Click Finish, or click Cancel at any time to back out of the process.
15.5 Renaming a Pool
The Rename option on the Pools page lets you modify the name of the selected pool. For example,
you might want to assign a pool name that relates to a department name change. The pool must be in
the active state when you rename the pool so that eDirectory can be updated.
For an NSS pool, NLVM must unload and reload the pool in order to rename it. Depending on the
pool’s load-time behavior and share state, the pool might be in a deactive state after the rename and
require administrator action to reload the pool and its volumes. Because the volumes are temporarily
unavailable, it is best to perform a pool rename during a period of little or no user activity. See Table
15-1 to determine what actions to take after renaming a pool:
Table 15-1 Actions Required after Renaming an NSS Pool

Pool Share State: Unshared
Pool Load-Time State: Autoloaded
Pool State After a Rename: Active with volumes dismounted
Action Required: Mount the pool’s volumes.

Pool Share State: Unshared
Pool Load-Time State: Not autoloaded
Pool State After a Rename: Deactive
Action Required: Activate the pool, then mount its volumes.

Pool Share State: Shared
Pool Load-Time State: Load and unload is controlled by Novell Cluster Services. Before you
rename a cluster-enabled pool, make sure to offline the pool resource, activate the pool by
using iManager or NSSMU instead of using the load script, then you can rename the pool by
using iManager or NSSMU.
Pool State After a Rename: Deactive
Action Required: Online the pool resource to activate the pool and its volumes. Novell Cluster
Services automatically updates the pool resource load and unload scripts to reflect the name
change. Also, NSS automatically changes the Pool Resource object name in eDirectory.
Because renaming involves changing information in the Pool object in eDirectory, you must ensure
that the shared pool is active on a cluster node that has its NCP server object in the same context as
the Pool object of the pool you are going to rename.
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select the pool that you want to rename.
Wait for the page to refresh and display the details.
4 If the pool is deactive, activate it by clicking Activate.
Wait for the page to refresh and display the details.
5 Click Rename.
This opens the Rename a Pool Wizard that guides you through the process.
6 Specify a name, then click Finish.
If the name is valid and unique, the pool is successfully renamed.
If not, you receive an error, and you must repeat the process. For information about valid pool
names, see Section 5.4, “Naming NSS Storage Objects,” on page 50.
7 If the Pools page does not automatically update to show the new name for the pool, in Roles and
Tasks, click Pools to refresh the current page.
8 Activate the pool if it is deactive, then mount the pool’s volumes.
15.6 Deleting a Pool
You might need to delete an NSS pool to create more free space for other pools. The Delete option on
the Pools page removes one or more selected pools from the server, including all member partitions
and the data on them. Deleting a pool removes the ownership of the space it occupied, freeing the
space for reassignment. If the pools you want to delete are active, deactivate them before you delete
them.
WARNING: Deleting a pool destroys all the volumes and data in it. These volumes cannot be
restored.
If the pool was created on a RAID1 device, deleting the pool also deletes the RAID1 device.
 Section 15.6.1, “Prerequisites for Deleting a Pool,” on page 171
 Section 15.6.2, “Procedure,” on page 171
15.6.1 Prerequisites for Deleting a Pool
If the pool is shared in a Novell Cluster Services cluster, you must offline the cluster resource before
you attempt to delete the clustered pool or its cluster resource.
If the pool has pool snapshots, you must delete the pool snapshots before you delete the pool. For
information, see Section 17.10, “Deleting a Pool Snapshot,” on page 207.
15.6.2 Procedure
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select one or more pools that you want to delete.
Wait for the page to refresh. It displays the details of the selected pools and enables their management options.
4 Click Delete.
5 Click Yes to confirm the deletion, or click No to cancel the deletion.
WARNING: If you click Yes, the pool and all the volumes and data on it are immediately
destroyed.
15.7 Viewing Pools on a Server
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
When the page refreshes, a list of pools appears in the Pools list. Depending on the number of
pools, this can take a few seconds. Avoid clicking again in the page until it refreshes and
displays the Pools list.
15.8 Viewing Pool Details
The Pool Details area of the Pools page displays information about a selected pool in the Pools list, as
described in the following table:
Table 15-2 Explanation of Pool Details

Name: The descriptive pool name assigned by the administrator.

Mount Point: On your Linux system, this is the mount location for the NSS pool. The default
mount location is /opt/novell/nss/mnt/.pools/poolname, where poolname is the name of the
selected pool.

Partitions: A list of all of the partitions that are part of the selected pool. To view
information about any of the partitions, select the partition in the drop-down list, then click
the View Details icon.

Number of Partitions: The total number of partitions currently assigned to the selected pool.

State: The current state of the selected pool, as Active or Deactive. Active pools are
available to users; deactive pools are not available to users.

LSS Type: The type of Loadable Storage System, such as ZLSS, CDDVD, or DOSFAT.

Share State: Shows whether the selected pool is on a device that is marked as Shareable for
Clustering or as Not Shareable for Clustering. The system pool cannot reside on a device that
is shareable for clustering. Use the Devices page to set this device attribute.

Volumes: A list of all existing volumes residing in the selected pool. To view information
about any of the volumes or to manage any of the volumes, select the volume in the drop-down
list, then click the View Details icon. You can also select a pool, then click Volumes in
Roles and Tasks.

Number of Volumes: The total number of volumes residing in the selected pool.

Devices: A list of the descriptive device names of all logical devices contributing space in
the selected pool.

Number of Devices: The total number of devices currently assigned to the selected pool.

Total Space: The total amount of space assigned to the selected pool.

Free Space: The total amount of space that is currently not in use on the selected pool.

Used Space: The total amount of space that is in use on the selected pool.
 Purgeable Space: The total amount of space in the selected pool that is currently in use as
a salvage area or partitioned space that is not yet otherwise assigned.
 Other In-Use Space: The total amount of space in the selected pool that is currently in use
and cannot be easily deleted without destroying data.

Block Size: The maximum amount of data committed to a single write. Possible sizes include 4,
8, 16, 32, or 64 KB. The default setting for NSS is 4 KB.

Creation Date: The time stamp (date and time) that the pool was created.

Last Update: The time stamp (date and time) that the pool was last modified by a management
action.
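For example, you can verify the mount point of an active pool from the console. The pool name below is a placeholder; the .pools directory exists only on servers with mounted NSS pools.

```shell
# Check whether a pool is mounted at the default NSS location
# (POOL1 is an example name; substitute your own pool).
pool=POOL1
mp="/opt/novell/nss/mnt/.pools/$pool"
if [ -d "$mp" ]; then
    echo "Pool $pool is mounted at $mp"
else
    echo "Pool $pool is not mounted at the default location"
fi
```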
Procedure
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select a pool to view its details.
Wait for the page to refresh and display the pool’s details.
4 The pool must be active to display its details. If the Details area is empty, select the pool, then
click Activate.
When the page refreshes, you can view the pool’s details.
15.9 Viewing Partition Information for a Pool
Although NSS abstracts the partitions underlying the pool structure, you can view information about
those partitions.
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select the pool you want to manage.
Wait for the page to refresh and display the details. The pool must be active to see partition
details.
4 If the pool is deactive, make sure the pool is selected, then click Activate.
After the page refreshes, the Partitions drop-down list is available.
5 Click on the arrow next to the Partitions drop-down list to expand the list.
6 To view details about a partition, select the partition, then click View Details.
A Partition Information page opens where you can view details about the partition.
15.10 Viewing Volume Information for a Pool
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select a pool.
Wait for the page to refresh and display the details in the Details area.
4 To view the volumes in the pool, use one of these methods:
 In the Details area, click the arrow on the Volumes drop-down list.
To view details for a volume in the list, select the volume, then click View Details. The
Volumes page opens with the server and volume preselected.
 Click Storage > Volumes.
The Volumes page opens with the server preselected. To view details for a volume in the
Volumes list, select the volume, then wait for the page to refresh.
For information about Volume management, see “Managing NSS Volumes” on page 209.
15.11 Viewing Device Information for a Pool
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
When the page refreshes, a list of pools appears in the Pools list.
3 In the Pools list, select a pool.
Wait for the page to refresh and display the pool’s details in the Details area.
4 To view the list, in the Details area, click the arrow on the Devices drop-down list.
5 (Optional) Select the device, then click View Details to view its details.
The Devices page opens with the server and device preselected. Wait for the page to refresh to
view the device’s details.
For information about device management, see “Managing Devices” on page 105.
15.12 Moving a Pool
You can move an NSS pool from one location to another on the same system. The pool remains active
during this process. All the segments in the pool are consolidated and moved to the specified
device(s). If a specified device is larger than the original device, the pool is automatically expanded
on completion of the move job.
If a clustered pool is moved, the move job resumes on the other node when the cluster resource is migrated.
174
OES 11 SP2: NSS File System Administration Guide for Linux
NOTE: You can use NSSMU or the NLVM command line option to move a pool. You cannot use
iManager.
1 In NSSMU, select Pools from the main menu.
2 Select the pool that you want to move.
3 Press M.
4 Select the device to which you want to move the pool, then press Ins/Enter.
5 Specify the partition size.
You can select more than one device. Repeat Step 4 and Step 5 for each device you select.
6 (Optional) To remove a selected device from the list, select the device in the lower list and press
Delete.
7 (Optional) To change the partition size of a selected device, select the device in the lower list,
press Ins/Enter; then modify the size for that device.
8 Press F3 after configuring the devices.
The Pool Information area displays the status of the pool move as a percentage (%).
9 Press F5 to refresh the status.
In a cluster setup, the exact status of the pool move can be seen only on the node where the pool
move job is in progress.
10 When you select a pool in the Pools list, NSSMU looks at the pool's move status. If the move is
not complete (<100%), NSSMU displays the percent complete. If the move is complete (100%),
NSSMU finalizes the move by setting the pool to the new location and removing the original
location after a confirmation.
If the move is not complete, you can refresh the pool's display to cause NSSMU to check the
move status again. For example, you can use any of these methods to refresh the display:
 Select another pool, then re-select the pool being moved.
 While viewing the page for the pool being moved, do a fast refresh by pressing the spacebar
or by pressing F5.
For more information on using the NLVM command line option to move the pool, see “Move” in the
OES 11 SP2: NLVM Reference guide.
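As a sketch of the command-line alternative, an NLVM pool move might look as follows. The exact argument syntax is an assumption here (POOL1 and sdc are placeholder names); confirm it against the “Move” section of the OES 11 SP2: NLVM Reference before use. The script only prints the command:

```shell
# Dry-run sketch of moving a pool with NLVM.
# The argument form below is assumed, not quoted from the NLVM Reference;
# POOL1 and sdc are placeholders.
POOL="POOL1"
DEVICE="sdc"
CMD="nlvm move ${POOL} ${DEVICE}"
echo "Would run: ${CMD}"
```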
15.13 Preventing Pools from Activating on Multiple Servers
Multiple Server Activation Prevention (MSAP) prevents some accidental activations of a pool on
more than one server at a time. You should never purposely attempt to activate a pool on two servers
at the same time.
 Section 15.13.1, “Understanding MSAP,” on page 176
 Section 15.13.2, “Enabling or Disabling MSAP for All NSS Pools,” on page 176
 Section 15.13.3, “Enabling or Disabling MSAP for a Given NSS Pool,” on page 177
 Section 15.13.4, “Rebuilding the MSAP Block for a Given NSS Pool,” on page 178
 Section 15.13.5, “Determining If MSAP Is Enabled or Disabled on NSS Pools,” on page 179
 Section 15.13.6, “Managing MSAP with XML or APIs,” on page 179
 Section 15.13.7, “Additional Information,” on page 179
Managing NSS Pools
175
15.13.1 Understanding MSAP
MSAP is enabled by default for all pools on the server. When enabled, it helps prevent some
accidental activations of a pool on more than one server at a time. It does not catch all multiple
activations. MSAP is not meant as a replacement of clustering software that controls shared pools.
MSAP protects pools on systems that do not have clustering installed but are attached to a shared
disk by accident. For example, a pool might not be marked with the Shareable for Clustering
attribute, but it exists on shared disks seen by multiple servers.
Pool MSAP also protects against dangerous conflicts that can occur if you disable the Shareable for
Clustering flag in order to force an activation of a shared pool, or if you use override=shared when
activating a pool. If MSAP detects a conflict, it deactivates the pool before massive corruption occurs.
If you unload Novell Cluster Services NLM software, or if you are not running it, pool MSAP
provides an extra level of protection. The clustering software watches pools that are marked with the
Shareable for Clustering attribute; MSAP detects conflicting connections from multiple servers and
prevents corruption of pools even on devices that are marked as Not Shareable for Clustering.
In some cases, the MSAP software causes pools to take up to 30 seconds to activate. This delay might
occur on the next pool activation after the Server ID or the Cluster ID changes for a given server pool.
The Server ID changes if the registry is corrupted. The Cluster ID changes if the Cluster eDirectory
object ID is lost.
If MSAP is enabled, all active NSS pools are read every 14 seconds. If your storage media are not shared between multiple servers (as they would be in a SAN), you can clear the zpool_feature_msap bit. You should not clear this bit if your pools are on physically shared storage media.
If a pool can be accessed by older servers not running the Support Pack with the MSAP software,
then multiple pool activations can still occur.
IMPORTANT: MSAP does not protect against partition conflicts for pools. It does not prevent
multiple servers from creating a pool in the same partition.
15.13.2 Enabling or Disabling MSAP for All NSS Pools
By default, MSAP is enabled for all pools on the server when the server is booted.
To manually enable or disable MSAP for all pools on the server, issue the following MSAP console
commands at the NSS Console (nsscon) as the root user.
nss /msapserver
Enables MSAP for all the pools on the server. By default, MSAP is enabled for every pool on the
server.
nss /nomsapserver
Disables MSAP for all the pools on the server. This command remains in effect only until the
server is next rebooted.
IMPORTANT: We recommend that you never disable MSAP.
15.13.3 Enabling or Disabling MSAP for a Given NSS Pool
Use the procedures in this section to enable or disable MSAP for a given pool.
The /PoolMSAP option enables MSAP for a given pool on the server. Use the command when the pool
is activated. MSAP is enabled the next time the pool is activated.
The /NoPoolMSAP option disables MSAP for a given pool. Use the command when the pool is
activated. MSAP is disabled the next time the pool is activated.
IMPORTANT: We recommend that you never disable MSAP.
1 Open a terminal console, then log in as the root user.
2 If the pool is not active, activate it now.
2a Start NSSMU by entering the following at the terminal console prompt:
nssmu
2b Go to the Pools page.
2c Select the pool, then activate it by pressing F7.
2d Exit NSSMU.
3 Enable or disable MSAP for a given pool.
3a Start the NSS Console by entering the following at the terminal console prompt:
nsscon
3b At the nsscon prompt, do one of the following:
 Enable MSAP: Enter
nss /poolmsap=poolname
 Disable MSAP: Enter
nss /nopoolmsap=poolname
3c Close the NSS Console by entering
exit
4 Deactivate the pool, then activate it again.
4a Start NSSMU by entering the following at the terminal console prompt:
nssmu
4b Go to the Pools page.
4c Select the pool, then deactivate it by pressing F7.
4d Select the pool, then activate it by pressing F7 again.
4e Exit NSSMU.
MSAP is now enabled or disabled, depending on your action in Step 3.
5 Verify that MSAP is enabled or disabled for the given pool.
5a Start the NSS Console by entering the following at the terminal console prompt:
nsscon
5b At the nsscon prompt, enter
nss /pools
5c Review the messages to determine if the pool was successfully enabled or disabled as
follows:
 MSAP Enabled: The Multi-Use Detect message is displayed for the pool.
 MSAP Disabled: The Multi-Use Detect message is not displayed for the pool.
5d Close the NSS Console by entering
exit
15.13.4 Rebuilding the MSAP Block for a Given NSS Pool
If the MSAP block for a pool becomes corrupt, it prevents the pool from entering the Maintenance state. Use the /MSAPRebuild option to rebuild a pool’s corrupt MSAP block. You must deactivate the pool before issuing the rebuild command. Rebuilding an MSAP block does not give the rebuilder ownership of the pool.
1 Open a terminal console, then log in as the root user.
2 Deactivate the pool.
2a Start NSSMU by entering the following at the terminal console prompt:
nssmu
2b Go to the Pools page.
2c Select the pool, then deactivate it by pressing F7.
2d Exit NSSMU.
3 Rebuild the MSAP block for the pool.
3a Start the NSS Console by entering the following at the terminal console prompt:
nsscon
3b At the nsscon prompt, enter
nss /msaprebuild=poolname
3c Close the NSS Console by entering
exit
4 Activate the pool.
4a Start NSSMU by entering the following at the terminal console prompt:
nssmu
4b Go to the Pools page.
4c Select the pool, then activate it by pressing F7.
4d Exit NSSMU.
The pool should now be able to be placed in Maintenance mode.
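The steps above can be condensed into the following command sequence. This is a dry-run sketch that only prints the NSS console commands (POOL1 is a placeholder), since nsscon and NSSMU are interactive OES utilities; the /PoolDeactivate and /PoolActivate console commands are assumed here as the equivalents of the NSSMU F7 toggle.

```shell
# Dry-run of the MSAP block rebuild sequence (POOL1 is a placeholder).
# Commands are printed, not executed; run them inside nsscon on the server.
POOL="POOL1"
SEQ="nss /PoolDeactivate=${POOL}
nss /MSAPRebuild=${POOL}
nss /PoolActivate=${POOL}"
echo "$SEQ"
```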
15.13.5 Determining If MSAP Is Enabled or Disabled on NSS Pools
The nss /pools command displays the message Multi-Use Detect for NSS pools that have MSAP
enabled.
1 At the NSS Console (nsscon) prompt, enter
nss /pools
2 For each pool, review the messages to determine whether MSAP is enabled or disabled as
follows:
 MSAP Enabled: The Multi-Use Detect message is displayed for the pool.
 MSAP Disabled: The Multi-Use Detect message is not displayed for the pool.
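A scripted check can grep for the Multi-Use Detect marker in captured output. The sample text below is illustrative only (real nss /pools output formatting may differ), so treat this as a sketch:

```shell
# Sketch: infer MSAP state for POOL1 from captured `nss /pools` output.
# OUTPUT is illustrative sample text, not real command output.
OUTPUT="POOL1: Active, Multi-Use Detect
POOL2: Active"
if printf '%s\n' "$OUTPUT" | grep -q "POOL1.*Multi-Use Detect"; then
  STATE="enabled"
else
  STATE="disabled"
fi
echo "MSAP on POOL1: $STATE"
```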
15.13.6 Managing MSAP with XML or APIs
The _admin\manage_nss\pool\poolname\zlss\msap.xml file contains MSAP statistics for the pool.
One file exists for each pool.
The MSAP attribute is displayed in the Enabled Attributes (<enabledAttributes>) tag of the
poolinfo.xml management file.
For manage.cmd, the pool operation getPoolInfo returns the MSAP tag (<msap>) in the Supported
Attributes tag (<supportedAttributes>) and the Enabled Attributes tag (<enabledAttributes>).
For APIs, the pool feature zpool_feature_msap can be viewed and controlled using the zGetInfo and
zModifyInfo commands.
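Based on the tags named above, a script could test for the MSAP attribute in the pool’s management XML. The fragment below is an illustrative sample, not guaranteed to match the full real schema beyond the tags this section names:

```shell
# Sketch: check whether <msap> appears in the enabledAttributes tag.
# SAMPLE is an illustrative poolinfo.xml fragment, not real file content.
SAMPLE='<enabledAttributes><salvage/><msap/></enabledAttributes>'
if printf '%s' "$SAMPLE" | grep -q '<msap'; then
  FOUND="yes"
else
  FOUND="no"
fi
echo "MSAP enabled attribute present: $FOUND"
```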
15.13.7 Additional Information
For more information about the MSAP commands used in this section, see Section A.20, “Multiple
Server Activation Prevention (MSAP) Commands,” on page 387.
15.14 Updating eDirectory Pool Objects
On the Pools page, the Update eDirectory option adds or updates the NetIQ eDirectory pool object at the same context level as the server. NSS searches for the object. If the pool object exists, NSS prompts you with two options: delete and replace the existing object, or retain the existing object. If the pool object does not exist, NSS adds the object at the same context level where the server exists.
NOTE: Updating the eDirectory pool object is a recovery process; it is required only when the pool object is lost, corrupted, or deleted.
To update eDirectory from iManager, perform the following steps:
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select the pool you want to update.
Wait for the page to refresh. It displays the pool’s details and enables its management options.
4 Click Update eDirectory.
To update eDirectory from NSSMU, perform the following steps:
1 Go to NSSMU.
2 Select the pool you want to update.
3 Press F4 to perform an NDS update.
NOTE: An NDS update on a non-replica server might fail because of eDirectory synchronization delays on such servers. Retry the object update after eDirectory synchronization completes; the update then succeeds.
15.15 Updating eDirectory for a Shared Pool
To update eDirectory for a shared pool from iManager, perform the following steps:
1 Take the cluster resource offline.
1a In iManager, click Clusters > My Clusters, select the name link of the cluster to open the
Cluster Manager page. If the cluster does not appear in the My Clusters list, you can add it.
Click Add, browse for the cluster, then click OK.
1b Select the check box next to the resource that you want to take offline, then click Offline.
2 Delete the cluster resource.
2a In iManager, click Clusters > My Clusters, select the name link of the cluster, then click the
Cluster Options tab.
2b Select the check box next to the resource that you took offline in Step 1, then click Delete.
IMPORTANT: When you delete (or delete and replace) a Pool object in eDirectory, the attributes for that pool are removed from the User objects of any users that reference the pool. When the pool object is deleted, eDirectory must clean up all references to the deleted object.
3 Activate the Pool.
3a In iManager, click Storage > Pools.
3b Select the server where the pool exists.
3c Select the pool from the pools list that you want to activate.
3d Click Activate.
4 Update eDirectory.
4a Select the pool, then click Update eDirectory.
IMPORTANT: Updating a pool's eDirectory object might delete the pool's existing
volumes' eDirectory objects.
4b (Conditional) If the pool's existing volumes' eDirectory objects are not available, update the
eDirectory objects for existing volumes.
5 Cluster-enable the pool. For more information, see “Cluster-Enabling an Existing NSS Pool and Its Volumes” in the OES 11 SP2: Novell Cluster Services for Linux Administration Guide.
6 Bring the cluster resource online.
6a In iManager, click Clusters > My Clusters, select the name link of the cluster to open the
Cluster Manager page.
6b Select the check box next to the cluster resource that you want to bring online, then click
Online.
To update eDirectory for a shared pool from the command prompt, perform the following steps:
1 Take the cluster resource offline by running the command cluster offline <resource name>.
2 Delete the cluster resource from iManager. For more information, see Step 2 on page 180.
3 Activate the Pool.
3a Go to NSSMU.
3b Select the pool you want to activate.
3c Press F7.
4 Update eDirectory.
4a Press F4 to perform an NDS update on the pool.
IMPORTANT: Updating a pool's eDirectory object might delete the pool's existing
volumes' eDirectory objects.
4b (Conditional) If the pool's existing volumes' eDirectory objects are not available, update the
eDirectory objects for existing volumes.
5 Cluster-enable the pool. For more information, see “Cluster-Enabling an Existing NSS Pool and Its Volumes” in the OES 11 SP2: Novell Cluster Services for Linux Administration Guide.
6 Bring the cluster resource online by running the command cluster online <resource name>.
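The whole command-prompt procedure can be summarized as the sequence below. It is a dry-run sketch that only prints the commands (POOL1_SERVER is a placeholder resource name), with the iManager and NSSMU steps noted as comments:

```shell
# Dry-run summary of the shared-pool eDirectory update sequence.
# POOL1_SERVER is a placeholder cluster resource name.
RES="POOL1_SERVER"
SEQ="cluster offline ${RES}
# delete the cluster resource in iManager (Step 2)
# in NSSMU: activate the pool (F7), then NDS Update (F4) (Steps 3-4)
cluster online ${RES}"
echo "$SEQ"
```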
15.16 What’s Next
You can now create volumes in the NSS pools you created. For information, see Chapter 18,
“Managing NSS Volumes,” on page 209.
16 Verifying and Rebuilding NSS Pools and Volumes
This section describes how to verify and rebuild Novell Storage Services pools to restore the
consistency of a pool’s metadata structure, and thus, the metadata structure of its volumes.
 Section 16.1, “When to Use Verify and Rebuild,” on page 183
 Section 16.2, “Verifying and Rebuilding an NSS Pool and Its Volumes,” on page 185
 Section 16.3, “ReZIDing Volumes in an NSS Pool,” on page 189
16.1 When to Use Verify and Rebuild
NSS allows you to temporarily deactivate individual storage pools to fix volume problems instead of
bringing down the server. However, when you deactivate a storage pool, users do not have access to
any of the volumes in that pool. All of the volumes in the pool are part of the verify or rebuild
process.
WARNING: After rebuilding an NSS pool, previously deleted volumes cannot be salvaged.
The purpose of the Pool Verify and Pool Rebuild utilities is to make sure you have a valid metadata
structure for a pool. Use the utilities only when you have problems with the pool’s metadata
structure.
 Section 16.1.1, “What Verifying Does Not Do,” on page 183
 Section 16.1.2, “What Rebuilding Does Not Do,” on page 183
 Section 16.1.3, “Before Verifying a Pool,” on page 184
 Section 16.1.4, “Before Rebuilding a Pool,” on page 184
16.1.1 What Verifying Does Not Do
Verifying a pool does not fix any problems. It is a read-only assessment of the pool’s metadata
structure to identify the types of errors, the severity of errors, and in what volumes the errors occur.
16.1.2 What Rebuilding Does Not Do
Rebuilding a pool restores the consistency of the pool’s metadata structure. Rebuilding a pool does
not restore lost data and does not repair the data integrity of corrupted data.
Rebuilding a pool does not fix problems for the following:
 Journaling errors
 Hardware and media errors
 File system trustee assignments, trustee rights, and inherited rights filters
 File system attributes for files and directories
 Opportunistic locking
 Content of files
16.1.3 Before Verifying a Pool
Volume errors are typically transactions left unfinished during a system crash of some kind. Most
volume errors are fixed automatically during volume mount when NSS resolves the journaled errors.
If the pool can be activated, mount its volumes to allow the NSS journaling feature to repair any transactional errors that occurred during a system failure.
Afterwards, if problems persist, use diagnostic tools to rule out hardware problems as the cause.
If non-hardware errors persist, and if you have a viable backup to restore the pool to the last known
good state, restore the backup to recover the pool and restore the data. It is probably not necessary to
verify or rebuild the pool.
If non-hardware errors persist, and if you do not have a viable backup, use the following Pool Verify
utilities to identify any errors in the pool’s metadata:
 ravsui (verify option)
 ravview (reformats log files to human-readable format)
16.1.4 Before Rebuilding a Pool
Review the verification log to determine the type and severity of problems with the pool’s metadata.
If all of the following conditions exist, then you should rebuild the pool to restore its metadata
integrity.
 Errors were not corrected by mounting the volume, or you could not mount the volume.
 Errors were not caused by media or hardware problems, or they persisted after correcting any
media or hardware issues.
 You have no viable backup of the pool’s volumes to restore the pool to the last known good
state.
 The Pool Verify process reports errors in the physical integrity of any of the volumes’ metadata
that would definitely cause data corruption if no action is taken.
 More data will be lost from continued data corruption than will be lost from rebuilding the
pools now.
WARNING: You should rebuild a pool only as a last resort to restore the consistency of the pool’s
metadata. The rebuild repairs the metadata; it does not recover lost data or repair the integrity of the
data itself. Data loss occurs during a rebuild if the utility must prune leaves in the data structure to
restore metadata consistency.
If the Pool Verify process did not report errors, but you cannot create files or directories, you should
run rebuild with the ReZID option. For information, see Section 16.3, “ReZIDing Volumes in an NSS
Pool,” on page 189.
If you are not sure whether you can tolerate a system rebuild, take a pool snapshot and run the
rebuild against the pool snapshot instead. Then if the rebuild is acceptable, you can replace the pool
with the rebuilt snapshot. For information about pool snapshots, see Chapter 17, “Managing NSS
Pool Snapshots,” on page 193.
If necessary, rebuild the pool’s metadata by using the following utilities:
 ravsui (build option)
 ravview (reformats log files to human-readable format)
16.2 Verifying and Rebuilding an NSS Pool and Its Volumes
 Section 16.2.1, “Mounting the Volume to Repair Journaled Errors,” on page 185
 Section 16.2.2, “Ruling Out Hardware Causes,” on page 185
 Section 16.2.3, “Verifying the Pool to Identify Metadata Inconsistencies,” on page 186
 Section 16.2.4, “Reviewing Log Files for Metadata Consistency Errors,” on page 186
 Section 16.2.5, “Rebuilding NSS Pools to Repair Metadata Consistency,” on page 187
16.2.1 Mounting the Volume to Repair Journaled Errors
Volume errors are typically transactions left unfinished during a system crash of some kind. This
type of error is fixed automatically during volume mount by the NSS journaling feature.
If errors persist after you mount the volume, or if you cannot mount the volume, first rule out
hardware causes for the problems. For information, see Section 16.2.2, “Ruling Out Hardware
Causes,” on page 185.
16.2.2 Ruling Out Hardware Causes
If a volume cannot be mounted or problems persist after journaling errors are resolved, check the
hardware for faulty media or controller problems.
1 Make sure you have a good backup of the data.
2 Use the latest diagnostic software and utilities from the manufacturer of your hard drives and
controllers to troubleshoot the hard drives without destroying the data.
For example, verify the media integrity and that devices are operating correctly.
3 If necessary, repair the media or controllers.
If errors persist after you have ruled out hardware causes, and you do not have a viable backup to
restore to the last known good state, you should check the pool for metadata inconsistencies. For
information, see Section 16.2.3, “Verifying the Pool to Identify Metadata Inconsistencies,” on
page 186.
16.2.3 Verifying the Pool to Identify Metadata Inconsistencies
The verify process is a read-only assessment of the pool. The Pool Verify option searches the pool for
inconsistent data blocks or other errors in the file system’s metadata and reports data in the
verification log. For information on where to find the verification log and how to interpret any
reported errors, see Section 16.2.4, “Reviewing Log Files for Metadata Consistency Errors,” on
page 186.
1 Place the pool in maintenance mode.
1a At a terminal prompt, enter
nsscon
1b In nsscon, enter
nss /PoolMaintenance=poolname
2 Start the pool verify by entering the following at the terminal console prompt:
ravsui verify poolname
3 Use the ravview utility to read the logs.
For information about using ravview, see Section B.8, “ravview,” on page 419.
4 Do one of the following:
 If the log reports no errors with the pool’s metadata, it is safe to activate the pool and mount
the volumes.
 If the log reports no errors with the pool’s metadata, but you still cannot create files or
directories, run a Pool Rebuild with the ReZID option. For information, see Section 16.3,
“ReZIDing Volumes in an NSS Pool,” on page 189.
 If the log reports errors with the pool’s metadata, the affected volumes remain in
Maintenance mode. Decide whether to rebuild the pool based on the type of error and
potential outcomes. For information about rebuilding the pool, see Section 16.2.5,
“Rebuilding NSS Pools to Repair Metadata Consistency,” on page 187.
16.2.4 Reviewing Log Files for Metadata Consistency Errors
Make sure to check the error log whenever an NSS volume does not come up in active mode after a
verify or rebuild.
 “Log Files and On-Screen Display” on page 186
 “Warnings Reported” on page 187
 “Errors Reported” on page 187
 “No Errors Reported, but Cannot Create Files or Directories” on page 187
Log Files and On-Screen Display
Messages are written to the following logs:
Table 16-1 Location of Log Files for the NSS Pool Verify and Pool Rebuild Utilities

/var/opt/novell/log/nss/rav/filename.vbf
Log of the pool verify process using ravsui verify. This is the default location, but you can specify the location and the filename.

/var/opt/novell/log/nss/rav/filename.rtf
Log of the pool rebuild process using ravsui rebuild. If a volume has errors, the errors are displayed on the screen and written to this log file of errors and transactions. This log contains information about data that has been lost during a rebuild by the pruning of leaves in the data structure. Use the ravview utility to read the logs. For information, see Section B.8, “ravview,” on page 419.
Whenever you verify or rebuild a pool, the new information is appended at the end of the log file. If
you want to keep old log files intact, rename the log file or move it to another location before you
start the verify or rebuild process.
Warnings Reported
Warnings indicate that there are problems with the metadata, but that there is no threat of data
corruption. Restoring data from a backup tape or rebuilding the pool’s metadata is optional. However, rebuilding a pool’s metadata typically results in some data loss.
Errors Reported
Errors indicate that there are physical integrity problems with the pool’s metadata, and data
corruption will definitely occur, or it will continue to occur, if you continue to use the pool as it is.
If you decide to rebuild the pool, use the Pool Rebuild utility. For information, see Section 16.2.5,
“Rebuilding NSS Pools to Repair Metadata Consistency,” on page 187.
No Errors Reported, but Cannot Create Files or Directories
If the verify log does not report errors, but you continue to be unable to create files or directories on
volumes in the pool, it might be because the files’ ID numbers have exceeded the maximum size of the file numbering field. You might need to rebuild the pool with the ReZID option. For information
about how to decide if a ReZID is needed, see Section 16.3, “ReZIDing Volumes in an NSS Pool,” on
page 189.
16.2.5 Rebuilding NSS Pools to Repair Metadata Consistency
The purpose of a pool rebuild is to repair the metadata consistency of the file system. Rebuild uses
the existing leaves of an object tree to rebuild all the other trees in the system to restore visibility of
files and directories. It checks all blocks in the system. Afterwards, the NSS volume remains in
maintenance mode if there are still errors in the data structure; otherwise, it reverts to the active state.
WARNING: Data will be lost during the rebuild.
A pool rebuild depends on many variables, so it is difficult to estimate how long it might take. The
number of storage objects in a pool, such as volumes, directories, and files, is the primary
consideration in determining the rebuild time, not the size of the pool. This is because a pool rebuild
is reconstructing the metadata for the pool, not its data. For example, it would take longer to rebuild
the metadata for a 200 GB pool with many files than for a 1 TB pool with only a few files. Other key
variables are the number of processors, the speed of the processors, and the size of the memory
available in the server.
You do not need to bring down the server to rebuild a pool. NSS allows you to temporarily place an
individual storage pool in maintenance mode while you verify or rebuild it. While the pool is
deactivated, users do not have access to any of the volumes in that pool.
If you do not place the pool in maintenance mode before issuing the rebuild or verify commands, you
receive NSS Error 21726:
NSS error: PoolVerify results
Status: 21726
Name: zERR_RAV_STATE_MAINTENANCE_REQUIRED
Source: nXML.cpp[1289]
To rebuild the pool:
1 Depending on the nature of the reported errors, you might want to open a call with Novell
Support before you begin the rebuild process.
2 Place the pool in maintenance mode.
2a At a terminal prompt, enter
nsscon
2b In nsscon, enter
nss /PoolMaintenance=poolname
3 Start the pool rebuild. At the terminal console prompt, enter
ravsui rebuild poolname
See Section B.7, “ravsui,” on page 416 for options to set the pruning parameters for the rebuild.
Rebuilding can take several minutes to several hours, depending on the number of storage
objects in the pool.
4 Review the log on-screen or in the filename.rtf file to learn what data has been lost during
the rebuild.
For information, see Section 16.2.4, “Reviewing Log Files for Metadata Consistency Errors,” on
page 186.
5 Do one of the following:
 No Errors: If errors do not exist at the end of the rebuild, the pool’s volumes are available
for mounting.
 Errors: If errors still exist, the pool remains in the maintenance state. Repeat the pool verify
to determine the nature of the errors, then call Novell Support for assistance.
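The rebuild procedure above reduces to the following command sequence. This dry-run sketch only prints the commands (POOL1 is a placeholder); the nss commands run inside nsscon, and the final /PoolActivate console command is assumed here as the equivalent of activating the pool in NSSMU.

```shell
# Dry-run of the maintenance/verify/rebuild sequence (POOL1 is a placeholder).
# The nss commands run inside nsscon; ravsui runs at the server prompt.
POOL="POOL1"
SEQ="nss /PoolMaintenance=${POOL}
ravsui verify ${POOL}
ravsui rebuild ${POOL}
nss /PoolActivate=${POOL}"
echo "$SEQ"
```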
16.3 ReZIDing Volumes in an NSS Pool
 Section 16.3.1, “What Is a ZID?,” on page 189
 Section 16.3.2, “Understanding ReZID,” on page 189
 Section 16.3.3, “When to ReZID,” on page 189
 Section 16.3.4, “Viewing the Highest ZID for a Volume,” on page 190
 Section 16.3.5, “ReZIDing Volumes,” on page 190
16.3.1 What Is a ZID?
When a file is created, it is assigned a unique file number, called a ZID. In NSS, the maximum
number of file ZIDs available is a 64-bit number, which provides for up to 8 trillion (8E12) ZIDs, so
NSS was designed to not re-use ZIDs. However, NCP clients and other traditional applications can
only work with 32-bit IDs, which support up to 4 billion (4E9) ZIDs. NSS restricts ZIDs, and thus the
number of files, to the lower value.
It is possible for a file system to reach the 32-bit limit on the ZID number. For example, if many files and directories are regularly created, exist for a short time, and are then deleted, ZIDs are consumed at a high rate. Otherwise, the upper limit is rarely reached.
A file’s ZID is internal file system information. Under Linux, a file’s inode number and its ZID are the same; you can view a file’s ZID by using the command ls --inode. The highest ZID number in use for each volume is reported when you verify a pool.
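Because ls --inode is a standard GNU coreutils option, the lookup can be demonstrated on any Linux filesystem (on an NSS volume the inode number shown is the ZID, per the text above). This example uses a temporary file and cross-checks the value with stat:

```shell
# Show a file's inode number with `ls --inode` and cross-check it with stat.
# On an NSS volume this number is the file's ZID.
F=$(mktemp)
INODE_LS=$(ls --inode "$F" | awk '{print $1}')
INODE_STAT=$(stat -c %i "$F")
echo "inode of $F: $INODE_LS"
rm -f "$F"
```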
16.3.2 Understanding ReZID
The ReZID option for a pool rebuild changes the ZIDs for all the files on the volume, thus freeing
ZIDs so they are available for creating new files and directories. The reZID does not modify any other metadata on the volumes, nor does it modify any file’s content. The reZID is unrelated to any other rebuild activities that might occur.
IMPORTANT: The reZID step in a rebuild adds a third review of the pool and can increase the time
of a rebuild by 50%.
16.3.3 When to ReZID
After verifying a pool, the log reports the highest ZID (highestZID parameter) for each of the pool’s
volumes. If the highest ZID number is close to the 4 billion ZID limit (4E9), you should reZID the
volume. For NSS volumes, if the highest ZID is 2 billion (2E9) or greater, the reZID option occurs
automatically when you rebuild a pool. You can optionally specify a different ZID limit to trigger the reZID, or use the /noReZID option to stop the reZID from occurring with that rebuild.
There are no errors reported if ZIDs are nearing the upper limit of 4 billion for a volume. You might
get errors creating a file or directory that suggest a reZID needs to be done. For example:
 NDS database is locked.
 Server hangs at the end of load stage 1.
 Cannot copy to a volume.
NSS API calls return Error 20108 zERR_ZID_GREATER_THAN_32_BITS, which means that the ZID
numbering has exceeded the 4 billion (4E9) limit. NSS also sends a volume alert to the server console
that reZID needs to be done on a specified volume. The calling application gets only a generic error
when it attempts and fails to create the file.
After rebuilding a pool with the ReZID option, the errors you were getting when creating files and
directories no longer occur. You can also verify the pool again, then check the highest ZID number
reported for the pool’s volumes to know that each is well under the 4 billion ZIDs limit.
If you do not place the pool in maintenance mode before rebuilding the pool with the ReZID option,
you receive NSS Error 21726:
NSS error: PoolVerify results
Status: 21726
Name: zERR_RAV_STATE_MAINTENANCE_REQUIRED
Source: nXML.cpp[1289]
16.3.4 Viewing the Highest ZID for a Volume
To view the highest ZID per volume:
On verifying a pool, look in the log to find the highest ZID value that has been assigned for each of the pool’s volumes. Look at each value to see whether you should reZID the pool as part of the rebuild process.
OR
 Go to the file, _admin\Manage_NSS\Volume\SYS\VolumeInfo.xml and search for
nextAllocatableZid.
You should be aware of the rate at which you are consuming ZIDs by creating and deleting files. If
the highest ZID for a given volume reaches the limit of 4 billion (4E9), you cannot create new files on
the volume until you reZID the pool.
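The check above can be scripted. This is a minimal sketch that assumes the nextAllocatableZid value appears as a simple XML element; the sample file, its path, and the 2 billion trigger value are illustrative:

```shell
# Create a sample VolumeInfo.xml (a stand-in for the real
# _admin\Manage_NSS\Volume\SYS\VolumeInfo.xml file).
cat > /tmp/VolumeInfo.xml <<'EOF'
<volumeInfo><nextAllocatableZid>2147483650</nextAllocatableZid></volumeInfo>
EOF

# Extract the highest assigned ZID and compare it to the 2E9
# threshold at which a rebuild triggers an automatic reZID.
zid=$(sed -n 's/.*<nextAllocatableZid>\([0-9]*\)<.*/\1/p' /tmp/VolumeInfo.xml)
if [ "$zid" -ge 2000000000 ]; then
  echo "reZID recommended (highest ZID: $zid)"
fi
```

Repeating this per volume over time gives a rough picture of your ZID consumption rate.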
16.3.5
ReZIDing Volumes
1 Place the pool in maintenance mode.
1a At a terminal prompt, enter
nsscon
1b In nsscon, enter
nss /PoolMaintenance=poolname
2 If you have not already verified the pool, enter the following at a command prompt:
ravsui verify poolname
For information, see Section B.7, “ravsui,” on page 416.
3 Review any errors on-screen or in the filename.vbf file, located where you specified.
For information, see Section 16.2.4, “Reviewing Log Files for Metadata Consistency Errors,” on
page 186.
4 Rebuild the pool by entering the following at a command prompt:
ravsui --rezid=zid rebuild poolname
Replace zid with the threshold value that triggers a reZID of a volume. The default value is
0xefffffff. See Section B.7, “ravsui,” on page 416 for options to set the pruning parameters
for the rebuild.
For NSS, rebuild automatically causes a reZID of a volume if rebuild finds a ZID over 2 billion.
This checks all blocks in the system. Rebuilding can take several minutes to several hours,
depending on the number of objects in the pool. For all systems, reZID adds a third pass to the
rebuild, which increases the time to rebuild a volume by about 50%.
5 Review the log on-screen or in the filename.rbf file to learn what data has been lost during the
rebuild.
For information, see Section 16.2.4, “Reviewing Log Files for Metadata Consistency Errors,” on
page 186.
6 Do one of the following:
 Errors: If errors still exist, the pool remains in the maintenance state. Repeat the pool verify
to determine the nature of the errors, then contact Novell Support for assistance.
 No Errors: If errors do not exist, the pool’s volumes are mounted automatically.
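The command sequence in the steps above can be collected into a small helper. This is a dry-run sketch: the hypothetical function only prints the commands (the nss /PoolMaintenance command must be entered inside nsscon, and 0xefffffff is the default threshold noted in step 4); remove the echoes to run the commands on a real server.

```shell
# Print the verify/rebuild-with-reZID sequence for a pool.
# Dry run only: each command is echoed, not executed.
rezid_sequence() {
  pool="$1"
  echo "nss /PoolMaintenance=$pool"                 # step 1: run inside nsscon
  echo "ravsui verify $pool"                        # step 2: verify the pool
  echo "ravsui --rezid=0xefffffff rebuild $pool"    # step 4: rebuild and reZID
}

rezid_sequence POOL1
```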
17
Managing NSS Pool Snapshots
Novell Storage Services supports pool snapshots to improve backup and restore services. This section
describes the following:
 Section 17.1, “Understanding Pool Snapshots,” on page 193
 Section 17.2, “Guidelines for Using and Managing Pool Snapshots,” on page 195
 Section 17.3, “Creating a New Pool Snapshot,” on page 200
 Section 17.4, “Viewing a List of Snapshots for a Given Pool,” on page 202
 Section 17.5, “Viewing Pool Snapshot Details,” on page 203
 Section 17.6, “Modifying the Stored-On Location for Snapshots,” on page 204
 Section 17.7, “Onlining or Offlining a Pool Snapshot,” on page 204
 Section 17.8, “Viewing and Managing an Online Pool Snapshot,” on page 206
 Section 17.9, “Restoring Data from an Online Pool Snapshot,” on page 207
 Section 17.10, “Deleting a Pool Snapshot,” on page 207
17.1
Understanding Pool Snapshots
 Section 17.1.1, “How Snapshots Work,” on page 193
 Section 17.1.2, “Benefits of Using Snapshots,” on page 194
17.1.1
How Snapshots Work
A pool snapshot is a metadata copy of a storage data pool that preserves a point-in-time view of a
data pool. The pool snapshot function uses copy-on-write technology to enable the instantaneous
block-level snapshot of a pool, while requiring only a fraction of the storage space of the original data
pool. A pool snapshot does not save an exact copy of the original data pool. Instead, the snapshot is a
metadata-based copy that stores only the blocks of data that change subsequent to the instant of the
snapshot. The snapshot combines the metadata and stored block data with the unchanged data on
the original pool to provide a virtual image of an exact copy of the data at the instant the snapshot
was taken, plus any end-user modifications made to that snapshot.
Before the snapshot can occur, the snapshot function must quiesce the original pool by briefly halting
all data transaction activity when current transactions complete. It temporarily prevents new writes
to the pool and flushes the file system cache to make the pool current with existing writes. Any open
files are seen by the snapshot feature as being closed after these outstanding writes occur. Then, it
snapshots the now-stable pool, and allows data transaction activity to resume.
The quiescence process provides a transactionally consistent image at the instant the snapshot is
made. Because the snapshot is consistent, it is not necessary to check the consistency of the file
system or database if you activate the snapshot for access.
When the pool is active, each of its snapshots is active. For each write, the snapshot function
determines which blocks in the original pool will change. It temporarily suspends the write activity
while it copies the original block data to each snapshot's stored-on partition, then it writes the changed
data to the blocks on the original pool. This copy-on-write process keeps the snapshot metadata
consistent in time with the exact instant the snapshot was taken.
The snapshot grows as more blocks are changed on the original pool. Theoretically, if all of the
original blocks changed, each snapshot's stored-on partition would need to be as big as the original
pool. Typically, the disk space needed for each pool snapshot is 10 percent to 20 percent of the
original pool size. The amount of space required depends on the snapshot retention policy and the
turnover rate for data in the original pool. A snapshot should never be allowed to completely fill its
stored-on partition.
The number of snapshots is limited only by the kernel memory required to create the snapshot and
buffers. Each additional snapshot incrementally consumes more kernel memory and degrades I/O
performance. On Linux, each snapshot functions independently of the others. The copy-on-write
process must copy the block content to every snapshot before it can write the modified block data to
the pool.
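The copy-on-write behavior described above can be illustrated with ordinary files standing in for blocks. This is a simplified sketch (the directory layout and block file are invented for illustration), not how NSS actually stores snapshot data on disk:

```shell
# One file per "block": the pool holds current data, the snapshot
# store holds originals of blocks that changed after the snapshot.
mkdir -p /tmp/cowdemo/pool /tmp/cowdemo/snap1
echo "original data" > /tmp/cowdemo/pool/block7

# A write arrives for block7: copy the original content to the
# snapshot's stored-on area first, then overwrite the pool block.
cp /tmp/cowdemo/pool/block7 /tmp/cowdemo/snap1/block7
echo "new data" > /tmp/cowdemo/pool/block7

# The snapshot view reads changed blocks from its store and
# unchanged blocks directly from the pool.
cat /tmp/cowdemo/snap1/block7    # original data
```

Only changed blocks consume space in the snapshot store, which is why a snapshot typically needs a fraction of the original pool's size.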
17.1.2
Benefits of Using Snapshots
Pool snapshots save time and preserve data. They provide an instant copy of a pool that can help
expedite routine maintenance procedures to back up, archive, and protect data on that pool. Because
traditional methods of duplicating large amounts of data can be expensive and time-consuming, the
efficiency of snapshots can be an important benefit for your enterprise. You can make snapshots as
frequently as needed to meet your data availability and resilience requirements.
You can use pool snapshots in a variety of ways to enhance your current storage infrastructure,
including the following scenarios.
 “Supporting Backup Operations” on page 194
 “Archiving Data” on page 194
 “Restoring Data” on page 195
 “Re-Creating Operational and Development Environments” on page 195
 “Testing and Training” on page 195
Supporting Backup Operations
A pool snapshot facilitates non-disruptive backups because the snapshot becomes the source of the
backup. When you back up volumes in a pool from a pool snapshot, your backup can capture every
file in the pool, even those that are in use at the time. You can create, manage, and delete a pool
snapshot for any pool on your server.
As contrasted to a traditional, full-data copy of the pool, the metadata copy only takes a moment to
create and occurs transparently to the user. With traditional backups, applications might be shut
down throughout the backup routine. In comparison, the pool snapshot process makes the original
pool available with almost imperceptible delay.
Archiving Data
You can archive pool snapshots to maintain a point-in-time history of the changes made to the
original data pool.
Restoring Data
Pool snapshots can serve as a source for recovering a point-in-time version of a file. After you take a
snapshot, you can activate it at a later time to access the original pool’s data as it existed at the time of
the snapshot. Both the pool and its snapshots can be active and available concurrently. You access
data on the active pool snapshot just as you would do on any other pool, even while data is changing
on the original pool you snapped. To restore data, manually copy the old version of the file from the
online snapshot volume to the original volume. For information, see Section 17.7, “Onlining or
Offlining a Pool Snapshot,” on page 204.
Two common reasons to restore information are user error and application errors.
 A user might inadvertently make changes to a file that need to be reversed. Files can become
corrupted or deleted. The pool snapshot provides a quick and easy way to locate and reinstate
selected files.
 An application might be infected by a virus or be corrupted by other problems, causing the
application to store erroneous data throughout the pool. With a pool snapshot, you can easily
restore all or part of the original pool to a point in time before the virus or problem was known
to exist in the system.
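Restoring a file from an online snapshot volume is an ordinary copy between mounted volumes. In the sketch below, local directories stand in for the mount points (on OES Linux, NSS volumes are typically mounted under /media/nss/<VOLNAME>); the volume and file names are hypothetical:

```shell
# Stand-ins for the mounted snapshot volume and original volume.
mkdir -p /tmp/nssdemo/USERS_AJ_SV/projects /tmp/nssdemo/USERS_AJ/projects
echo "point-in-time version" > /tmp/nssdemo/USERS_AJ_SV/projects/report.odt
echo "corrupted version"     > /tmp/nssdemo/USERS_AJ/projects/report.odt

# Copy the old version from the snapshot volume over the original.
cp /tmp/nssdemo/USERS_AJ_SV/projects/report.odt /tmp/nssdemo/USERS_AJ/projects/
cat /tmp/nssdemo/USERS_AJ/projects/report.odt    # point-in-time version
```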
Re-Creating Operational and Development Environments
You can also write to the pool snapshot, just as you would any pool. You can work with and modify
the snapshot version of the data. For example, in a software development environment, engineers
might want to repeat builds and tests of data within a given snapshot.
Testing and Training
Snapshots can provide a convenient source for testing and training environments and for data
mining purposes.
17.2
Guidelines for Using and Managing Pool Snapshots
Use the guidelines in this section when planning your snapshot solution:
 Section 17.2.1, “Differences Between Snapshots on Linux and NetWare,” on page 196
 Section 17.2.2, “Guidelines for Creating a Pool Snapshot,” on page 197
 Section 17.2.3, “Guidelines for Creating Pool Snapshots of Clustered Pools,” on page 197
 Section 17.2.4, “Guidelines for Naming Pool Snapshots,” on page 197
 Section 17.2.5, “Guidelines for Choosing the Stored-On Location,” on page 198
 Section 17.2.6, “Guidelines for Maintaining the Stored-On Location,” on page 198
 Section 17.2.7, “Guidelines for Onlining Pool Snapshots,” on page 199
 Section 17.2.8, “Guidelines for Deleting Pool Snapshots,” on page 199
17.2.1
Differences Between Snapshots on Linux and NetWare
For NSS on Linux, snapshot volumes are not automatically mounted on reboot as they are in
NetWare. The snapshots are active and performing their snapshot functions, but they are not
mounted. If a snapshot was mounted when the server went down and you want the snapshot
mounted after reboot, you must mount it manually. Mounting a snapshot is necessary only if users
require access to the point-in-time version of the data.
In NSSMU, devices that contain NSS pool snapshots cannot be re-initialized. To initialize the device,
you must first delete all NSS pool snapshots on the device. For information about deleting snapshots,
see Section 17.10, “Deleting a Pool Snapshot,” on page 207.
Table 17-1 identifies the differences for using snapshots for NSS pools on Linux and NetWare:
Table 17-1 Comparison of NSS Pool Snapshots on Linux and NetWare

Rename a snapshot
  NetWare: Supported.
  Linux: Not supported. If you attempt to rename a snapshot, you get an eDirectory error because eDirectory objects are not automatically created for pool snapshots.

Snapshot stored-on location
  NetWare: An NSS pool is designated as the stored-on partition for a given original pool.
  Linux: An NLVM-managed Linux partition is designated as the stored-on partition for a given original pool.

Snapshots of cluster-enabled pools
  NetWare: Supported. The stored-on partition must be the same as the original pool.
  Linux: Not supported.

eDirectory object for the snapshot
  NetWare: eDirectory objects are automatically created for the snapshot pools and volumes.
  Linux: No eDirectory objects are created. In order to allow users to access data on an NSS pool snapshot, you must first activate and mount the snapshot as an NSS pool, then use the Update eDirectory option on the Storage > Pools page to create an eDirectory object for the snapshot pool or volume.

Taking new snapshots
  NetWare: The stored-on partition must be activated and mounted.
  Linux: The stored-on Linux partition must be mounted.

Deleting snapshots
  NetWare: Delete pool snapshots in a first-created, first-deleted order.
  Linux: Snapshots can be deleted in any sequence.

Shredding deleted snapshots
  NetWare: Supported.
  Linux: Not supported.
17.2.2
Guidelines for Creating a Pool Snapshot
Create a pool snapshot when you want to capture a point-in-time view of an active data pool. The
original pool must be active when you create the snapshot. For instructions on creating a pool
snapshot, see “Creating a New Pool Snapshot” on page 200.
17.2.3
Guidelines for Creating Pool Snapshots of Clustered Pools
Pool snapshots are not supported for clustered NSS pools on Linux.
17.2.4
Guidelines for Naming Pool Snapshots
You name a pool snapshot at the time you order the snapshot. Specify a unique name for each
snapshot. Because the name also serves as the snapshot’s poolname when active, the name you give it
should be unique among snapshots and among pools. The combination of the snapshot’s name and
time stamp when the snapshot was taken can help you identify the snapshot version you want to
manage.
 “Default Naming Scheme When Using iManager” on page 197
 “Alternate Naming Scheme” on page 198
 “Considerations for Naming” on page 198
Default Naming Scheme When Using iManager
When you create a snapshot in iManager, the snapshot name is by default a modified version of the
original pool’s name, which allows a simple identification of all snapshots for any given pool. NSS
adds a letter and number designator (_Sn) to the original pool name. The S indicates that it is a
snapshot. The n represents an incremental number (1 to 500) of snapshots taken for this pool.
IMPORTANT: When you create a snapshot in NSSMU, no default name is suggested. You can
optionally adopt the default naming scheme when you provide a name for the pool snapshot.
For example, pool snapshot names for POOLA might be POOLA_S1, POOLA_S2, and so on.
If you delete snapshots out of sequence, it is possible that the numbers could be reused. A simple sort
by snapshot name could be confusing. Make sure to verify the creation time stamp on the pool snapshot
when you work with pool snapshots that use the default naming scheme.
The snapshot name can be 2 to 16 characters in length and must adhere to the same character
conventions as for poolnames. If the poolname is too long to allow the snapshot identifier to be
appended, the poolname is truncated so that the length of the pool snapshot name is 16 characters.
For example, if the poolname is POOL_EUR_MANUF (14 characters), its name is truncated and then
the snapshot identifier is appended. The number of characters to be truncated depends on the
pool snapshot number, such as POOL_EUR_MANU_S1, POOL_EUR_MAN_S12, or
POOL_EUR_MA_S102.
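The truncation rule above can be sketched as a small shell function (the helper name is hypothetical; NSS applies this logic itself when it builds the default name):

```shell
# Build a default-style snapshot name: append _Sn and truncate the
# poolname so the result is at most 16 characters.
snapname() {
  suffix="_S$2"
  keep=$((16 - ${#suffix}))
  printf '%s%s\n' "$(printf '%s' "$1" | cut -c1-"$keep")" "$suffix"
}

snapname POOL_EUR_MANUF 1      # POOL_EUR_MANU_S1
snapname POOL_EUR_MANUF 12     # POOL_EUR_MAN_S12
snapname POOL_EUR_MANUF 102    # POOL_EUR_MA_S102
```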
If you bring a pool snapshot online, its volume names are automatically renamed to indicate that they
are snapshot volumes. By default, _SV is added to volume names to indicate the storage object is a
volume in a pool snapshot.
For example, if your original pool is named users, its default pool snapshot name is users_s1. If its
volumes are named users_aj and users_kz, the volumes in the snapshot pool are users_aj_sv and
users_kz_sv (or users_aj_sv001, users_kz_sv001, and so on, if those names are already in use).
Alternate Naming Scheme
You can optionally adopt your own naming convention for pools. If you create multiple snapshots of
a pool each day, consider using a logical naming convention that identifies the poolname and
numbers that allow sequential listing based on the order the snaps were taken. Of course, the time
stamp shows the exact time that the snapshot was taken, and you can always refer to it to be sure you
have the right snapshot.
Considerations for Naming
It is also important to consider the names of existing pools and pool snapshots and your naming
conventions when you name new data pools and volumes. You get errors if you attempt to
create pools with names that are in use by pool snapshots, and vice versa.
If a volume with the same name as a snapshot volume exists on the server when you mount a
snapshot pool, NSS automatically appends the snapshot volume name with an additional sequential
number (such as VOL1_SV001) or characters (such as VOL1_SV_SV) as it onlines the volume. This
makes the snapshot name unique with respect to the active volumes while the pool snapshot is
online.
If name conflicts occur, you might need to rename a pool or pool snapshot to a unique name in order
to bring the pool snapshot online.
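The conflict handling described above can be sketched as follows. The helper and the in-use name list are hypothetical, and only the sequential-number fallback (VOL1_SV001) is shown:

```shell
# Append _SV to a volume name; if that name is already in use,
# fall back to a sequential numeric suffix such as _SV001.
sv_name() {
  cand="${1}_SV"
  case " $2 " in
    *" $cand "*) cand="${1}_SV001" ;;
  esac
  echo "$cand"
}

sv_name VOL1 "VOL2 VOL3"        # VOL1_SV    (no conflict)
sv_name VOL1 "VOL1_SV VOL2"     # VOL1_SV001 (VOL1_SV already active)
```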
17.2.5
Guidelines for Choosing the Stored-On Location
Each snapshot that you create for a pool needs its own stored-on partition. Specify an NLVM-managed
device where you want to create the snapshot partition. Each snapshot partition must have
the same shared state as the original pool being snapped.
IMPORTANT: Creating snapshots of clustered NSS pools is not supported.
Snapshots are not supported on RAID1 devices.
When you create a pool’s snapshot, you select the device and specify the size to use. You cannot
increase the size of the partition later, so make sure you allow sufficient space.
Each snapshot grows as more blocks are changed on the original pool. Theoretically, if all of the
original blocks changed, each snapshot's stored-on partition would need to be as big as the original
pool. Typically, the disk space needed for each pool snapshot is 10 to 20 percent of the original pool
size. The amount of space required depends on the snapshot retention policy and the turnover rate
for data in the original pool. A snapshot should never be allowed to completely fill its stored-on
partition. If a snapshot's stored-on partition runs out of space, the copy-on-write blocks cannot be
written to it, and write errors occur on the original pool. Allow ample space for the snapshot to grow
over time when you specify a size for the snapshot’s stored-on partition.
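As a rough sizing aid, the 10 to 20 percent rule of thumb above translates to the following arithmetic (the 500000 MB pool size is illustrative; your retention policy and data turnover rate should drive the final number):

```shell
# Rule-of-thumb stored-on partition size: 10-20 percent of the
# original pool, sized generously because it cannot grow later.
pool_mb=500000                   # hypothetical 500 GB pool
low=$((pool_mb / 10))            # 10 percent -> 50000 MB
high=$((pool_mb / 5))            # 20 percent -> 100000 MB
echo "suggested stored-on partition: ${low}-${high} MB"
```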
17.2.6
Guidelines for Maintaining the Stored-On Location
The status of any given pool snapshot partition is closely tied to the operational status of the original
pool. You can deactivate the original pool, as needed, without adversely impacting the pool snapshot
or the status of the stored-on partition. If the original pool is deactive, there are no active transactions
for the pool snapshot function to process. For Linux, the stored-on partition hosts only a single
snapshot, so it can be safely deactivated after you deactivate its original pool. Make sure that you reactivate the stored-on partition first when bringing the original pool back into service.
In contrast, deactivating the stored-on partition first can cause the ungraceful deactivation of the
corresponding original pool.
IMPORTANT: The stored-on partition should remain active as long as it hosts any pool snapshots.
You can deactivate it safely after its original pool is deactivated, and for the duration of the pool’s
deactivation. Activate the stored-on partition before re-activating the original pool.
17.2.7
Guidelines for Onlining Pool Snapshots
For pool snapshots, Online and Offline are conditions related to the visibility of the pool snapshot to
users. The pool snapshot is offline by default. The snapshot functions work in the background
to capture any changes being made to the original pool, whether the snapshot is offline or online.
You activate a pool snapshot as a pool by bringing it online whenever you want to access the data on
it, such as for data retrieval and data backup. After the pool snapshot is online, it appears by its
snapshot name in the pool list. Treat it as you would any pool to activate and mount its volumes.
Because you are working with a snapshot and not the original pool and its volumes, other
management tasks are limited.
The names of volumes on the pool snapshot are a modified version of the volumes on the original
pool. By default, the characters _SV (snapshot volume) are appended to the volumes’ names. When
you deactivate the pool snapshot, the corresponding snapshot volumes are automatically
deactivated, and the pool snapshot is no longer listed in the pool list.
The Pool object and Volume object for a snapshot are not automatically created in NetIQ eDirectory
when you bring an NSS volume snapshot online. These objects are needed only if you want to verify
the NSS metadata information on a snapshot volume. For this case, you must create the objects
manually by using the Update eDirectory option to create the storage objects for the online snapshot
NSS pool and each of its volumes. For information, see Section 15.14, “Updating eDirectory Pool
Objects,” on page 179 and Section 18.5, “Updating eDirectory Volume Objects,” on page 219.
If you reboot the server while a pool snapshot is online, the snapshot might be online or offline after
the restart, depending on the platform. If the pool snapshot is online when the server goes down, the
pool snapshot is automatically set in the Offline state after a reboot.
For instructions, see Section 17.7, “Onlining or Offlining a Pool Snapshot,” on page 204.
17.2.8
Guidelines for Deleting Pool Snapshots
Delete pool snapshots as follows:
 Delete all pool snapshots before you move devices that contain the original pool from NetWare
to Linux. Different technologies are used for pool snapshots on Linux and NetWare, so existing
pool snapshots cannot be used on a different platform.
WARNING: Without first deleting the pool’s snapshots, you might not be able to access or
manage the original pool after you move the pool’s device cross-platform.
 Delete a pool snapshot when you no longer need it.
 Delete a pool snapshot when you need free space on the device where the stored-on partition
exists.
Make sure that the pool snapshot is not mounted as an online pool before you delete it.
Snapshots can be deleted in any sequence.
For instructions, see Section 17.10, “Deleting a Pool Snapshot,” on page 207.
17.3
Creating a New Pool Snapshot
 Section 17.3.1, “Prerequisites for Creating a Pool Snapshot,” on page 200
 Section 17.3.2, “Using iManager,” on page 200
 Section 17.3.3, “Using NSSMU,” on page 202
17.3.1
Prerequisites for Creating a Pool Snapshot
 The pool you want to snapshot must already exist and be active.
 Free space must be available on a device that you want to use as the stored-on partition.
 You cannot create snapshots of shared NSS pools. Pool snapshots are not supported for shared
pools.
17.3.2
Using iManager
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 In the server field on the Pools page, select a server to manage to view a list of pools.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 If the pool you want to snapshot is not active, select the pool from the Pools list, then click
Activate.
4 In the Pools list, select the active pool that you want to snapshot, then click Snapshot to go to the
Snapshots for: poolname page.
IMPORTANT: If the selected pool is an online pool snapshot, the Snapshot option is not available.
Select the original pool instead.
5 In the Snapshots menu, select New to open the New Snapshot for: <poolname> page.
6 On the New Snapshot page, specify the following:
 Name: Optionally modify the default snapshot name.
For information about pool snapshot names, see “Guidelines for Naming Pool Snapshots”
on page 197.
 Size: Type the amount of free space (in MB) to use for the stored-on partition.
 Stored-on Partition: From the list of active devices, select the device where you want to
create the stored-on partition.
Each snapshot is stored on a separate partition. The partition for the snapshot cannot be
expanded after it is created. If the pool is shared in a cluster, the snapshot feature is not
supported. For information, see “Guidelines for Choosing the Stored-On Location” on
page 198.
7 Click Finish to create the snapshot, or click Cancel to back out of the process.
8 After NSS creates the pool snapshot, NSS automatically opens to the Snapshots page so that you
can further manage the snapshot. The Snapshots list contains the newly created snapshot.
IMPORTANT: You might see an error message if the iManager connection to the server you are
managing times out before the snapshot is created. The pool snapshot creation should continue
on the managed server. If a timeout error occurs, navigate to the Snapshots page to view and
manage the snapshot.
By default, the snapshot is always Offline and Active. This means that the snapshot is
functioning, but that the pool snapshot is not mounted as an online pool.
17.3.3
Using NSSMU
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, enter
nssmu
3 If the pool that you want to snapshot is not currently mounted, mount it now by using your
normal mount methods.
4 In NSSMU, select Snapshot.
5 Press Insert to begin the create process.
6 Specify a name for the pool snapshot.
7 From the Pools list, select the pool you want to snapshot, then press Enter.
8 From the Devices list, select the device where you want to create a partition for the snapshot
stored-on partition, then press Enter.
Each snapshot is stored on a separate partition, not on a pool.
9 Specify how much space in MB to allocate to the partition.
The partition for the snapshot cannot be expanded after it is created.
10 Press Enter to create the snapshot.
The newly created snapshot appears in the Snapshots list.
17.4
Viewing a List of Snapshots for a Given Pool
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 In the server field on the Pools page, select a server to manage to view a list of pools.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 If the original pool for the snapshots you want to view is not active, select the pool from the Pools
list, then click Activate.
4 In the Pools list, select the active pool that has snapshots you want to manage, then click Snapshot
to go to the Snapshots for: poolname page.
IMPORTANT: If the selected pool is an online pool snapshot, the Snapshot option is not
available. Select the original pool instead.
The Snapshots report includes the snapshot name, status (Offline (default) or Online), state (Active
(default) or Deactive), the name and size of the stored-on partition, and the percent of space used
on the partition.
17.5
Viewing Pool Snapshot Details
In iManager, you can view the following details about a pool snapshot:
Table 17-2 Explanation of Pool Snapshot Details

Snapshot
  Status: Offline (default) or Online. Online pool snapshots appear in the Pools page, and their snapshot volumes appear in the Volumes page.
  State: Deactive (default) or Active. Active indicates that the pool snapshot operation is active. Disabled/Full means the snapshot is invalid because the stored-on partition is full. You should delete the invalid snapshot. Other snapshots continue working. Disabled/Full snapshots are ignored by the pool and do not impact the I/O performance for the pool.
  Size: The current size of the pool snapshot. The pool snapshot can continue to add metadata until it reaches the size allocated.
  Creation Date: The time stamp that shows the date and time the original pool was snapped.

Snapshot of
  Name: The name of the original pool that was snapped.
  Total space: The total space allocated for the original pool.
  Used space: The amount of space currently in use for the original pool.

Stored-On Location (partition)
  Name: The partition where the snapshot metadata resides.
  State: The current state of the stored-on partition, either Active (default) or Deactive.
  Total space: The total space allocated for the partition where the snapshot is stored.
  Used space: The amount of space currently in use for the partition where the snapshot is stored.
To view details of a pool snapshot:
1 In iManager, log on to the tree for the server you want to manage.
2 In Roles and Tasks, click Storage > Pools to open the Pools page.
3 Select the server that contains the original pool of the snapshot you want to view.
4 In the Pools list, select the original pool of the snapshot you want to view, then click Snapshot.
5 On the Snapshots page, select the snapshot, then click Actions > Details to open the Details dialog
box.
17.6
Modifying the Stored-On Location for Snapshots
After you specify a partition to use for a given snapshot, that partition cannot be changed or resized.
When you delete the snapshot, the stored-on partition is also deleted.
17.7
Onlining or Offlining a Pool Snapshot
You can mount a pool snapshot as a pool in order to make the point-in-time versions of the volumes
available. When it is mounted, the pool snapshot appears on the Pools page by its snapshot name. The
volumes on the pool appear on the Volumes page with an _SV appended to the name, such as
VOL1_SV.
For example, you might want to mount a snapshot as a pool to back up data from the snapshot
version of the volumes. Snapshot functions occur while the snapshot pool is active, whether the
snapshot is mounted or dismounted. Snapshots are typically not mounted for general user access
purposes.
 Section 17.7.1, “Using iManager to Online a Pool Snapshot,” on page 204
 Section 17.7.2, “Using iManager to Offline a Pool Snapshot,” on page 205
 Section 17.7.3, “Using NSSMU to Online or Offline a Pool Snapshot,” on page 205
17.7.1
Using iManager to Online a Pool Snapshot
You can use iManager to online or offline pool snapshots.
1 In iManager, log on to the tree for the server you want to manage.
2 In Roles and Tasks, click Storage > Pools to open the Pools page.
3 Select the server that contains the original pool of the snapshot you want to manage.
4 In the Pools list, select the original pool of the snapshot you want to manage, then click Snapshot
to open the Snapshots page.
5 Select one or more pool snapshots that you want to manage, then click Actions > Online.
This activates the selected pool snapshots and their volumes. The volumes are not mounted
automatically.
6 If you need to access a pool snapshot volume, mount the volume.
6a In Roles and Tasks, click Storage > Volumes to open the Volumes page.
6b If you need to verify NSS metadata information for the snapshot volume while it is online,
select the snapshot volume, then click Update eDirectory to create a Volume object for the
volume.
6c On the Volumes page, select the snapshot volume you want to manage, then click Mount.
17.7.2
Using iManager to Offline a Pool Snapshot
You can use iManager to offline pool snapshots.
1 In iManager, log on to the tree for the server you want to manage.
2 In Roles and Tasks, click Storage > Pools to open the Pools page.
3 Select the server that contains the original pool of the snapshot you want to manage.
4 If the snapshot volumes for the pool snapshot are currently mounted, go to the Volumes page,
select the mounted snapshot volumes, then click Dismount.
5 In the Pools list, select the original pool of the snapshot you want to manage, then click Snapshot
to open the Snapshots page.
All snapshot volumes are automatically offlined at this time.
6 Select one or more pool snapshots that you want to manage, then click Actions > Offline.
This makes the selected pool snapshots and all the volumes in them unavailable to users. It does
not destroy volumes in the snapshots, nor does it destroy the data contained in the volumes.
17.7.3
Using NSSMU to Online or Offline a Pool Snapshot
You can use NSSMU to online or offline the pool snapshot.
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, enter
nssmu
3 In NSSMU, select Snapshot.
4 Select the pool snapshot that you want to manage, then press F7 to mount or dismount the pool
snapshot.
The pool snapshot functions continue even though the snapshot is not mounted for general user
access.
17.8
Viewing and Managing an Online Pool Snapshot
When online, pool snapshots appear and function as a pool in the Pools list on the Pools page.
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage to view a list of its pools.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Pools list, select the online pool snapshot.
Wait for the page to refresh. It displays the pool snapshot’s details and enables its management
options. The pool name is followed by snapshot in parentheses to indicate that the selected pool
is a snapshot.
4 Click Storage > Volumes to go to the volumes page.
The snapshot volumes are listed in the Volumes list. They are deactive and unmounted by
default.
5 Optionally select the snapshot volume, then click Mount to mount the snapshot volume so that
you are able to access its data.
6 When you are done, go to the Pools page, select the snapshot pool, then click Offline to take the
pool snapshot offline and dismount its snapshot volumes.
17.9
Restoring Data from an Online Pool Snapshot
You can restore a point-in-time version of data from a pool snapshot by manually copying the data
from an online snapshot volume to another location.
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage to view a list of its pools.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 Online the pool snapshot that contains the version of the file you want to restore.
For information, see Section 17.7.1, “Using iManager to Online a Pool Snapshot,” on page 204.
4 Select Storage > Volumes to go to the volumes page.
5 Select the snapshot volume (such as VOL1_SV) from the Volumes list, then click Mount.
Snapshot volumes are mounted Read Only. You cannot modify the content of files on the
snapshot.
6 Use any normal method to copy the file of interest from the mounted snapshot volume to a new
location.
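The copy itself is an ordinary file operation. The sketch below wraps it in a small shell function; the snapshot and target paths in the usage comment are illustrative examples, not paths from your system.

```shell
# Restore one file from a mounted (read-only) snapshot volume to a
# writable location, preserving mode and timestamps. The NSS paths in
# the usage comment are hypothetical examples.
restore_from_snapshot() {
    src="$1"   # file on the mounted snapshot volume
    dst="$2"   # destination on the live volume (or elsewhere)
    mkdir -p "$(dirname "$dst")"
    cp -p "$src" "$dst"
}

# Example usage (hypothetical volume names):
#   restore_from_snapshot /media/nss/VOL1_SV/projects/report.odt \
#                         /media/nss/VOL1/projects/report.odt
```

Because the snapshot volume is mounted read-only, the copy direction is always snapshot to live volume; the snapshot itself is never modified.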
17.10
Deleting a Pool Snapshot
Use the Delete option to permanently remove one or more selected pool snapshots from the server.
Deleting a pool snapshot removes the ownership of the space it occupied, freeing the space for
reassignment. For guidelines, see “Guidelines for Deleting Pool Snapshots” on page 199.
IMPORTANT: Delete the oldest snapshot first in a first-created, first-deleted manner.
 Section 17.10.1, “Using iManager to Delete a Pool Snapshot,” on page 208
 Section 17.10.2, “Using NSSMU to Delete a Pool Snapshot,” on page 208
17.10.1
Using iManager to Delete a Pool Snapshot
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 In the server field on the Pools page, select a server to manage to view a list of its pools.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 If the pool that has snapshots you want to manage is not active, select the pool from the Pools list,
then click Activate.
4 If a pool snapshot for the selected pool is currently online, do the following to offline the pool
snapshot:
4a If the snapshot volumes for the pool snapshot are currently mounted, go to the Volumes
page, select the mounted snapshot volumes, then click Dismount.
4b On the Pools page, select the pool snapshot from the Pools list, then click Offline.
5 On the Pools page, select the active pool from the Pools list that has snapshots you want to
manage, then click Snapshot to go to the Snapshots for: <poolname> page.
IMPORTANT: If the selected pool is an online pool snapshot, the Snapshot option is not available.
Select the original pool instead.
6 In the Snapshots list, select one or more snapshots that you want to delete.
7 Click Delete, then click Yes to confirm the delete.
17.10.2
Using NSSMU to Delete a Pool Snapshot
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, enter
nssmu
3 If the pool that has snapshots you want to manage is not active, select Pools from the main menu,
select the pool, then press F7 to activate the pool.
4 In NSSMU, select Snapshots.
5 If the pool snapshot you want to delete is currently online, select the pool snapshot from the
Snapshots list, then press F7 to dismount the pool snapshot.
6 Select the pool snapshot that you want to delete from the Snapshots list, then press Delete to
delete the snapshot.
7 Press Y (Yes) to confirm the delete.
18
Managing NSS Volumes
Novell Storage Services uses storage volumes to logically organize your data. After creating NSS
pools, you can create any number of NSS volumes for each pool, depending on the physical space
available.
This section describes how to configure and manage NSS volumes by completing the following tasks:
 Section 18.1, “Understanding Volume Properties,” on page 209
 Section 18.2, “Guidelines for NSS Volumes,” on page 214
 Section 18.3, “Creating Unencrypted NSS Volumes,” on page 216
 Section 18.4, “Configuring Encrypted NSS Volumes with NSSMU,” on page 219
 Section 18.5, “Updating eDirectory Volume Objects,” on page 219
 Section 18.6, “Viewing the Details of an NSS Volume,” on page 219
 Section 18.7, “Viewing Properties of an NSS Volume,” on page 220
 Section 18.8, “Modifying Attributes of an NSS Volume,” on page 222
 Section 18.9, “Modifying the NSS Volume Size,” on page 222
 Section 18.10, “Configuring the Name Space for an NSS Volume,” on page 224
 Section 18.11, “Mounting NSS Volumes with Linux Commands,” on page 225
 Section 18.12, “Renaming an NSS Volume,” on page 226
 Section 18.13, “Renaming (Modifying) the Mount Point for an NSS Volume,” on page 227
 Section 18.14, “Activating and Deactivating an NSS Volume,” on page 228
 Section 18.15, “Mounting and Dismounting an NSS Volume,” on page 228
 Section 18.16, “Exporting and Importing NSS Volumes for NFS Access,” on page 230
 Section 18.17, “Deleting an NSS Volume,” on page 235
 Section 18.18, “Finding the Filename for a Given ZID,” on page 236
 Section 18.19, “Verifying or Rebuilding NSS Volumes,” on page 236
 Section 18.20, “Moving Volumes with DFS,” on page 236
 Section 18.21, “Splitting Volumes with DFS,” on page 236
 Section 18.22, “What’s Next,” on page 236
18.1
Understanding Volume Properties
 Section 18.1.1, “Volume Attributes,” on page 210
 Section 18.1.2, “Encryption Support,” on page 214
 Section 18.1.3, “Enhanced Hard Link Support,” on page 214
18.1.1
Volume Attributes
Figure 18-1 shows the volume attributes for an NSS volume that can be set in iManager when you
create volumes. An explanation of each attribute is provided below.
Figure 18-1 Volume Attributes
 “Backup” on page 211
 “Compression” on page 211
 “Data Shredding” on page 211
 “Directory Quotas” on page 211
 “Flush Files Immediately (NetWare)” on page 211
 “Migration (to Third-Party Near-Line or Offline Storage)” on page 211
 “Modified File List” on page 212
 “Salvage Files” on page 212
 “Snapshot (File-Level)” on page 212
 “User Space Quotas” on page 212
 “User-Level Transaction Model (NetWare Only)” on page 212
 “Mount Point” on page 213
 “Lookup Namespace” on page 213
Backup
The Backup attribute sets a flag to indicate to the backup software that the volume contains data you
want to back up. Disable this flag if the volume is empty or if backing up the data is unnecessary. This
backup flag is independent of the third-party backup system you use; your backup system might not
recognize this option, even if you select it. The Backup attribute is enabled by default.
Compression
The Compression attribute activates file compression in NSS volumes. Compression can be activated
at creation time only and this choice persists for the life of the volume. Data in the volume might be
stored normally or in compressed form, depending on how frequently it is used. Compression
parameters can be set at the server level to control compression behavior. For information, see
“Managing Compression on NSS Volumes” on page 263.
Data Shredding
The Data Shredding attribute allows you to electronically overwrite deleted and purged data areas to
prevent unauthorized users from using a disk editor to access purged files. You can specify the
number of times (1 to 7) to shred data. For information, see Section 20.3, “Using Data Shredding to
Prevent Access to Purged Files,” on page 257.
Directory Quotas
The Directory Quotas attribute enables you to assign a maximum quota of space that a directory can
consume. For information, see “Managing Space Quotas for Volumes, Directories, and Users” on
page 281.
Flush Files Immediately (NetWare)
The Flush Files Immediately attribute enables NSS to immediately write to disk all data in cache that
is pending writes to the file when you close the file. Otherwise, the data in cache must wait until the
next write cycle to be written to the disk, putting the information at risk for loss during the interim,
for example, if the server failed. For information, see “Enabling Flush Files Immediately to Write
Data to the Disk on Close” in the NW 6.5 SP8: NSS File System Administration Guide.
IMPORTANT: On Linux, a group write function controls how writes to disk occur. For information,
see Section 27.3, “Configuring or Tuning Group I/O,” on page 334.
Migration (to Third-Party Near-Line or Offline Storage)
The Migration attribute sets a flag that indicates to third-party software that this volume’s data can be
migrated to near-line or offline storage media after it is inactive for specified lengths of time. This
attribute requires third-party software to take advantage of the capability.
Modified File List
The Modified File List (MFL) attribute enables NSS to create a list of all files modified since the
previous backup. The log is available only through third-party software.
NOTE: This feature is seldom used since the introduction of the Event File List support. Consider
using the Event File List instead. For information, see “FileEvents.xml Definitions” (http://developer.novell.com/documentation/vfs/vfs__enu/data/ak7gh2x.html) in NDK: Virtual File Services
(http://developer.novell.com/documentation/vfs/vfs__enu/data/bktitle.html).
Salvage Files
The Salvage Files attribute enables deleted files to remain on the volume until the Purge Delay time
expires or until space is needed on the volume for other data. Until the Purge Delay time expires, the
Salvage feature tracks the deleted files and allows the deleted files to be salvaged and restored. If
space is needed, the oldest deleted files are purged to clear space. Salvage is enabled by default.
If the Salvage Files attribute is disabled, deleted files are purged immediately on deletion.
IMPORTANT: The Salvage Files attribute does not affect whether deleted volumes can be salvaged
or purged. Salvage for deleted volumes is determined at the server level with the nss /ImmediatePurgeOfDeletedFiles=<on | off> setting. For more information, see Section 23.2.2,
“Setting the Immediate Purge of Deleted Files for All NSS Volumes,” on page 298.
For information, see Chapter 23, “Salvaging and Purging Deleted Volumes, Directories, and Files,”
on page 295.
Snapshot (File-Level)
The File-level Snapshot attribute enables a backup utility to capture the last closed version of a file
that is open at the time a backup is in progress. After setting this attribute, you must manually
deactivate the volume and then activate it again so that NSS can set up the virtual volume for the
metadata about file snapshots.
If the File Snapshot attribute is enabled, Novell Storage Management Services (SMS) saves the
snapshot version of the file to backup media if a file is in use when the backup occurs.
IMPORTANT: Not all third-party backup software can take advantage of the file snapshot attribute,
even if you set it.
User Space Quotas
The User Space Quotas (user space restrictions) attribute enables you to assign a maximum quota of
space that a user’s data can consume across all directories in the volume.
For information, see “Managing Space Quotas for Volumes, Directories, and Users” on page 281.
User-Level Transaction Model (NetWare Only)
The User-Level Transaction mode enables the Transaction Tracking System (TTS) function for NSS
volumes. TTS logs changes made to a file’s contents, and protects database applications by backing out
transactions that are incomplete because of a system failure.
Mount Point
Specify the mount point for the NSS volume, such as /media/nss/VOLA.
The default mount path for NSS volumes is /media/nss/volumename, where volumename is the
name of the volume. You can optionally specify another path as the mount point.
Allow Mount Point to Be Renamed: Select this option if you want to allow the mount point to be
renamed if the volume is renamed.
This feature works only if the volume is mounted in its default location (/media/nss/volumename).
For example, suppose volume VOL1 uses the default mount point /media/nss/VOL1, and the Allow
Mount Point to Be Renamed option is selected for it. If you rename VOL1 to MYVOL, the mount point
automatically becomes /media/nss/MYVOL.
Lookup Namespace
NSS provides multiple name spaces for the volume: Long, UNIX, DOS, and Macintosh. The Lookup
Namespace attribute sets the primary name space to use when you mount the volume, but all name
spaces are available for use by various applications.
For NSS volumes, the Long name space is highly recommended because names on NSS are case
insensitive by default. The UNIX name space supports case-sensitive naming.
For OES 2 SP1 and later, Long is also the default name space for NSS volumes. Using the Long name
space as primary improves performance over using the UNIX name space, especially if you expect to
store millions of files on the volume.
NOTE: In OES 2 and earlier, UNIX was the default name space for mounting NSS volumes on Linux.
NCP tools require that Long or UNIX be set as the primary name space. With DOS or Mac set as
the primary name space, you cannot view or manage the volume from Novell Remote Manager, and
users are unable to map to the volume using NCP clients. If you use the Long or UNIX name space,
the DOS and Mac name spaces are still available, but they are not the primary.
The UNIX name space supports some special characters that are not allowed in the Long name space,
such as characters 0x01 through 0x07 and 0x10 through 0x1f. If you need to use these special
characters in filenames, choose UNIX as the default name space.
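To illustrate, the following sketch builds a filename containing the 0x01 control character with printf. It runs on any POSIX file system; on an NSS volume, such a name is valid only when the UNIX name space is in use.

```shell
# Create a filename containing control character 0x01 (allowed in the
# UNIX name space, rejected by the Long name space). Demonstrated here
# on an ordinary POSIX file system.
create_ctrl_name() {
    dir="$1"
    name=$(printf 'report\001draft')   # embeds the 0x01 byte in the name
    : > "$dir/$name"                   # create an empty file with that name
    ls "$dir" | wc -l                  # count the entries created
}
```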
If you change the name space for an existing shared volume by using NSSMU or the Storage plug-in
for iManager, you must modify the load script for the pool cluster resource to add the name space to
the ncpcon mount command for the volume. Otherwise, the cluster assumes the default name space
for mounting the volume. You can do this by using the /opt=ns=<long|unix|dos|mac> switch in
the ncpcon mount command.
For example, to specify the LONG name space, add the /opt=ns=long switch as follows:
ncpcon mount /opt=ns=long <VOLUMENAME>=<VOLUMEID>
For example, to specify the UNIX name space, add the /opt=ns=unix switch as follows:
ncpcon mount /opt=ns=unix <VOLUMENAME>=<VOLUMEID>
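For a cluster-enabled pool, the switch belongs in the resource load script. The fragment below is a sketch of a typical Novell Cluster Services load script; the pool name POOL1, volume name VOL1, and volume ID 254 are placeholders, not values from your cluster.

```shell
#!/bin/bash
# Sketch of a pool cluster resource load script (POOL1, VOL1, and the
# volume ID 254 are placeholders). The /opt=ns= switch forces the name
# space used when the cluster mounts the volume; without it, the
# cluster assumes the default name space.
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=POOL1
exit_on_error ncpcon mount /opt=ns=long VOL1=254
exit 0
```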
18.1.2
Encryption Support
Encryption provides password-protected activation of encrypted NSS volumes. Encryption can be
activated at creation time only, and this choice persists for the life of the volume. Encrypted volume
support is available on OES 2 Linux and later.
Encrypted volumes can be created only from NSSMU. Encrypted volumes require special handling
on the first activation after startup, but all attributes are available for encrypted volumes and are
managed the same as for unencrypted volumes. For information about creating and activating
encrypted volumes, see “Managing Encrypted NSS Volumes” on page 239.
18.1.3
Enhanced Hard Link Support
Enhanced hard link support for an NSS volume allows users to create multiple names for a single,
existing file object in the same directory or in multiple directories in the same NSS volume. NSS
supports zero to 65,535 hard links per file on NSS volumes.
You must upgrade the media format before you can set the Hard Links attribute and create new hard
links on a volume. For information about upgrading the media format, see Chapter 4, “Upgrading
the NSS Media Format,” on page 41.
After the media upgrade for enhanced hard links support, the Hard Links attribute must be enabled
or disabled by using commands. The attribute cannot be enabled or disabled in NSSMU or in
iManager. For information about enabling Hard Link support for a volume, see Section 24.3,
“Enabling or Disabling the Hard Links Attribute,” on page 311.
After the volume has been enabled for enhanced hard links, you can create hard links. For
information about creating and managing hard links, see Chapter 24, “Managing Hard Links,” on
page 307.
Beginning in OES 2 SP1, Novell Storage Management Services supports the backup and restore of
hard links on NSS volumes.
Hard links are lost when you use the Move Volume or Split Volume features of Distributed File
Services.
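Once a volume is enabled for hard links, additional names are created with the standard ln command. The sketch below shows the mechanics on an ordinary POSIX file system; on NSS the same commands apply, assuming the media upgrade and the Hard Links attribute described above.

```shell
# Create a second name (hard link) for an existing file and report the
# resulting link count. Works on any POSIX file system; on an NSS
# volume the Hard Links attribute must be enabled first.
make_hard_link() {
    target="$1"
    link="$2"
    ln "$target" "$link"     # both names now refer to one file object
    stat -c %h "$target"     # print the file's hard link count
}
```

After the link is created, both names refer to the same file object, so changes made through one name are visible through the other.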
18.2
Guidelines for NSS Volumes
 Section 18.2.1, “Guidelines for Sizing Volumes,” on page 214
 Section 18.2.2, “Guidelines for Name Spaces,” on page 215
 Section 18.2.3, “Guidelines for NSS Volumes in a Cluster,” on page 215
 Section 18.2.4, “Guidelines for NSS Volumes in a Mixed-Node Cluster,” on page 215
18.2.1
Guidelines for Sizing Volumes
NSS volumes are logical storage media that acquire space from pools of storage. When you create a
logical volume, you can either assign it a fixed quota as the maximum size, or allow it to expand to
the pool size. To grow a volume, you might need to add new segments to grow the pool first, up to
the maximum pool size of 8 TB.
If a pool contains multiple volumes, the cumulative administrative maximum sizes of all volumes
can exceed the pool’s maximum size by overbooking, although real total size is bound by physical
limitations. Because space is allocated to volumes as needed, a volume might not reach its quota. As
the overbooked volumes consume the available physical space, you need to add more disk space to
the pool to accommodate the growth, or consider moving or splitting volumes to move data to other
pools.
For example, suppose you have an 800 MB storage pool with eight volumes set at 100 MB each. The
administrative size equals the physical limits. To overbook the pool, you can add volumes, set one or
more of the volumes to expand to the pool size, or increase the size of existing volumes, with the
understanding that these are administrative maximum sizes, not physical sizes.
Because volume sizes can be overbooked in a pool, NSS automatically considers what space is
remaining in a pool in order to report the maximum size that is currently possible for the volume. In
addition to other volumes that can consume space in a pool, NSS snapshots and third-party
snapshots can consume space that might not be reported in all of the management tools that report
space. NSS reports the total space possible for the volume and the amount of space used by the
volume so that tools can properly calculate the maximum free space available.
18.2.2
Guidelines for Name Spaces
NSS recognizes DOS, Macintosh, UNIX, and Long name spaces. Volume names, directory names,
and filenames in NSS are case insensitive. This differs from Linux POSIX file systems, which are case
sensitive by default. For information, see “Lookup Namespace” on page 213.
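The difference is easy to see from the Linux side. The sketch below creates two names that differ only in case on a POSIX file system, where they coexist as separate files; on an NSS volume using the Long name space, the second name would resolve to the same file.

```shell
# On a case-sensitive POSIX file system, README and readme are two
# distinct files. On NSS (Long name space) names are case insensitive,
# so only one of them could exist. The POSIX side is shown here.
case_demo() {
    dir="$1"
    echo upper > "$dir/README"
    echo lower > "$dir/readme"
    ls "$dir" | wc -l        # number of directory entries created
}
```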
18.2.3
Guidelines for NSS Volumes in a Cluster
You must create at least one shared volume in a cluster-enabled pool. Typically, all volumes are
created when you initially set up the cluster resource and before you need to cluster migrate or fail
over the resource to other servers in the cluster.
It is recommended that the Server, Pool, Volume, Cluster Resource, and Cluster objects reside in the
same context (such as ou=ncs,o=novell).
If the objects are in different contexts, you might receive an eDirectory error when you attempt to
modify the pool, create or modify the volumes, home directories, Distributed File Services junctions,
or any other elements that are managed using eDirectory objects. To resolve the problem, you must
cluster migrate the pool cluster resource back to the node where the pool was created in order to
perform those management tasks.
18.2.4
Guidelines for NSS Volumes in a Mixed-Node Cluster
In a clustered storage area network with Novell Cluster Services, NSS volumes can fail over between
kernels, allowing for full data and file system feature preservation when migrating data to Linux.
However, you cannot SAN boot cross-platform.
For information about using NSS volumes cross-platform, see the following:
 Section 7.2, “Cross-Platform Issues for NSS Volumes,” on page 69
 Section 7.3, “Cross-Platform Issues for NSS Features,” on page 70
 Section 7.4, “Cross-Platform Issues for File Access,” on page 71
For information about clustering, see the following:
 OES 11 SP2: Novell Cluster Services for Linux Administration Guide
 NW6.5 SP8: Novell Cluster Services 1.8.5 Administration Guide
18.3
Creating Unencrypted NSS Volumes
This section describes how to create an unencrypted volume with iManager. Encrypted volumes can
be created only in NSSMU. For information on creating encrypted volumes, see “Managing
Encrypted NSS Volumes” on page 239.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of volumes appears in the Volumes list, as illustrated in the following figure.
3 To create a new volume, click New.
This opens the New Volume Wizard to guide you through the process.
4 Specify a name for the new volume, then click Next.
If the name you provide is not unique, you receive an error message. For guidelines about
naming volumes, see Section 5.4, “Naming NSS Storage Objects,” on page 50.
5 Do one of the following to specify the pool to use:
 Select an existing pool from the list where you want the new volume to reside.
 If no pools exist, click New Pool, create a pool to use, then select the pool.
 If existing pools do not have sufficient space for the volume you want to create, click Cancel
to close the Wizard. You must add more segments of free space to the pool, then return to
the Volumes page to create the new volume.
 If no pools exist and no space is available to create one, click Cancel to close the Wizard. You
must add more devices to the server or free up space on existing pools, then return to the
Volumes page to create the new volume.
6 Specify the size of the volume:
 No Volume Quota: Select Allow Volume Quota to Grow to the Pool Size if you want the volume
to expand to the size of the pool. This is the default.
Pools can be overbooked; each volume can potentially grow to the size of the pool. NSS
allocates space as it is needed.
 Volume Quota: Deselect Allow Volume Quota to Grow to the Pool Size, then type a Volume
Quota size in MB for the volume if you want to limit the size of the volume.
7 Click Next.
8 On the Attribute Information page under the Attributes section, set the attributes for the new
volume you are creating. The Backup and Salvage attributes are selected by default.
For information about volume attributes, see Section 18.1, “Understanding Volume Properties,”
on page 209.
9 On the Attribute Information page under the On Creation section, set the following preferences:
 Activate. Activates logical volumes as soon as you create them.
 Mount. Mounts logical volumes as soon as you create them.
10 On the Attribute Information page under File Information, specify the following parameters:
 Mount Point: For a Linux server, specify the mount point for the NSS volume, such as /media/nss/VOLA.
The default mount path for NSS volumes is /media/nss/volumename, where volumename
is the name of the volume. You can optionally specify another path as the mount point.
 Allow Mount Point to Be Renamed: Select this option if you want to allow the mount
point to be renamed if the volume is renamed.
This feature works only if the volume is mounted in its default location (/media/nss/volumename).
 Lookup Name Space: Select the name space to use when you mount the volume. The name
spaces are UNIX, Long, DOS, or Macintosh. The default name space is Long.
The recommended setting is Long. This setting ensures that filenames are case insensitive. It also
improves performance over using UNIX, especially if you expect to store millions of files on the
volume.
11 Click Finish.
12 If you enabled the Directory Quotas attribute, restart NCP2NSS by entering at a terminal prompt:
/etc/init.d/ncp2nss restart
18.4
Configuring Encrypted NSS Volumes with NSSMU
NSS Encrypted Volume Support allows you to create encrypted NSS volumes using NSSMU. For
information, see “Managing Encrypted NSS Volumes” on page 239.
18.5
Updating eDirectory Volume Objects
In NetIQ eDirectory, each NSS volume is represented by a Volume object. Volume objects are leaf
objects that represent a physical or logical volume on the network.
The Volume object’s properties contain the following information:
 The server where the volume resides
 The volume name recorded when the volume was initialized on the server
 The volume’s owner (login username of the administrator who created it)
 Space use restrictions for users
 A description of the volume’s use
 Statistical information on disk space availability, block size, directory entries, name space
support, and so on.
Usually, NSS creates the NetIQ eDirectory Volume object when you create the volume, and it updates
the properties of the volume as needed. The Update eDirectory option on the Volumes page allows you
to add or replace a Volume object for a selected volume at the same context level as the server.
IMPORTANT: When you delete (or delete and replace) a Volume object in eDirectory, the home
directory attribute is removed in the User objects for any users that reference that Volume. The home
directory attribute points to a particular Volume object. When that Volume object is deleted,
eDirectory needs to clean up all references to the object being deleted.
When you select Update eDirectory, NSS searches for the object.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select the volume you want to update.
Wait for the page to refresh and display the details.
4 Click Update eDirectory.
Wait while NSS searches for the Volume object in the server context.
5 Do one of the following:
 If the Volume object does not exist, NSS adds the Volume object to the context level.
Confirm the addition.
 If the Volume object exists, NSS prompts you with two options: Delete and Replace the
existing object or Retain the existing object. Select one option and confirm your choice.
18.6
Viewing the Details of an NSS Volume
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select a volume that you want to manage.
When the page refreshes, the details for the volume appear in the Details area. The volume must
be mounted and active for the details to be available.
4 (Conditional) To activate the volume, select the volume, then click Activate.
18.7
Viewing Properties of an NSS Volume
After you set up and configure NSS volumes, you can view the properties, such as attribute settings,
volume statistics, and volume usage.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select a volume that you want to manage.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. It opens by default to
the Attributes tab.
Use the Attributes page to view the volume’s attribute configuration, the volume quota, and the
volume mount point. For information about modifying attributes, see Section 18.8, “Modifying
Attributes of an NSS Volume,” on page 222.
5 Click the Statistics tab to view the current space usage statistics for the selected volume.
If the Salvage attribute is enabled, values are displayed for the salvage parameters. The low and
high watermark displays the default settings for the pool-level watermarks for the pool where
the volume resides. For information about managing salvage parameters, see Chapter 23,
“Salvaging and Purging Deleted Volumes, Directories, and Files,” on page 295.
If the Compression attribute is enabled, statistics are displayed for the compression data. For
information about configuring compression parameters, see Chapter 21, “Managing
Compression on NSS Volumes,” on page 263.
6 Click the Quota Usage tab to view the volume and pool space usage for the selected volume.
18.8
Modifying Attributes of an NSS Volume
After you set up and configure NSS volumes, you can modify most of the attribute settings. The
Encrypted Volume Support attribute and the Compression attribute can be set only at the time the
volume is created. If you try to modify those settings, iManager or NSSMU returns an error message.
You can also specify a Volume Quota or modify the mount point.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select a volume that you want to manage.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. Use the Attributes page
to view or modify the attributes for the selected volume.
5 Do one or more of the following:
 Select or deselect a modifiable attribute, then click Apply.
The Encryption and Compression attributes can be set only at the time the volume is
created. If you try to modify those settings, iManager returns an error message.
For information about attributes, see Section 18.1, “Understanding Volume Properties,” on
page 209.
 Specify a volume quota, then click Apply.
 Specify the default Lookup Namespace to use when mounting the volume, then click Apply.
The next time the volume is mounted, this will be the name space used. The default name
space is Long.
 Specify a new Mount Point for your volume, then click Apply. For example:
/media/nss/VOL1
6 If you enabled or disabled the Directory Quotas attribute, restart NCP2NSS by entering at a
terminal prompt:
/etc/init.d/ncp2nss restart
For information about setting quotas after you have enabled the Directory Quotas attribute or User
Space Quotas attribute, see “Managing Space Quotas for Volumes, Directories, and Users” on
page 281.
18.9 Modifying the NSS Volume Size
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select the volume that you want to manage.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. It opens to the Attributes
tab.
5 In the Quota field, do one of the following:
 No Quota: Select Allow Volume Quota to Grow to the Pool Size. NSS pools allow overbooking
so the administrative sum of all volumes’ quotas in a pool can exceed the physical pool
quota.
 Quota: Deselect Allow Volume Quota to Grow to the Pool Size, then specify the maximum size
to which you want to allow the volume to grow. The quota cannot exceed the pool size.
If you set a quota that is less than the volume’s current size, no additional files can be
saved to the volume until you purge files to make room on the volume.
6 Click Apply.
7 Click the Quota Usage tab to view the volume and pool space usage for the selected volume and
to verify the new setting.
18.10 Configuring the Name Space for an NSS Volume
NSS supports the Long, DOS, UNIX, and Macintosh name spaces. By default, names on the NSS file
system are case insensitive, which is supported by the Long name space. The Long name space is the
default setting used when mounting NSS volumes. In order to mount a volume with a different name
space, you must specify the name space explicitly in the mount command, or you can specify the
name space to use as a property of the NSS volume.
On Linux POSIX file systems, the UNIX name space is typically used. If your volume contains large
directories with millions of files, using the default UNIX name space on NSS volumes can cause
volumes to mount very slowly. Using the Long name space allows the NSS volume to mount
normally. Unless you need to support case sensitive filenames, we strongly recommend using the
Long name space.
The preferred name space can be set when you create the volume and set its attributes, or at any time
by modifying the Lookup Namespace attribute in the volume’s properties.
You can also mount the volume by specifying the name space to use as an option of the mount
command. For instructions, see Section 18.11, “Mounting NSS Volumes with Linux Commands,” on
page 225.
To view or modify the Lookup Namespace attribute for the NSS volume:
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage to view a list of its volumes.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select the volume that you want to manage.
4 Click Properties to view the volume’s properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage.
5 On the Attributes page, view the current setting of the Lookup Namespace for the selected volume.
6 On the Attributes page, optionally modify the Lookup Namespace to use by selecting the radio
button next to it.
 Long (recommended, default)
 DOS
 UNIX
 Mac
This is the new value that is applied automatically whenever you mount the volume.
7 Click Apply to save your changes.
8 On the Volumes page, click Dismount to unmount the volume.
Wait until the volume unmounts gracefully before continuing.
9 On the Volumes page, click Mount to mount the volume with the new name space.
10 If you change the name space for an existing shared volume by using NSSMU or the NSS plug-in
for iManager, you must modify the load script for the pool cluster resource to add the name
space to the ncpcon mount command for the volume. Otherwise, the cluster assumes the default
name space for mounting the volume. You can do this by using the /opt=ns=<long|unix|dos|mac>
switch in the ncpcon mount command. After you modify the load script, you must take the pool
cluster resource offline, then bring it online to apply the new name space setting for the
volume.
For example, to specify the LONG name space, add the /opt=ns=long switch as follows:
ncpcon mount /opt=ns=long <VOLUMENAME>=<VOLUMEID>
For example, to specify the UNIX name space, add the /opt=ns=unix switch as follows:
ncpcon mount /opt=ns=unix <VOLUMENAME>=<VOLUMEID>
For more information, see “Configuring a Load Script for the Shared NSS Pool” in the OES 11
SP2: Novell Cluster Services for Linux Administration Guide.
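For context, the ncpcon mount line lives inside the pool cluster resource load script. A load script that mounts the volume with an explicit name space might look like the following sketch; the pool name, volume name, volume ID, IP address, and NCP server name shown here are placeholder examples, not values from this guide:

```
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=POOL1
exit_on_error ncpcon mount /opt=ns=long VOL1=254
exit_on_error add_secondary_ipaddress 10.10.10.41
exit_on_error ncpcon bind --ncpservername=CLUSTER-POOL1-SERVER --ipaddress=10.10.10.41
exit 0
```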
18.11 Mounting NSS Volumes with Linux Commands
When mounting an NSS volume, specify the Long name space to make its directory names and
filenames case insensitive. Long is the default name space, and improves performance over using the
UNIX name space. If your volume contains large directories with millions of files, using the UNIX
name space can cause volumes to mount very slowly and can degrade performance.
Mounting an NSS Volume
To mount an NSS volume from a terminal command line, enter
mount -t nssvol volname mount_point -o name=volname,ns=long
For the -t option, nssvol is the file system type for NSS volumes. volname is the name of the NSS
volume.
The mount_point is the full path with the volume name where you want to mount the volume, such
as /media/nss/VOL1. The default mount location for NSS volumes is in the /media/nss/ directory.
For the -o option, specify the volume name and the primary name space type. Valid name space
options are dos, mac, long, or unix.
For example, to mount an NSS volume named VOL1 as case insensitive, enter the following at a
terminal prompt:
mount -t nssvol VOL1 /media/nss/VOL1 -o name=VOL1,ns=long
Mounting an NSS Volume Automatically on System Reboot
You can automatically mount the NSS volume on system reboot by adding a line to the /etc/fstab
file in the following general format:
label mount_point fstype mount_options dump_frequency fsck_order
For example:
VOL1 /media/nss/VOL1 nssvol noauto,rw,name=VOL1,ns=long 0 0
Using Samba with NSS Volumes
When using Samba, make sure to do the following:
 Mount the NSS volume as case insensitive by using the Long name space.
 Specify Case Sensitive=No when exporting Samba shares for NSS volumes with case
insensitive name spaces.
Edit the /etc/samba/smb.conf file to set the Case Sensitive parameter to No.
This improves performance for your NSS volumes, especially those with larger directories.
On Windows, to make the hidden attribute work on a Samba share on the NSS file system, add the
following parameters in the share configuration (smb.conf) file:
map hidden = no # No default setting
map system = no # Default No
map archive = no # Default Yes
store dos attributes = yes
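Putting these parameters together, a complete share definition for an NSS volume in /etc/samba/smb.conf might look like the following sketch. The share name and path are examples; adjust them for your volume:

```
[VOL1]
    path = /media/nss/VOL1
    read only = no
    case sensitive = no
    map hidden = no
    map system = no
    map archive = no
    store dos attributes = yes
```

After editing smb.conf, restart or reload the Samba service for the changes to take effect.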
18.12 Renaming an NSS Volume
You can rename NSS volumes. For example, you might want to change the name of a volume to
reflect the department or organization that uses it.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select a volume.
4 Click Rename.
The Rename Volume Wizard opens.
5 Specify the new name of the volume.
6 Click Finish.
After the page refreshes, the volume appears in the Volumes list with its new name.
18.13 Renaming (Modifying) the Mount Point for an NSS Volume
The default mount point for NSS volumes is /media/nss/volumename. You must enable the volume’s
Allow the Mount Point to Be Renamed option before the mount point can be renamed.
NOTE: Currently, renaming the mount point is not supported for clustered NSS volumes by using
NSSMU, NLVM, or iManager. For more information on renaming, see “Renaming the Mount Point
Path for a Shared NSS Volume (Using a Custom Mount Point for a Shared NSS Volume)” in the OES
11 SP2: Novell Cluster Services for Linux Administration Guide.
 Section 18.13.1, “Renaming the Mount Point for a New Volume,” on page 227
 Section 18.13.2, “Enabling the Mount Point for the NSS Volume to Be Renamed,” on page 227
 Section 18.13.3, “Renaming the Mount Point for an Existing NSS Volume,” on page 227
18.13.1 Renaming the Mount Point for a New Volume
The default mount point for NSS volumes is /media/nss/volumename. You can specify a different
mount point (that is, modify the volume’s directory path) as you create the volume if you create the
volume in iManager. Creating the volume in NSSMU (nssmu) does not allow the mount point to be
changed during the volume setup, but you can change it afterwards.
18.13.2 Enabling the Mount Point for the NSS Volume to Be Renamed
The Allow the Mount Point to Be Renamed option enables the NSS volume’s mount point to be
renamed. This option is disabled by default. Enable the option as you create the volume, or enable it
at any time for an existing volume by modifying the setting on the Attributes page (Storage > Volumes >
Properties > Attributes) in iManager.
1 In iManager, click Storage, then click Volumes.
2 Select the server you want to manage to view a list of its volumes.
3 From the Volumes list, select the volume, then click Properties to view the volume attributes.
4 On the Attributes page, select Allow the Mount Point to Be Renamed.
5 Click Apply to save the change.
18.13.3 Renaming the Mount Point for an Existing NSS Volume
Whenever you change the mount point for an existing NSS volume, you must also restart NetIQ
eDirectory to update the NetWare Core Protocol (NCP) Server cache. When an NSS volume is
created, the NCP Server gets the path to the volume and caches it, assuming that it never changes.
When you later run ncpcon and enter the volume command, it reports which volumes are still found
at their respective mount points. Only the volumes that are still valid as compared to the list in cache
are reported. Restarting eDirectory forces the NCP Server volume cache to update, so that the correct
path is stored for reporting volume status.
1 Use either iManager or nssmu to change the volume’s mount point.
The following instructions are for iManager.
1a In iManager, click Storage > Volumes.
1b Select the server you want to manage to view a list of its volumes.
1c From the Volumes list, select the volume, then click Properties to view the volume attributes.
1d If the Allow the Mount Point to be Renamed option is not selected, select it and click Apply.
NOTE: Select this option if you want to allow the mount point to be renamed if the volume
is renamed. This feature works only if the volume is mounted in its default location
(/media/nss/volumename).
1e In Mount Point, type the new mount point.
The default mount point for NSS volumes is /media/nss/volumename. The new path
should also include the volume name. For example:
/mnt/nss/volumes/volumename
1f Click Apply to save the change.
2 Open a terminal console on the server, then log in as the root user or equivalent.
3 Restart eDirectory by entering
/etc/init.d/ndsd restart
Restarting eDirectory causes the NCP Server’s volume cache to be updated.
18.14 Activating and Deactivating an NSS Volume
After you set up and configure NSS volumes, you can activate and deactivate volumes to make them
available to users and applications. To view details of a volume, it must be active.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select one or more volumes that you want to activate or deactivate.
4 Click Activate or Deactivate.
After the page refreshes, you can see that each volume’s state matches the state you specified. If
a selected volume is already in the specified state, no change occurs. The details of deactivated
volumes are not available.
18.15 Mounting and Dismounting an NSS Volume
After you set up and configure NSS volumes, you can mount and dismount volumes to make them
available to users and APIs. After you mount a volume, it is only available to APIs until you activate
it. Dismounting a volume makes it unavailable to users and to APIs.
 Section 18.15.1, “Dismounting an NSS Volume from the NCP Server,” on page 229
 Section 18.15.2, “Mounting or Dismounting an NSS Volume with iManager,” on page 229
 Section 18.15.3, “Mounting an Encrypted NSS Volume with NSSMU,” on page 229
 Section 18.15.4, “Dismounting an Encrypted NSS Volume with NSSMU,” on page 229
18.15.1 Dismounting an NSS Volume from the NCP Server
Before you can dismount an NSS volume, you must dismount the volume from NCP Server;
otherwise, the dismount function fails.
1 At the server prompt, open the NCP Console by entering
ncpcon
2 Dismount the volume from NCP.
The volume is no longer accessible or visible to NCP clients.
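For example, assuming a volume named VOL1 (a placeholder name), the two steps above look like this at a terminal prompt:

```
ncpcon
dismount VOL1
exit
```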
18.15.2 Mounting or Dismounting an NSS Volume with iManager
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select one or more volumes that you want to mount or dismount.
4 Click Mount or Dismount.
After the page refreshes, you can see that the volume’s state changed. If a selected volume is
already in the specified state, no change occurs. The details of dismounted volumes are not
available.
18.15.3 Mounting an Encrypted NSS Volume with NSSMU
You must mount an encrypted NSS volume from NSSMU the first time it is mounted after a system
reboot so that you can supply the encryption password. NSSMU allows you to mount only one
volume at a time so that you can enter its password.
1 In NSSMU, select Volumes.
2 In the Volumes list, select the encrypted volume that you want to mount.
3 Press F7 to mount the volume.
4 If you are prompted to enter the password, enter the password, then click OK.
You are prompted for the password on the first time the volume is mounted after a system
reboot.
18.15.4 Dismounting an Encrypted NSS Volume with NSSMU
1 In NSSMU, select Volumes.
2 In the Volumes list, select the encrypted volume that you want to dismount.
3 Press F7 to dismount the mounted volume.
18.16 Exporting and Importing NSS Volumes for NFS Access
NSS volumes on OES 11 SP2 and their directories are NFSv3 exportable and accessible from remote
systems. NFSv4 is not supported for NSS, but exports for other file systems using NFSv4 can coexist
with NSS exports using NFSv3.
 Section 18.16.1, “Understanding NFS Export and Mount Options,” on page 230
 Section 18.16.2, “Exporting NSS Volumes for NFSv3,” on page 232
 Section 18.16.3, “Importing NSS Volumes,” on page 234
18.16.1 Understanding NFS Export and Mount Options
 “Host Options” on page 230
 “Mount Options for Export via NFSv3” on page 230
 “Mount Options for Import via NFSv3” on page 231
 “Additional Information” on page 232
Host Options
The following table describes options for specifying which servers on the network can import the
NFS volume. For more information, see the exports(5) man page.
Table 18-1 Host Options for NFSv3 Export of NSS Volumes

Single host: Specify a single host by its fully qualified domain name or its IP address.

Netgroups: Specify NIS netgroups as @groupname, such as @trusted.

Wildcards: Specify an asterisk (*) to specify all hosts. You can use the wildcard characters
asterisk (*) and question mark (?) in server names to match multiple servers. For example,
proj*.example.com matches all hosts in the domain example.com that begin with proj.

IP networks: Specify all hosts on a network or subnetwork by specifying the IP address and
netmask pair as address/netmask. For example: 10.10.10.1/255.255.252.0.
Mount Options for Export via NFSv3
Table 18-2 describes mount options available for mounting NSS volumes for export via NFSv3. For
more information, see the exports(5) man page and the mount(8) man page.
Table 18-2 Mount Options for NFSv3 Export of NSS Volumes

rw: Mount the NSS file system with Read/Write (rw) access.

no_root_squash: Disable root squashing for the superuser with the No Root Squash
(no_root_squash) option. This allows root users on client computers to have root access on
the server. With the No Root Squash option, mount requests for root are not mapped to the
anonymous user (nobody). This option is needed for diskless clients.

NSS volumes are logical volumes. They are not directly mounted on devices, but are associated
with pools, which are mounted on devices. Because NSS volumes do not have a device directly
associated with them, NFS treats the volume like a diskless client, which makes the
no_root_squash option necessary when you mount NSS volumes.

sync: Specify the Sync (sync) option, which requires all file system writes to be committed to
disk before the request can be completed.

fsid=value: Exporting with the fsid option works around the fact that there is no device
associated with a logical volume. You must export the NSS volume or directory with the fsid
option set on it:

fsid=n

Replace n with an integer value greater than 0. The numbers do not need to be sequential; for
example, fsid=1 and fsid=10. Make sure to use a unique fsid number for each NSS volume or
directory you are exporting.

IMPORTANT: fsid=0 is reserved for NFSv4 as the pseudo root of the exported file system for
exported volumes on the Linux server.
Mount Options for Import via NFSv3
Table 18-3 describes mount options available for mounting NSS volumes for import via NFSv3. For
more information, see the mount(8) man page.
Table 18-3 Mount Options for NFSv3 Import of NSS Volumes

rw: Mount the NSS file system with Read/Write (rw) access.

sync: Specify the Sync (sync) option, which requires all file system writes to be committed to
disk before the request can be completed.

noatime: NSS also supports the optional use of the noatime option for importing and mounting
NSS volumes by using NFS. The noatime option disables the updating of the access time for files
so that reading a file does not update its inode access time (atime). For more information, see
Section A.22, “noatime and nodiratime Support for Linux open, mount, nfsmount, and /etc/fstab,”
on page 389.
Additional Information
When you use NFS to export or import NSS volumes, other supporting services are needed,
including DNS, NIS, and NFS. For information about configuring and managing these services, see
the following sections in the SLES 11 Administration Guide (http://www.suse.com/documentation/
sles11/book_sle_admin/?page=/documentation/sles11/book_sle_admin/data/
book_sle_admin_pre.html):
 “The Domain Name System” (http://www.suse.com/documentation/sles11/book_sle_admin/
?page=/documentation/sles11/book_sle_admin/data/cha_dns.html)
 “Using NIS” (http://www.suse.com/documentation/sles11/book_security/?page=/
documentation/sles11/book_security/data/cha_nis.html)
 “Sharing File Systems with NFS” (http://www.suse.com/documentation/sles11/book_sle_admin/
?page=/documentation/sles11/book_sle_admin/data/cha_nfs.html)
18.16.2 Exporting NSS Volumes for NFSv3
1 In a terminal console, log in as the root user.
2 In YaST, select Network Services, then select NFS Server to open the NFS Server Configuration page.
3 If NFS Server is not started and enabled, you must configure the NFS Server.
3a NFS Server: Select Start.
3b Firewall: Select Open Port in Firewall to allow access to the NFS service from remote
computers, then click Firewall Details to specify the network interfaces where you want to
open the port.
3c Enable NFSv4: Make sure that Enable NFSv4 is not selected if you are exporting only via
NFSv3.
IMPORTANT: NFSv4 is not supported for NSS, but exports for other file systems using
NFSv4 can coexist with NSS exports using NFSv3. If you enable NFSv4, make sure that you
enter the NSS directories for export with options that use non-zero settings for their FSIDs,
and do not bind the NSS directories to paths in the pseudo-root file system that you set up
for NFSv4 exports.
3d Enable GSS Security: To enable Kerberos secure access to the server, click Enable GSS
Security. A prerequisite is that Kerberos is installed in your domain and that both the
server and the clients are kerberized.
3e Click Next to continue to the Directories to Export page.
4 On the Directories to Export page, do the following for each NSS volume on the server that you
want to export via NFSv3.
4a Under Directories, click Add Directory to open a dialog box where you can configure the
settings for a volume.
4b Specify the NSS volume that you want to export by typing the Linux path for the volume, or
browse the Linux file system to locate and select the NSS volume, then click OK.
The default location of NSS volumes is /media/nss/volumename, such as /media/nss/VOL1.
4c In the Host Wildcard field, specify the servers where you want to be able to mount the NSS
volume via NFSv3.
A default asterisk (*) wildcard indicates all servers. You can specify a single host,
netgroups, wildcards, or IP networks. For information, see “Host Options” on page 230.
4d Enter the following required mount options:
rw,no_root_squash,sync,fsid=value
For NFSv3, make sure you do not include the fsid=0 or bind=/pseudo_rootdir/volumename
options. Not using these options allows the export to be processed as an NFSv3 export.
For example:
rw,no_root_squash,sync,fsid=1
Separate the options with commas and no spaces. For information, see “Mount Options
for Export via NFSv3” on page 230.
4e Click OK to save your settings and return to the Directories to Export page.
5 On the Directories to Export page, click Finish to apply the settings.
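The settings that YaST collects in these steps are written to the /etc/exports file. For example, an export of a volume VOL1 to all hosts with the required mount options might produce an entry like the following (the volume name and fsid value are examples only):

```
/media/nss/VOL1   *(rw,no_root_squash,sync,fsid=1)
```

If you edit /etc/exports directly instead of using YaST, run exportfs -ra as the root user to re-export the file systems with the new settings.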
18.16.3 Importing NSS Volumes
1 On the OES 11 SP2 server where you want to import the NSS volume via NFS, open YaST.
2 In YaST, select Network Services, then select NFS Client to open the NFS Client Configuration page.
3 Near the bottom of the page, select Open Port in Firewall to allow access to the NFS service from
remote computers, then click Firewall Details to specify the network interfaces where you want to
open the port.
4 Do the following for each NSS volume on remote servers that you want to import via NFS.
4a Click Add to open a dialog box where you can specify the information for the volume you
want to import.
4b In NFS Server Hostname, specify the remote server where the volume resides. Type the fully
distinguished name (such as servername.ou_context.o_context.com), or click Choose, select the
NFS server from a list of servers, then click OK.
4c In Remote File System, specify the path on the remote server where the volume resides. Type
the full path such as /media/nss/VOL1, or click Select to open the Exported Directories
dialog box, then select the path from the list of NFS exported directories on the selected
server, then click OK.
4d In Mount Point (local), specify the path on the server (the NFS Client location) where you
want to mount the remote volume, such as /mnt/nfs/volumename, or click Browse to locate
and select the location.
The Browse option allows you to create a new folder on the server for the target path.
4e Enter the following required mount options:
rw,sync
You can optionally specify the noatime option. For information, see Table 18-3 on page 231
and the mount(8) man page.
4f Click OK to save your settings and return to the NFS Client Configuration page.
The entry you just made should appear in the list.
4g When you are done adding volumes to be imported, continue with the next step.
5 On the NFS Client Configuration page, click Finish to apply the settings.
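YaST records the imported file system in the client’s /etc/fstab. Equivalently, you can add an entry there yourself; the server name and paths below are examples, and the vers=3 option forces an NFSv3 mount:

```
server1.example.com:/media/nss/VOL1  /mnt/nfs/VOL1  nfs  rw,sync,vers=3  0 0
```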
18.17 Deleting an NSS Volume
Deleting a volume removes the data in the volume and frees the space to be used by other volumes in
the same pool. When you delete a volume, it is salvageable until one of the following events occurs:
 Volume Purge Delay times out. The deleted volume is purged automatically. For information,
see Section 23.2.1, “Setting the Purge Delay for All Deleted Volumes,” on page 298.
 You manually purge the deleted volume. For information, see Section 23.4, “Viewing, Salvaging,
or Purging Deleted NSS Volumes in a Pool,” on page 301.
During the purge delay time, the deleted volume is salvageable, but the space belonging to the
deleted volume is not available to other volumes. When the purging process begins, the volume
is no longer salvageable.
If necessary, you can restore a deleted volume before it is purged. See Section 23.4, “Viewing,
Salvaging, or Purging Deleted NSS Volumes in a Pool,” on page 301.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select one or more volumes that you want to delete.
Wait for the page to refresh and make the Delete option available for the selected volume.
4 Click Delete.
5 Click Yes to confirm the deletion, or click No to cancel the deletion.
18.18 Finding the Filename for a Given ZID
You might get an error report that identifies the ZID for a file, but not the filename and path on the
volume. To find the associated filename and full path for the file on a given volume and name space,
use the /ZIDtoFilename option.
1 At the NSS Console (nsscon) prompt, enter the following command (all on the same line):
nss /ZIDtoFilename=ZIDnumber /ZIDNameSpace=namespace /ZIDVolumeName=volumename
Replace ZIDnumber with the ZID of the file. Replace namespace with the Long, UNIX, Macintosh,
or DOS name space to use for the search. Replace volumename with the name of the volume for
the search.
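For example, to look up the filename for a hypothetical ZID of 59106 on a volume named VOL1 by using the Long name space:

```
nsscon
nss /ZIDtoFilename=59106 /ZIDNameSpace=Long /ZIDVolumeName=VOL1
```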
18.19 Verifying or Rebuilding NSS Volumes
You cannot rebuild or verify an NSS volume independently of other volumes in the same pool. For
guidelines and procedures for verifying and rebuilding NSS pools and volumes, see Chapter 16,
“Verifying and Rebuilding NSS Pools and Volumes,” on page 183.
18.20 Moving Volumes with DFS
The Move Volume function uses Novell Distributed File Services to move a volume’s file structure,
data, and the file system trustee rights information from the original location to a new volume in the
network. For guidelines and procedures for moving volumes, see “Using DFS to Move NSS
Volumes” in the OES 11 SP2: Novell Distributed File Services Administration Guide for Linux.
18.21 Splitting Volumes with DFS
The Split Volume function uses Novell Distributed File Services to move a specified part of a
volume’s file structure, data, and the file system trustee rights information from the original location
to a new volume in the network. A DFS junction replaces the selected directory and its contents in the
source volume. The data and metadata in the directory are moved to the target location, which can be
the root directory or other directory in the destination volume. For guidelines and procedures for
splitting volumes, see “Using DFS to Split NSS Volumes” in the OES 11 SP2: Novell Distributed File
Services Administration Guide for Linux.
18.22 What’s Next
For information about advanced volume features, see the following:
 “Managing Encrypted NSS Volumes” on page 239
 “Securing Access to NSS Volumes, Directories, and Files” on page 247
 “Managing Compression on NSS Volumes” on page 263
 “Managing Space Quotas for Volumes, Directories, and Users” on page 281
 “Salvaging and Purging Deleted Volumes, Directories, and Files” on page 295
 “Managing Hard Links” on page 307
 OES 11 SP2: Novell Distributed File Services Administration Guide for Linux
 OES 11 SP2: Dynamic Storage Technology Administration Guide
19 Managing Encrypted NSS Volumes
Novell Storage Services provides optional Encrypted Volume Support (EVS) for NSS volumes on
OES 2 and later operating systems.
This section describes the following:
 Section 19.1, “Understanding Encrypted Volume Support,” on page 239
 Section 19.2, “Security Considerations for Encrypted Volumes,” on page 241
 Section 19.3, “Creating an Encrypted Volume,” on page 242
 Section 19.4, “Mounting an Encrypted NSS Volume with NSSMU,” on page 243
 Section 19.5, “Mounting Encrypted NSS Volumes with NSS Commands,” on page 243
 Section 19.6, “Dismounting an Encrypted NSS Volume with NSSMU,” on page 244
 Section 19.7, “Using Encrypted Volumes in a Server Cluster,” on page 244
 Section 19.8, “Removing Encrypted Volumes,” on page 245
 Section 19.9, “What’s Next,” on page 245
19.1 Understanding Encrypted Volume Support
NSS Encrypted Volume Support meets the legal standard of making data inaccessible to software
that circumvents normal access control, such as if the media were stolen. EVS is available only for
newly created NSS volumes. EVS stores user data in encrypted format on the NSS volume, yet works
transparently with most applications, NLM programs, and backup utilities that currently work with
NSS.
Any NSS volume can be designated at volume creation time to be an encrypted volume. The
Encrypted attribute stays with the volume throughout its life. An encrypted volume cannot later be
converted to be unencrypted, nor can an unencrypted volume later be converted to be encrypted.
This is a creation-time-only decision.
Dynamic Storage Technology (DST) does not support using encrypted volumes in a DST shadow
volume pair.
Encryption is transparent above the physical read/write layer of an NSS volume. It requires no
changes for applications. All the rules of file system trustee assignments, trustee rights, ownership,
sharing, visibility, locking, transactions, and space restrictions remain the same. Performance for an
encrypted volume is slightly degraded compared to an unencrypted volume under the same
conditions.
 Section 19.1.1, “Encryption Method,” on page 240
 Section 19.1.2, “Encryption Password,” on page 240
 Section 19.1.3, “How Encrypted Volume Support Works,” on page 240
 Section 19.1.4, “Guidelines for Using Encrypted Volumes,” on page 240
19.1.1 Encryption Method
Encrypted volume support uses the NICI libraries for all cryptographic support. NICI generates a
128-bit AES key for encryption that persists for the life of the volume. You cannot change the
password, because it protects the key that encrypts the data. NICI uses the password to wrap the key and
other volume-specific cryptographic information into a 128-bit package that is persistently stored in
two locations on the NSS media: the Volume Data Block and the Volume Locator storage object. After
the cryptographic data is wrapped for the activated volume, EVS eliminates the password from
memory.
19.1.2 Encryption Password
The encryption password can be 2 to 16 standard ASCII characters, with a suggested minimum of 6.
The password generates a 128-bit NICI key for encryption. The password is set when you create the
volume. It persists for the life of the volume; it cannot be changed later.
19.1.3 How Encrypted Volume Support Works
On the first activation after a system reboot, you must enter a valid password. When the volume is
activated, NSS loads the volume’s persistent data from the Volume Data Block. If the Encrypted
attribute is enabled for a volume, NSS searches in memory for a known key in the list of volume
names and keys. If the key is present, it is used. If no key is present, NSS consults the list of volumes
and passwords. If a password is available, it is used to unwrap the key from the persistent data and
the new key is placed in the list of volumes and keys. The password is eliminated from memory.
After the encrypted volume is activated, all encryption operations on user data are transparent to file
system applications that use normal file I/O functions. Data written to files is held in cache until the
time it would be normally written. At physical write time, the data is encrypted to a temporary write
buffer and written to the volume in encrypted format.
During reads, the cache is consulted, as it would normally be, to determine if a requested block is
already in memory. If the requested data block is in cache, the clear-text data is transferred. If it is not,
a physical read request is made, with the read directed to a temporary buffer. After read completion,
but before control is returned to the calling program, the encrypted data in the temporary buffer is
decrypted into a cache buffer. The read proceeds normally, with clear-text data being made available
to all future requestors.
19.1.4 Guidelines for Using Encrypted Volumes
 We recommend that you avoid mixing encryption and compression features in a volume. Use
one or the other, but not both.
 You can enable the Encryption attribute only at volume creation time.
 If it is enabled, the Encrypted volume attribute persists for the life of the volume.
 To encrypt an existing volume, you must create a new encrypted volume, then migrate existing
data from the unencrypted volume to the encrypted volume.
 The encryption password can be 2 to 16 standard ASCII characters, with a suggested minimum of 6.
240
OES 11 SP2: NSS File System Administration Guide for Linux
19.2 Security Considerations for Encrypted Volumes
 Section 19.2.1, “Choosing a Strong Encryption Password,” on page 241
 Section 19.2.2, “Backing Up Data from an Encrypted Volume,” on page 241
 Section 19.2.3, “Excluding the NSS Cache Memory from Core Dumps,” on page 241
 Section 19.2.4, “Disabling Logs,” on page 241
 Section 19.2.5, “Archiving Files from an Encrypted Volume,” on page 242
 Section 19.2.6, “Using Direct I/O to an Encrypted Volume,” on page 242
 Section 19.2.7, “Sharing Encrypted NSS Volumes in a Cluster,” on page 242
19.2.1 Choosing a Strong Encryption Password
The encryption password can be 2 to 16 standard ASCII characters, with a suggested minimum of 6. Make sure to employ security best practices for passwords. For information, see Section 30.15, “Creating Strong Passwords,” on page 366.
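To illustrate, one way to generate a random password from the standard ASCII letter and digit set on a Linux server is shown below. This is a minimal sketch; the character set and the 16-character length (the documented maximum) are examples you should adapt to your own password policy:

```shell
# Generate a random 16-character password (letters and digits only).
# Adjust the tr character set and the length to match your policy.
pw=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)
echo "$pw"
```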
19.2.2 Backing Up Data from an Encrypted Volume
Make sure the data from an encrypted volume is encrypted on the backup media. Backups of an encrypted volume are not encrypted unless encryption is a feature of the backup software you use. For information, see Section 30.4, “Protecting Data During Backup and on Backup Media,” on page 359.
19.2.3 Excluding the NSS Cache Memory from Core Dumps
Make sure that you exclude the NSS cache memory from core dumps; otherwise, encrypted NSS
volume data might be displayed in the clear. For information, see Section 30.5, “Preventing Exposure
of Sensitive Data in a Core Dump,” on page 360.
19.2.4 Disabling Logs
When working with encrypted volumes, be aware that the volume password and key information are exchanged between user and kernel space when encrypted volumes are created or mounted. If logging is enabled on the Linux server when you enter the encryption password, your password and volume key information might appear in the log file.
Even though the logging mechanisms are protected for the root user, we strongly recommend that you make sure logging is disabled when you create an encrypted volume or mount it after a system reboot, in order to protect the secrecy of your password at these critical times.
For information, see Section 30.6, “Preventing Exposure of the Encryption Password in a Log,” on
page 360.
19.2.5 Archiving Files from an Encrypted Volume
If you use Novell Archive and Version Services to archive files from an encrypted volume, the
destination path for Archive Manager must be on an encrypted volume. If the destination path is an
unencrypted volume, the versioned data is stored in an unencrypted state.
Make sure to encrypt the archive volume when you create it on the archive server. Otherwise, the file
versions are not secure.
For information about managing Archive and Version Services, see the following:
 OES 11 SP2: Novell Archive and Version Services Administration Guide
 NW 6.5 SP8: Novell Archive and Version Services 2.1 Administration Guide
19.2.6 Using Direct I/O to an Encrypted Volume
Direct I/O to an encrypted volume bypasses the EVS encryption engine and allows data to be stored in unencrypted format on the encrypted volume. This capability is useful for diagnostic, repair, or special-purpose applications, but it should otherwise be avoided, especially for user data that you intend to store in encrypted format.
19.2.7 Sharing Encrypted NSS Volumes in a Cluster
When you mount the shared volume and enter the password, NSS uses the password to create a key,
which it stores in the server memory. The Novell Cluster Services software passes the key to other
nodes. After all servers hold the key, the volume is available while any one of the servers is still
participating actively in the cluster. If all servers in the cluster fail, you must repeat this procedure
when you recover the cluster and restart services.
19.3 Creating an Encrypted Volume
NSS Encrypted Volume Support allows you to create encrypted NSS volumes using NSSMU version
3.20 build 940 or later. You can create encrypted user data volumes only after the installation or
upgrade process.
If you choose to encrypt a volume, you cannot roll back the system to a release earlier than OES 2 without taking steps to preserve your data before the rollback. For information, see Section 19.8, “Removing Encrypted Volumes,” on page 245.
WARNING: We strongly recommend that you verify that your system is working as desired before
creating encrypted volumes on the system.
1 In NSSMU, select Volumes, then press Enter.
2 To create a new volume, press the Insert key.
A query asks if you want to encrypt the volume.
3 To encrypt the new volume, select Yes, then press Enter.
NSS enables the Encrypted attribute for the volume, then prompts you to enter a password for
the volume.
4 Enter an encryption password, then enter it again to verify it.
The encryption password can be 2 to 16 standard ASCII characters, with a suggested minimum
of 6. The password generates a 128-bit NICI key for encryption. The password persists for the
life of the volume; it cannot be changed later.
5 Set the volume size and other attributes, as desired.
When you are done, the encrypted volume is active and mounted.
You must supply the encryption password for the volume on the first volume mount after a system
boot or reboot. For information, see Section 19.4, “Mounting an Encrypted NSS Volume with
NSSMU,” on page 243.
For information about entering the password for a volume in a cluster, see Section 19.7, “Using
Encrypted Volumes in a Server Cluster,” on page 244.
19.4 Mounting an Encrypted NSS Volume with NSSMU
Mount only one volume at a time so that you can enter its password.
IMPORTANT: For encrypted NSS volumes, you can mount the volume only from NSSMU the first
time it is mounted after a reboot.
1 In NSSMU, select Volumes.
2 In the Volumes list, select the encrypted volume that you want to mount.
3 Press F7 to mount the volume.
4 If you are prompted for the password, enter the password, then click OK.
You are prompted for the password the first time the volume is mounted after a system reboot. The password is stored on the system until the next system reboot; until then, you can mount the volume without entering the password.
19.5 Mounting Encrypted NSS Volumes with NSS Commands
You must enter a password only on the first activation following a system reboot. Thereafter, other
environmental security and authentication measures control access to user data.
IMPORTANT: The NSS Console (nsscon) does not support entering the password from the
command line. You must mount the encrypted volume from NSSMU on the first time after a system
reboot. Thereafter, you can use the commands in this section without supplying the password. For
information, see Section 19.4, “Mounting an Encrypted NSS Volume with NSSMU,” on page 243.
Syntax
The following table provides the syntax for NSS commands to use with encrypted volumes on
subsequent mounts of the volume until the system reboots. Enter the commands from nsscon. In each
case, replace volname with the name of the encrypted NSS volume.
You cannot use the wildcard option of all as the volname before an encrypted volume is mounted
with its password following each system reboot. The All option does not find the volume and does
not execute the command.
Table 19-1 Volume Mount Commands

mount volname
Mounts an encrypted or unencrypted NSS volume. The mount command is usable for encrypted volumes only after a previous activation with the password. Otherwise, it returns an error message, requesting more information.

mount all
Activates and mounts all encrypted NSS volumes that have been previously activated with their passwords. Encrypted NSS volumes that were not previously activated are not mounted. Mount them from NSSMU, where you can provide the encryption password.

nss /volumes
Displays a list of encrypted and unencrypted NSS volumes, showing their attributes. An encrypted volume returns a status of Encrypted.
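For example, a subsequent-mount session in nsscon might look like the following sketch. VOL1 is a placeholder volume name, and these commands succeed only after the volume was first activated with its password in NSSMU:

```
nss /volumes    # VOL1 appears in the list with a status of Encrypted
mount VOL1      # mounts the previously activated encrypted volume
mount all       # mounts all previously activated encrypted volumes
```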
19.6 Dismounting an Encrypted NSS Volume with NSSMU
Before you can dismount an NSS volume, you must dismount the volume from NCP Server;
otherwise, the dismount function fails.
1 If NCP Server is running, dismount the volume from NCP Server.
1a At the server prompt, open the NCP Console by entering
ncpcon
1b Dismount the volume from NCP.
The volume is no longer accessible or visible to NCP clients.
2 Dismount the volume.
2a From a terminal console, start NSSMU, then select Volumes.
2b In the Volumes list, select the encrypted volume that you want to dismount.
2c Press F7 to dismount the mounted volume.
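For step 1, a typical console sequence might look like the following sketch. VOL1 is a placeholder volume name, and you should verify the exact dismount syntax against the ncpcon help on your server:

```
ncpcon            # open the NCP Console at the server prompt
dismount VOL1     # the volume is no longer visible to NCP clients
exit
```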
19.7 Using Encrypted Volumes in a Server Cluster
If you use an encrypted NSS volume in a Novell Cluster Services cluster, you must manually enter the password for the volume on only one of the servers, and only when you first start or restart the cluster. Use NSSMU to mount the encrypted volume on one of the OES servers and enter the volume password, then dismount the volume before you bring the cluster resource online for the first time.
NSS uses the password to create a key, which it stores in the server memory. The Novell Cluster
Services software passes the key to other nodes. After all servers hold the key, the volume is available
while any one of the servers is still participating actively in the cluster. If all servers in the cluster fail,
you must repeat this procedure when you recover the cluster and restart services.
1 Boot or restart the servers in the cluster.
If you automated the loading of cluster resources, the cluster reports that each resource is
comatose because it cannot bring the corresponding encrypted volume online.
If you opt to manually start cluster resources, the cluster resources are not yet active.
2 From one of the nodes in the cluster, repeat the following steps for each of the encrypted
volumes in the cluster.
2a In NSSMU, select Volumes.
2b In the Volumes list, select the shared volume you want to mount.
2c Press F7 to mount the shared volume.
2d When prompted, enter the password, then click OK.
If the server already knows the key for the volume, you are not prompted for the password.
2e In the Volumes list, select the shared volume that you want to dismount.
2f Press F7 to dismount the shared volume.
3 Follow the normal procedures to activate the cluster resources.
For information, see the OES 11 SP2: Novell Cluster Services for Linux Administration Guide. The
node passes the key information to the other nodes. While at least one of the servers is actively
participating in the cluster, you do not need to reenter the encryption password.
19.8 Removing Encrypted Volumes
If a rollback becomes necessary, you must remove the encrypted volume from the server before you
perform the rollback.
Encrypted volumes require OES 2 Linux or later. Because earlier releases of NSS cannot activate an
encrypted volume, you cannot roll back the system to the earlier release. If you do, the encrypted
volume fails to activate or mount, and its pool cannot be repaired.
To prevent this potential data loss, make sure that the system upgrade to a supported platform is
active and performing as desired before creating encrypted volumes.
1 Create an unencrypted volume where you want to copy the data.
For information, see Section 19.3, “Creating an Encrypted Volume,” on page 242.
2 Use one of these methods to save the encrypted volume’s data on the unencrypted volume:
 Back up the volume’s data in unencrypted format on backup media, then restore the data to
the unencrypted volume.
 Make a volume-to-volume copy of the data from the encrypted volume to the unencrypted
volume.
3 Delete the encrypted volume.
4 Perform the system rollback.
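The volume-to-volume copy in step 2 can be sketched with standard Linux tools, as below. The function and mount-point paths are illustrative; note that POSIX copy tools preserve only POSIX metadata, not NSS trustee assignments or file attributes, so use an NSS-aware backup method if you need to keep that metadata.

```shell
# copy_volume SRC DST: minimal sketch of a volume-to-volume data copy.
# cp -a preserves POSIX ownership, permissions, and timestamps only;
# NSS trustees and attributes are not copied by POSIX tools.
copy_volume() {
    mkdir -p "$2"
    cp -a "$1/." "$2/"
}

# Example (placeholder mount points for the two volumes):
# copy_volume /media/nss/ENCVOL /media/nss/PLAINVOL
```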
19.9 What’s Next
Manage other NSS features of your encrypted volume as you would for an unencrypted volume. For
information, see “Managing NSS Volumes” on page 209.
20 Securing Access to NSS Volumes, Directories, and Files
This section describes measures you can use to help secure access to your Novell Storage Services
(NSS) volumes and user data.
 Section 20.1, “Configuring File System Trustees, Trustee Rights, Inherited Rights Filters, and
Attributes,” on page 247
 Section 20.2, “Configuring the Security Equivalence Vector Update Frequency,” on page 255
 Section 20.3, “Using Data Shredding to Prevent Access to Purged Files,” on page 257
 Section 20.4, “Enabling or Disabling LAF Audit Log Messages for Trustee Events,” on page 258
20.1 Configuring File System Trustees, Trustee Rights, Inherited Rights Filters, and Attributes
NSS uses the Novell Trustee model for controlling access to user data. As an administrator or a user
with the Supervisor right or Access Control right, you can use the Files and Folders plug-in to
iManager to manage file system trustees, trustee rights, inherited rights filters, and attributes for a
file or folder on an NSS volume. A user who has only the Access Control right cannot modify the
rights of another user who has the Supervisor right.
IMPORTANT: For more information and alternate methods for configuring file system trustees and
attributes for directories and files on NSS volumes, see the OES 11 SP2: File Systems Management
Guide.
 Section 20.1.1, “Prerequisites for Configuring Trustees,” on page 247
 Section 20.1.2, “Viewing Properties of a File or Folder,” on page 248
 Section 20.1.3, “Configuring File or Folder Attributes,” on page 248
 Section 20.1.4, “Configuring Rights Properties (File System Trustees, Trustee Rights, and
Inherited Rights Filter),” on page 250
 Section 20.1.5, “Viewing Effective Rights for a Trustee,” on page 253
 Section 20.1.6, “Managing Rights,” on page 253
20.1.1 Prerequisites for Configuring Trustees
 The volume that you want to manage must be in the same tree where you are currently logged in
to iManager.
 You must have trustee rights for the volume, folder, and file that you want to manage.
 The volume must be a file system that uses the Novell trustee model for file access, such as an
NSS volume on OES 11 SP2, an NSS or NetWare traditional volume on NetWare 6.5, or an NCP
(NetWare Core Protocol) volume (an NCP share on a Linux POSIX file system) on OES 11 SP2.
20.1.2 Viewing Properties of a File or Folder
1 In iManager, click Files and Folders > Properties to open the Properties page.
2 Click the Search icon to browse and locate the volume, folder, or file from the Storage objects, then click the name link of the object to select it.
The pathname of the object appears in the Name field.
3 View the following properties in three Properties tabs:
Information
 View details about the selected volume, folder, or file. See Section 25.7, “Viewing or Modifying File or Folder Properties,” on page 322.
 Configure directory quotas for folders on NSS volumes where the Directory Quotas attribute is enabled. See Section 25.9, “Viewing, Adding, Modifying, or Removing a Directory Quota,” on page 327.
 Modify the file owner. See Section 25.8, “Viewing or Modifying File Ownership,” on page 325.
 Configure file or directory attributes. See Section 20.1.3, “Configuring File or Folder Attributes,” on page 248.

Rights
 View details about trustees, trustee rights, and the inherited rights filter for the selected volume, folder, or file.
 Add or remove trustees.
 Grant or revoke trustee rights for one or more trustees.
 Configure the inherited rights filter.
For information, see Section 20.1.4, “Configuring Rights Properties (File System Trustees, Trustee Rights, and Inherited Rights Filter),” on page 250.

Inherited Rights
 View details about explicitly assigned trustee rights and inherited rights at all levels along the path from the selected file or folder to the root of the volume.
 View the effective rights for a given trustee for the selected volume, folder, or file.
For information, see Section 20.1.5, “Viewing Effective Rights for a Trustee,” on page 253.
20.1.3 Configuring File or Folder Attributes
File attributes determine how the file or folder behaves when accessed by any user. File attributes
apply universally to all users. For example, a file that has a read-only attribute is read-only for all
users.
Attributes can be set by any trustee with the Modify right to the directory or file, and attributes stay
set until they are changed. Attributes do not change when you log out or when you down a file
server.
For example, if a trustee with the Modify right enables the Delete Inhibit attribute for a file, no one,
including the owner of the file or the network administrator, can delete the file. However, any trustee
with the Modify right can disable the Delete Inhibit attribute to allow the file’s deletion.
1 In iManager, click Files and Folders > Properties to open the Properties page.
2 Click the Search icon to browse and locate the volume, folder, or file from the Storage objects, then click the name link of the object to select it.
The pathname of the object appears in the Name field. For example:
VOL1:dir1\dirB\filename.ext
3 Click the Information tab to view or modify the file or folder attributes. Enable or disable an
attribute by selecting or deselecting the check box next to it.
IMPORTANT: Changes do not take effect until you click OK or Apply. If you click a different tab
before you save, changes you make on this page are lost.
The following list defines the file system attributes and whether they apply to files, folders, or both files and folders.

Read Only (files only)
Prevents a file from being modified. This attribute is typically used in combination with Delete Inhibit and Rename Inhibit.

Archive (files and folders)
Identifies files and folders that have been modified since the last backup. This attribute is assigned automatically.

Hidden (files and folders)
Hides directories and files so they do not appear in a file manager or directory listing.

Shareable (files only)
Allows more than one user to access the file at the same time. This attribute is usually used with Read Only.

Purge Immediate (files and folders)
Flags a directory or file to be erased from the system as soon as it is deleted. Purged directories and files cannot be recovered.

Rename Inhibit (files and folders)
Prevents the directory or filename from being modified.

Delete Inhibit (files and folders)
Prevents users from deleting a directory or file. This attribute overrides the file system trustee Erase right. When Delete Inhibit is enabled, no one, including the owner and network administrator, can delete the directory or file. A trustee with the Modify right must disable this attribute to allow the directory or file to be deleted.

NOTE: The following settings override the Delete Inhibit and Rename Inhibit attributes. The override option is made available via volume mount options and nsscon:
 From nsscon, enter /(No)RootOverrideFA=(ALL|VOL1,VOL2)
 For local volumes, add the option to the volume entry in /etc/fstab (-o name=<NAME>,overrideFA)
 For shared volumes, add the option to the cluster resource load scripts (/opt=overrideFA)
If /RootOverrideFA is set on the volume, the Linux root user can delete and rename a file.
4 If you modified any settings, click Apply or OK to save your changes.
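For example, a local-volume override might be configured with an /etc/fstab entry like the following sketch. The volume name VOL1, the mount point, and the nssvol file system type shown here are illustrative assumptions to adapt for your system:

```
# Hypothetical /etc/fstab line enabling overrideFA for volume VOL1.
VOL1  /media/nss/VOL1  nssvol  noauto,rw,name=VOL1,overrideFA  0 0
```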
20.1.4 Configuring Rights Properties (File System Trustees, Trustee Rights, and Inherited Rights Filter)
File system trustees, trustee rights, and inherited rights filters are used to determine access and usage
for directories and files on NSS volumes on OES 11 SP2, NCP volumes on OES 11 SP2, and NSS and
NetWare Traditional volumes on NetWare 6.5. If you modify any settings, you must click Apply or OK
to save the changes.
Viewing, Adding, or Removing File System Trustees
A trustee is any NetIQ eDirectory object (such as a User object, Group object, Organizational Role object, or other container object) to which you grant one or more rights for a directory or file. Trustee assignments allow you to set permissions for and monitor user access to data.
1 In iManager, click Files and Folders, then click Properties to open the Properties page.
2 On the Properties page, select a volume, folder, or file to manage.
For instructions, see Section 20.1.2, “Viewing Properties of a File or Folder,” on page 248.
3 Click the Rights tab to view the trustees, trustee rights, and inherited rights filter for the selected
volume, folder, or file.
4 Add trustees.
4a Scroll down to the Add Trustees field.
4b Use one of the following methods to add usernames as trustees:
 Click the Search icon, browse to locate the usernames of the users, groups, or roles that
you want to add as trustees, click the name link of the objects to add them to the
Selected Objects list, then click OK.
 Click the History icon to select usernames from a list of users, groups, or roles that you
recently accessed.
 Type the typeless distinguished username (such as username.context) in the Add
Trustees field, then click the Add (+) icon.
The usernames appear in the Trustees list, but they are not actually added until you click
Apply or OK. Each of the usernames has the default Read and File Scan trustee rights
assigned.
4c On the Properties page, click Apply to save the changes.
5 Remove trustees.
5a Scroll down to locate and select the username of the user, group, or role that you want to
remove as a trustee.
5b Click the Remove (red X) icon next to the username to remove it as a trustee.
The username disappears from the list, but it is not actually removed until you click Apply
or OK.
5c On the Properties page, click Apply to save changes.
Viewing, Granting, or Revoking File System Trustee Rights
Administrator users and users with the Supervisor right or the Access Control right can grant or
revoke file system trustee rights for a volume, folder, or file. Only the administrator user or user with
the Supervisor right can grant or revoke the Access Control right.
1 In iManager, click Files and Folders, then click Properties to open the Properties page.
2 On the Properties page, select a volume, folder, or file to manage.
For instructions, see Section 20.1.2, “Viewing Properties of a File or Folder,” on page 248.
3 Click the Rights tab to view the trustees, trustee rights, and inherited rights filter for the selected
volume, folder, or file.
4 Scroll to locate the username of the trustee you want to manage.
5 In the check boxes next to the trustee name, select or deselect the rights you want to grant or
revoke for the trustee.
IMPORTANT: Changes do not take effect until you click OK or Apply. If you click a different tab
before you save, any changes you have made on this page are lost.
Supervisor (S)
Grants the trustee all rights to the directory or file and any subordinate items. The Supervisor right cannot be blocked with an inherited rights filter (IRF) and cannot be revoked. Users who have this right can also grant other users any rights to the directory or file and can change its inherited rights filter. Default=Off

Read (R)
Grants the trustee the ability to open and read files, and open, read, and execute applications. Default=On

Write (W)
Grants the trustee the ability to open and modify (write to) an existing file. Default=Off

Erase (E)
Grants the trustee the ability to delete directories and files. Default=Off

Create (C)
Grants the trustee the ability to create directories and files and salvage deleted files. Default=Off

Modify (M)
Grants the trustee the ability to rename directories and files, and change file attributes. Does not allow the user to modify the contents of the file. Default=Off

File Scan (F)
Grants the trustee the ability to view directory and filenames in the file system structure, including the directory structure from that file to the root directory. Default=On

Access Control (A)
Grants the trustee the ability to add and remove trustees for directories and files and modify their trustee assignments and inherited rights filters. Default=Off
6 Click Apply or OK to save changes.
NOTE: Modifying rights on DFS junctions is not supported; these controls are disabled for junctions. Use DFS tasks for junction rights management.
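As an alternative to iManager, trustee assignments on NSS volumes can also be managed with the OES rights command-line utility. The following is a hedged sketch; the path, rights mask, and username are illustrative, and you should confirm the exact syntax with the rights man page on your server:

```
# Grant Read, Write, Create, and File Scan on a directory (example values).
rights -f /media/nss/VOL1/projects -r rwcf trustee joe.engineering.acme
```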
Configuring the Inherited Rights Filter for a File or Directory
File system trustee rights assignments made at a given directory level flow down to lower levels until
they are either changed or masked out. This is referred to as inheritance. The mechanism provided
for preventing inheritance is called the inherited rights filter. Only those rights allowed by the filter
are inherited by the child object. The effective rights that are granted to a trustee are a combination of
explicit rights set on the file or folder and the inherited rights. Inherited rights are overridden by
rights that are assigned explicitly for the trustee on a given file or folder.
1 In iManager, click Files and Folders, then click Properties to open the Properties page.
2 On the Properties page, select a volume, folder, or file to manage.
For instructions, see Section 20.1.2, “Viewing Properties of a File or Folder,” on page 248.
3 Click Information, then scroll down to view the inherited rights filter.
The selected rights are allowed to be inherited from parent directories. The deselected rights are
disallowed to be inherited.
4 In the Inherited Rights Filter, enable or disable a right to be inherited from its parent directory by
selecting or deselecting the check box next to it.
5 Click Apply or OK to save the changes.
20.1.5 Viewing Effective Rights for a Trustee
Effective rights are the explicit rights defined for the trustee plus the rights that are inherited from the
parent directory. The Inherited Rights page shows the inheritance path for a trustee for the selected file
or folder and the effective rights at each level from the current file or directory to the root of the
volume. You can use this information to help identify at which directory in the path a particular right
was filtered, granted, or revoked.
1 In iManager, click Files and Folders, then click Properties to open the Properties page.
2 On the Properties page, select a volume, folder, or file to manage.
For instructions, see Section 20.1.2, “Viewing Properties of a File or Folder,” on page 248.
3 On the Properties page, click the Inherited Rights tab to view the effective rights for a given trustee.
By default, the page initially displays the effective rights for the username you used to log in to
iManager.
4 On the Inherited Rights page, click the Search icon next to the Trustee field to browse for and locate
the username of the trustee you want to manage, then select the username by clicking the name
link.
The path for the selected file or folder is traced backwards to the root of the volume. At each
level, you can see the rights that have been granted and inherited to create the effective rights for
the trustee.
5 If you make any changes, click Apply or OK to save them.
20.1.6 Managing Rights
Users can receive rights in a number of ways, such as explicit trustee assignments, inheritance, and
security equivalence. Rights can also be limited by Inherited Rights Filters and changed or revoked
by lower trustee assignments. The net results of all these actions—the rights a user can employ—are
called effective rights.
 “Viewing or Modifying the Effective Rights of a Trustee” on page 254
 “Assigning or Modifying Rights to Files and Folders” on page 254
 “Limitations in Effective Rights, and Rights to Files and Folders” on page 255
Viewing or Modifying the Effective Rights of a Trustee
View the effective rights for a trustee. If needed, you can modify the trustee's rights on a file, folder or
volume.
NOTE: Non-default admin users must be LUM-enabled in order to view effective rights.
Effective Rights details include the following:
Server: Displays the name of the server where the volume, folder or file exists along with the trustee
information.
Location: Displays the location of the volume, file or folder.
Trustees: Lists the trustees who have effective rights on the file, folder or volume listed in Location.
To view or modify the rights for a particular trustee (the changes are then reflected in the trustee’s effective rights):
1 In iManager, click Files and Folders > Properties to open the Properties page.
2 Click the Object Selector icon to browse the storage objects, locate and select the name link of the file or folder you want to manage, then click OK to view the Properties for the file. For more instructions, see Section 20.1.2, “Viewing Properties of a File or Folder,” on page 248.
3 Click the Effective Rights tab to view the list of trustees and their effective rights on the chosen
folder or file.
4 To modify the rights for a particular trustee, click the hyperlinked name of the trustee.
5 On the Rights to Files and Folders page, modify the rights, then click Apply.
Assigning or Modifying Rights to Files and Folders
Use this feature to assign or modify the rights that a trustee has on a folder or file.
Rights to Files and Folders details include the following fields, which are displayed based on the
context from which this page is invoked:
Modify User / Modify Group: Displays the name of the trustee for whom the rights are being
assigned or modified.
Volume: Displays the selected volume. It's a read-only field.
Files and Folders: Lists the files and folders for which you can modify the rights for the trustee
displayed in Rights to Files and Folders.
To modify or add rights on a file or folder for a trustee:
1 In iManager, click Users > Modify User or Groups > Modify Group to open the Modify User page.
2 Under Volume, use the Search button to select the volume where the file or folder exists. You
can also choose a recently used volume by using the History button.
3 Under File and Folders Trustee Rights, click Add to select files or folders. The selected
entities are listed in the Files/Folders section.
4 In the Files and Folders section, modify or assign rights for a file or folder, then click Apply.
Use the Delete button to delete an entity under Files and Folders.
TIP
 Use the collapse and expand buttons to collapse or expand the list of files and folders.
 Use the Filter option to search for a file or folder in the displayed list. Nested filtering is
supported. For example, if you specify the search string "ark dir", the files and folders
are filtered on the string "ark" first, and then a sub-search for "dir" is done on the filter
result of "ark".
 Use the unsorted, sorted ascending, and sorted descending buttons to sort the list of files
and folders.
Limitations in Effective Rights, and Rights to Files and Folders
 Viewing effective rights on non-NSS (NCP) volumes is not supported.
 Viewing soft-linked files or folders in a volume by using the Properties under Files and Folders
is not supported.
 When you modify rights for a hard-linked file, the primary file is also displayed.
 Effective rights are not supported for DST shadow volume files in this release.
20.2
Configuring the Security Equivalence Vector Update
Frequency
The Security Equivalence Vector (SEV) is used to validate the user against the trustee rights of the
directory and file the user is attempting to access. You can use commands in the NSS Console utility
(nsscon) to enable or disable the update, to set the update interval from 5 minutes to 90 days
(specified in seconds), and to force an immediate update of security equivalence vectors.
 Section 20.2.1, “Understanding the SEV,” on page 255
 Section 20.2.2, “Enabling or Disabling the Background SEV Update,” on page 256
 Section 20.2.3, “Configuring the Background SEV Update Interval,” on page 256
 Section 20.2.4, “Forcing a Background SEV Update,” on page 257
20.2.1
Understanding the SEV
The Security Equivalence Vector (SEV) is calculated for each NSS user based on information in the
user’s profile in NetIQ eDirectory. It is a list of eDirectory GUIDs, for example:
 the user’s own GUID
 GUIDs of groups that include the user
 GUIDs of parent containers for the user and his or her groups
 security equivalent GUIDs
After you boot the Linux server, when a user first attempts to connect to the NSS file system, NSS
contacts NetIQ eDirectory to retrieve the user’s Security Equivalence Vector (SEV). eDirectory
calculates the user’s effective rights for the NSS volume, creates the SEV, and passes it to NSS. NSS
compares the user’s SEV with file system trustees and trustee rights for the specified file or directory
to determine if the user can access the resource.
For NetWare, whenever a user connects to the NSS file system, NetWare retrieves the user’s SEV
from eDirectory and maintains it as part of the connection structure for the user’s session. NSS
automatically retrieves the user’s SEV from the NetWare connection structure, then deletes it when
the session ends.
On Linux, the SEV behavior differs from the NetWare behavior because NSS does not have the same integrated
relationship to the connection infrastructure as it does on NetWare. NSS caches the SEV locally in the
server memory, where it remains until the server is rebooted or the user is deleted from eDirectory.
NSS polls eDirectory at a specified interval for updates to the SEVs that are in cache.
20.2.2
Enabling or Disabling the Background SEV Update
By default, the SEV is updated in the background and whenever the server is rebooted. You can
optionally disable the background updating. If it is disabled, user access can become
unsynchronized over time, so that users might have more or less access than you have configured.
We recommend that you leave the SEV updating feature enabled, then modify the polling frequency
to best meet the security needs of your production environment.
To enable or disable the setting:
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, start the NSS Console by entering
nsscon
3 At the nsscon prompt, do one of the following:
 Enable: This is the default. To enable the background updating of the SEV in addition to the
default update at server reboot, enter
nss /SecurityEquivalenceUpdating
 Disable: To disable the background updating, enter
nss /NoSecurityEquivalenceUpdating
The SEV update is enabled when you first reboot the server. If you disable SEV updates and want the
setting to persist across server reboots, include the /NoSecurityEquivalenceUpdating option in the
/etc/opt/novell/nss/nssstart.cfg file.
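For example, to keep background SEV updating disabled across reboots, the /etc/opt/novell/nss/nssstart.cfg file might contain a line such as the following (a sketch only; one option per line is assumed, and you should include just the options your environment needs):

```
/NoSecurityEquivalenceUpdating
```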
20.2.3
Configuring the Background SEV Update Interval
You might want to modify the background SEV update interval to make the polling for eDirectory
updates to be more or less frequent. Polling too frequently can impact performance. Polling too
infrequently can cause delays in granting or restricting access for certain users. To avoid possible
security violations, you can also force an update at any time by using the /
ForceSecurityEquivalenceUpdate command. For information, see Section 20.2.4, “Forcing a
Background SEV Update,” on page 257.
The interval for the background updating of the SEV is the elapsed time between the last update and
the next one. At the end of the elapsed time, NSS requests updated SEVs from eDirectory. The default
interval is 7200 seconds (2 hours). The valid range is 300 (5 minutes) to 7776000 (90 days).
To set the interval to use until the next server reboot:
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, start the NSS Console by entering
nsscon
3 At the nsscon prompt, enter
nss /UpdateSecurityEquivalenceInterval=value
Replace value with the desired interval.
To make the interval setting persistent across server reboots, include the /
UpdateSecurityEquivalenceInterval=value option in the /etc/opt/novell/nss/
nssstart.cfg file.
20.2.4
Forcing a Background SEV Update
If you modify a user’s access control settings or remove a user from eDirectory between SEV update
intervals, you can force the SEV to be updated immediately to avoid possible security
violations. Use the /ForceSecurityEquivalenceUpdate option to force an immediate update for all
users in the NSS file system so that your changes are reflected immediately in each user’s active
SEV for this server.
To force an immediate update:
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, start the NSS Console by entering
nsscon
3 At the nsscon prompt, enter
nss /ForceSecurityEquivalenceUpdate
20.3
Using Data Shredding to Prevent Access to Purged Files
Data shredding hides purged files by overwriting them with random patterns of hexadecimal
characters. This prevents unauthorized users from using a disk editor to access purged files.
If the Data Shredding attribute for an NSS volume is disabled, unauthorized access to data is
possible. An individual can extend a file, LSEEK to the end of the existing file data, and then read the
data. This returns the decrypted leftover data that is in the block.
You can place up to seven data shred patterns over deleted data. Data shredding truly erases files.
Only files that have been purged are shredded. If Salvage is enabled, there remains a purge delay
between when the file is deleted and purged during which users can still salvage deleted files.
Data shredding consumes a great deal of disk connection bandwidth, resulting in a performance
penalty for using the disk and system resources needed to overwrite the shredded file. Unless you
must use data shredding for security reasons, the Data Shredding attribute for your NSS volume can
be disabled or set to a lower number of shredding passes.
This section describes the following:
 Section 20.3.1, “Setting the Data Shredding Attribute When You Create a Volume,” on page 258
 Section 20.3.2, “Setting the Data Shredding Attribute for an Existing Volume,” on page 258
 Section 20.3.3, “Disabling Data Shredding for an Existing Volume,” on page 258
20.3.1
Setting the Data Shredding Attribute When You Create a Volume
When you create a volume, select the Data Shredding check box and specify the number of
shredding cycles as an integer from 1 to 7 (or specify 0 to indicate no shredding
capability) when you set the volume’s attributes. For more information, see Section 18.3, “Creating
Unencrypted NSS Volumes,” on page 216.
20.3.2
Setting the Data Shredding Attribute for an Existing Volume
1 In iManager, click Storage > Volumes to open the Volumes page.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
Wait until the page refreshes with a list of volumes in the Volumes list.
3 From the Volumes list, select the volume that you want to manage.
4 Click Properties > Attributes.
This opens the Volume Properties page to the Attributes tab.
5 Select the Data Shredding check box.
6 Specify the number of shredding cycles, where 0 is no shredding and 1 to 7 are the valid number
of cycles to shred data.
7 Click Apply or OK to save the change, or click Cancel to back out of the process.
If you click Apply, iManager saves the change and remains on the device page. If you click OK,
iManager saves the change and takes you to the main Storage page. If you do not click Apply or
OK, the setting is not implemented.
20.3.3
Disabling Data Shredding for an Existing Volume
WARNING: If you disable data shredding, an individual can recover leftover data on the drive and
secure data might be exposed.
1 At the NSS console (nsscon), enter
nss /nodatashredding=volumename
where volumename is the name of the volume where you want to prevent the shredding
capability.
20.4
Enabling or Disabling LAF Audit Log Messages for Trustee
Events
Use the NSS audit log messages commands to enable or disable messages via Lightweight Auditing
Format (LAF) for NSS trustee changes for NSS volumes on OES 11 SP2.
 Section 20.4.1, “Understanding NSS Audit Log Messages,” on page 259
 Section 20.4.2, “Enabling or Disabling LAF Audit Messages for Trustee Events,” on page 262
 Section 20.4.3, “Viewing LAF Audit Messages,” on page 262
 Section 20.4.4, “Additional Information,” on page 262
20.4.1
Understanding NSS Audit Log Messages
When the LAFAuditTrustee parameter is enabled, NSS reports changes for the following subset of
NSS events:
 Adding trustees (AddTrustee)
 Removing trustees (RemoveTrustee)
 Setting the inherited rights mask (SetInheritedRightsMask)
Comma-separated name-value pairs are used for the NSS audit log messages. The messages are
written to the /var/log/audit/audit.log file.
The types of information reported are described below:
 “Message Type and ID” on page 259
 “Add Trustee Event Messages” on page 259
 “Remove Trustee Event Messages” on page 259
 “Set Inherited Rights Mask Event Messages” on page 260
 “Trustee Rights” on page 260
 “Inherited Rights Mask for Trustee Rights” on page 261
 “Special Rights” on page 261
 “Inheritance Attributes” on page 261
Message Type and ID
All NSS Audit Log messages are of the type AUDIT_KERNEL_OTHER (1316) for LAF. For example,
the log messages begin with
type=UNKNOWN[1316] msg=audit(message_id):
Add Trustee Event Messages
The general format of NSS audit log messages for a single AddTrustee event is:
NSS: AddTrustee: fsuid=<user requesting the
operation>,vol=<VOLNAME>,path=<FULL_PATH (relative to the
volume)>,trustee=<typeful Fully Distinguished eDirectory username of the trustee
being added>,rights=<RIGHTS>,attributes=<ATTRIBUTES>
For example, the following message is for a single event for adding a trustee:
type=UNKNOWN[1316] msg=audit(1164926678.066:7): NSS: AddTrustee:
fsuid=0,vol=NSS1,path=/abc/
a,trustee=.CN=user5.O=company.T=COMPANY_TREE.,rights=0x1fb,attributes=0xc000
In this example, the trustee user5.company.company_tree is assigned the SRWCEMFA rights,
totaling 0x1fb. For a map of rights to values, see “Trustee Rights” on page 260.
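Because the payload is a flat list of comma-separated name=value pairs, it can be pulled apart with ordinary string handling. The following is a minimal sketch (a hypothetical helper, not part of NSS or the Linux Audit tooling), using the example message above:

```python
# Hypothetical helper: split an NSS AddTrustee audit message into
# its comma-separated name=value pairs.
def parse_add_trustee(line):
    # Keep only the payload after the event name.
    payload = line.split("NSS: AddTrustee: ", 1)[1]
    fields = {}
    for pair in payload.split(","):
        # partition() splits on the first '=' only, so values that
        # themselves contain '=' (such as the trustee FDN) stay intact.
        key, _, value = pair.partition("=")
        fields[key] = value
    return fields

msg = ("type=UNKNOWN[1316] msg=audit(1164926678.066:7): NSS: AddTrustee: "
       "fsuid=0,vol=NSS1,path=/abc/a,"
       "trustee=.CN=user5.O=company.T=COMPANY_TREE.,"
       "rights=0x1fb,attributes=0xc000")
fields = parse_add_trustee(msg)
print(fields["vol"], int(fields["rights"], 16))  # prints: NSS1 507
```

The rights and attributes values are hexadecimal strings, so they are converted with `int(value, 16)` before any bit testing.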
Remove Trustee Event Messages
The general format of NSS audit log messages for a single RemoveTrustee event is:
NSS: RemoveTrustee: fsuid=<user requesting the
operation>,vol=<VOLNAME>,path=<FULL_PATH (relative to the
volume)>,trustee=<typeful Fully Distinguished eDirectory username of the trustee
being removed>
For example, the following message is for a single event for removing a trustee:
type=UNKNOWN[1316] msg=audit(1164926734.422:8): NSS: RemoveTrustee:
fsuid=0,vol=NSS1,path=/abc/a,trustee=.CN=user5.O=company.T=COMPANY_TREE.
Set Inherited Rights Mask Event Messages
The general format of NSS audit log messages for a single SetInheritedRightsMask event is:
NSS: SetInheritedRightsMask: fsuid=<user>,vol=<VOLNAME>,path=<FULL_PATH(relative
to the volume)>,inheritedRightsMask=<RIGHTS>
For example, the following message is for a single event for changes to the inherited rights mask:
type=UNKNOWN[1316] msg=audit(1164926882.005:10): NSS: SetInheritedRightsMask:
fsuid=0,vol=NSS1,path=/abc/a,inheritedRightsMask=0x149
In this example, the trustee rights settings can be inherited from the parent directory for the
Supervisor (0x0100), Read (0x0001), Create (0x0008), and File Scan (0x0040) rights, totaling 0x0149.
Trustee Rights
The file system trustee rights setting in the message is a hexadecimal value that represents the
combination of rights assigned.
The following table maps the trustee rights to hexadecimal values. The values for enabled rights are
added to get the reported value for the Rights and Inherited Rights Mask.
Trustee Right                 Hexadecimal Value
Supervisor (S)                0x0100
Read (R)                      0x0001
Write (W)                     0x0002
Create (C)                    0x0008
Erase (E, Delete)             0x0010
Modify (M)                    0x0080
File Scan (F, See Files)      0x0040
Access Control (A)            0x0020
For example, if the trustee has SRWCEMFA rights, the value is the sum of these or 0x1fb in
hexadecimal.
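The combined value can be computed by ORing the individual bits together. A short sketch (the mapping restates the table above; the helper function is illustrative, not an NSS API):

```python
# Bit values for NSS trustee rights, restating the table above.
RIGHTS = {
    "S": 0x0100,  # Supervisor
    "R": 0x0001,  # Read
    "W": 0x0002,  # Write
    "C": 0x0008,  # Create
    "E": 0x0010,  # Erase
    "M": 0x0080,  # Modify
    "F": 0x0040,  # File Scan
    "A": 0x0020,  # Access Control
}

def rights_value(letters):
    # OR together the bit for each right letter.
    value = 0
    for letter in letters:
        value |= RIGHTS[letter]
    return value

print(hex(rights_value("SRWCEMFA")))  # prints: 0x1fb
```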
Inherited Rights Mask for Trustee Rights
An inherited rights mask (IRM) specifies which trustee rights are allowed to be inherited downward
through a directory. If a trustee bit is set in the IRM of a directory, that bit can be inherited downward
in the tree. If a trustee bit is not set in the IRM of a directory, then that right cannot be inherited by the
directory's contents, even if a higher level in the directory tree had that right.
The bit definitions for inherited rights masks are the same bits as the trustee rights themselves as
described in “Trustee Rights” on page 260. For example, if the Read and File Scan rights can be
inherited, the inherited rights mask value is 0x0041 in hexadecimal.
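The filtering can be modeled as a bitwise AND of the rights granted above the directory with the directory's IRM. A sketch using the values from the examples above (the variable names are illustrative):

```python
# Rights granted at a higher level (SRWCEMFA, from the earlier example).
parent_rights = 0x01FB
# IRM that allows only Read (0x0001) and File Scan (0x0040) to inherit.
irm = 0x0041
# Rights that the directory's contents can actually inherit.
inherited = parent_rights & irm
print(hex(inherited))  # prints: 0x41
```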
Special Rights
In addition to trustee rights, the following are special rights that might be reported in the Rights field.
They cannot be inherited.
Special Right                 Hexadecimal Value
Salvage                       0x0200
Secure                        0x8000
Inheritance Attributes
The attributes reported in the log are flags that tell the trustee how it gets inherited. (They are not file
system attributes.) By default, the NetWare trustee model inherits downward and upward (visibility
inherits upward; actual rights inherit downward).
The following table maps the inheritance attributes to hexadecimal values. The values for enabled
inheritance attributes are added to get the reported value for the Attributes parameter.
Inheritance Attribute                   Hexadecimal Value
Inherit Down                            0x8000
  Make rights inherit downward.
Inherit Up                              0x4000
  Make directories above this file visible.
Negative Rights (not currently used)    0x2000
  All other bits are ignored if this bit is set.
For example, an Attribute value of 0xc000 in the audit message indicates that both the Inherit Down
and Inherit Up parameters are enabled. This is the typical setting for NSS file systems.
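Decoding such a value is a matter of testing the individual bits. A brief sketch, restating the attribute values from the table above:

```python
# Inheritance attribute bits, restating the table above.
INHERIT_DOWN = 0x8000     # rights inherit downward
INHERIT_UP = 0x4000       # directories above this file are visible
NEGATIVE_RIGHTS = 0x2000  # not currently used

attrs = 0xC000  # typical attributes value seen in the audit messages
down = bool(attrs & INHERIT_DOWN)
up = bool(attrs & INHERIT_UP)
print(down, up)  # prints: True True
```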
20.4.2
Enabling or Disabling LAF Audit Messages for Trustee Events
Enable or disable the generation of audit messages via LAF for NSS trustee changes. After you enable
the audit log messages, the setting persists until the server reboots. After a server reboot, the audit log
is disabled again by default. To make the command persist across reboots, add it to the
/etc/opt/novell/nss/nssstart.cfg file. The messages are written to the /var/log/audit/audit.log file.
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, start the NSS Console by entering
nsscon
3 At the nsscon prompt, do one of the following:
 Enable: To enable audit messages for an NSS volume, enter
nss /LAFAuditTrustee
 Disable: To disable audit messages for an NSS volume, enter
nss /NoLAFAuditTrustee
20.4.3
Viewing LAF Audit Messages
View audit messages in the /var/log/audit/audit.log file.
For information about the format of the messages, see Section 20.4.1, “Understanding NSS Audit Log
Messages,” on page 259.
20.4.4
Additional Information
For information about configuring Linux Audit, see the Linux Audit Quick Start (https://
www.suse.com/documentation/sles11/singlehtml/audit_quickstart/audit_quickstart.html).
21
Managing Compression on NSS
Volumes
Novell Storage Services file compression uses algorithms to reduce the amount of space a file
consumes in your storage system. Compression can optionally be used to conserve disk space and
increase the amount of data a volume can store. No data in the file is permanently eliminated to
compress the file; all original data is recovered when the file is decompressed.
This section describes the following:
 Section 21.1, “Understanding Compression,” on page 263
 Section 21.2, “Configuring Compression for a Server,” on page 268
 Section 21.3, “Configuring a Volume for Compression,” on page 273
 Section 21.4, “Suspending Compression for Volumes or Files,” on page 275
 Section 21.5, “Disabling Compression for a Volume,” on page 275
 Section 21.6, “Restoring Data to an Uncompressed Volume,” on page 275
 Section 21.7, “Configuring Compression Preferences for Directories and Files,” on page 275
 Section 21.8, “Using NSS Commands to Configure and Monitor Compression,” on page 278
 Section 21.9, “Repairing Compressed Volumes with the Compfix Utility,” on page 279
 Section 21.10, “Backing Up Compressed Files,” on page 279
21.1
Understanding Compression
This section describes the following:
 Section 21.1.1, “Compression and Decompression Processes,” on page 264
 Section 21.1.2, “Compression Settings,” on page 264
 Section 21.1.3, “Guidelines for Compression,” on page 265
 Section 21.1.4, “Factors Affecting Compression,” on page 267
 Section 21.1.5, “Factors Affecting Decompression,” on page 268
 Section 21.1.6, “Monitoring Compression Activity,” on page 268
Managing Compression on NSS Volumes
263
21.1.1
Compression and Decompression Processes
For each compressed volume, the file compression and decompression processes occur in the
background as needed to support normal file access and immediate file
compression settings. Scheduled compression occurs during a specified time each day. The
scheduled period is usually set to non-peak hours, but it can be set for any time you prefer.
A file must be idle for the period specified in the Days Untouched before Compression parameter
before it is considered eligible for a scheduled compression. During a scheduled compression, NSS
evaluates file time stamps and all compression settings to determine which files qualify for
compression. When NSS queues eligible files for compression, the compression process begins and
handles as many compression tasks as it can in the available time. Any remaining files are queued for
the next compression opportunity.
To minimize the impact of compression and decompression on system performance, you can limit
the maximum number of concurrent process threads. The system queues the compression and
decompression requests and then processes them as the threads become available.
NSS retains the uncompressed file during the compression process. Before NSS compresses a file, it
verifies that the file system has enough space available for both the uncompressed file and the
compressed file to temporarily coexist. If there is not enough space available, the file is not
compressed. You must make free space available on the volume before the file can be compressed. After
the compression finishes successfully, NSS deletes the uncompressed file and keeps the compressed
file. If an error occurs during compression, NSS discards the compressed file and marks the
uncompressed file with a Cannot Compress (Cc) attribute.
NSS does not attempt to compress a file while its Cc attribute is set to On. If the file is opened and
saved, its Cc attribute is reset to Off. You can also run the Compfix utility to clear the Cc attribute. For
more information, see Section 21.9, “Repairing Compressed Volumes with the Compfix Utility,” on
page 279.
21.1.2
Compression Settings
Although the cost of storage media is decreasing, you might consider compression to store more
information on media where available space is limited. File compression requires configuration at
the server level and for the volume, and optionally for individual directories and files.
Common Service Compression Parameters
At the server level, the settings for compression parameters in Common Services govern when and
how compression works for the NSS volumes where the compression attribute is enabled. For
information about these parameters, see Section 21.2, “Configuring Compression for a Server,” on
page 268.
The Volume’s Compression Attribute
The volume’s Compression attribute determines if its files can be compressed. You can enable the
attribute when you create a new volume or add it at any time for an existing volume. After it is set,
the Compression attribute persists for the life of the volume. For information about setting attributes
for existing volumes, see Section 21.3, “Configuring a Volume for Compression,” on page 273.
Compression Preferences for Directories and Files
For individual directories and files, you can optionally set compression preferences that allow file
compression to occur immediately for specified files, regardless of the server’s compression
parameters. You can also specify restrictions for individual files that make them ineligible for
compression. For information about how to set compression preferences for individual directories
and files, see Section 21.7, “Configuring Compression Preferences for Directories and Files,” on
page 275.
21.1.3
Guidelines for Compression
To effectively use compression for your NSS volumes, you must understand the following key
concepts:
 “Some Volumes Are Not Good Candidates for Compression” on page 265
 “After It Is Set, the Compression Attribute Persists for the Life of the Volume” on page 265
 “Inactivity Determines Which Files Are Eligible for Background Compression” on page 266
 “Some Files Do Not Compress Well” on page 266
 “Decompression Activity Depends on Available Space” on page 266
 “Immediate Compression Impacts CPU Performance” on page 267
 “Files Remain Compressed during Backup and Restore” on page 267
Some Volumes Are Not Good Candidates for Compression
Compression is not recommended for the sys: volume. Reserve compression for user data volumes.
You cannot use compression on an NSS volume on a CD or DVD drive.
After It Is Set, the Compression Attribute Persists for the Life of the Volume
The Compression attribute for a volume can be set when you create the NSS volume, or it can be set
at any time thereafter. After you set the Compression attribute for a volume, you cannot turn it off;
the parameter is in effect for the life of the volume.
You can suspend the compression activity, as needed, by using the Enable File Compression
parameter. This parameter suspends compression for all volumes on the server. For information, see
Section 21.4, “Suspending Compression for Volumes or Files,” on page 275 and Section 21.5,
“Disabling Compression for a Volume,” on page 275.
If you want to turn off file compression permanently, you must uncompress the data, back up the
volume in its uncompressed state, then restore the uncompressed data to a new volume on which the
Compression attribute is not set.
Inactivity Determines Which Files Are Eligible for Background Compression
NSS compresses files based on the interval of time that a file remains inactive. With background
compression, files automatically pass in and out of their compressed state as they are unused and
qualify for compression, then are accessed and uncompressed. It is not necessary to separate
application files from data files for file compression. Most application files are used regularly and are
not inactive long enough to qualify for compression.
Use the compression parameter named Days Untouched before Compression to set the length of the
interval of inactivity. The parameter uses the date the file was last accessed for reading or writing to
determine if a file is inactive, and therefore, eligible for compression.
To determine the optimal period of inactivity to use, consider the frequency of use of different types
of files and your compression goals. Application files tend to be used more frequently, while user
data is used less frequently. The shorter the period of inactivity, the more frequently files are
compressed; the longer the period of inactivity, the less frequently they are compressed, and the
less likely it is that files are ever compressed.
If the volume is on a shared pool, its files might be queued for compression on one node when the
pool is cluster-migrated or fails over to another node. The compression queue is not persistent, so
on the new node the volume has no information about the compression queue from the old node.
Therefore, the files must re-qualify for compression when background compression starts on the
new node.
Some Files Do Not Compress Well
A file must be larger than 8 KB and smaller than 256 MB to be eligible for compression. The
compression algorithm determines these limits.
To avoid the overhead of decompressing files that do not compress well, the system calculates the
compressed size of a file before actually compressing it. If no disk space is saved by compression, or
if the size difference does not meet the value specified by the set command’s Minimum Percentage
Compression Gain parameter, the file is not compressed.
NSS does not compress NSS sparse files. A sparse file contains numerous contiguous zeros that NSS
stores in a special way to conserve space. A sparse file’s logical size is larger than its physical disk
usage. If a sparse file were compressed, it would actually consume more storage space than it
normally does.
Some database files become unavailable when they are compressed, such as Sybase database files.
IMPORTANT: If you use Sybase database files in a volume, such as for ZENworks databases, do not
enable compression on the volume, or mark each database file with the Don’t Compress (Dc)
attribute so that it is never compressed even if compression is enabled for the volume. For details, see
Technical Information Document 10075966 (http://support.novell.com/cgi-bin/search/searchtid.cgi?/
10075966.htm).
Decompression Activity Depends on Available Space
Compressed files are uncompressed as they are needed, then remain uncompressed until they are
inactive for a designated period. For a file to be uncompressed, there must be enough free space on
the volume to accommodate the decompression process and the uncompressed file size.
Immediate Compression Impacts CPU Performance
Compression requires processor resources, memory resources, and hard disk space during the
compression and decompression processes. Compression is usually a low-priority process thread
because of compression’s impact on performance. If you flag many large files for immediate
compression during peak system usage, CPU performance might deteriorate.
You can configure the server’s compression parameters to control how compression services use
resources. For example, you can schedule compression to occur only during non-peak hours to free
CPU resources during peak and normal hours. For information, see Section 21.2, “Configuring
Compression for a Server,” on page 268.
Files Remain Compressed during Backup and Restore
Novell Storage Management Services (SMS) backs up and restores compressed files in their
compressed state. It does not compress uncompressed files for backup; they are stored and restored
in their uncompressed state. For more information, see Section 21.10, “Backing Up Compressed
Files,” on page 279.
21.1.4
Factors Affecting Compression
Typically, files that have been inactive for a specified period of time set in the Days Untouched before
Compression parameter are eligible for compression.
The Immediate Compression attribute for a file or directory can also be used to identify files for
compression:
 Files residing in a directory marked for immediate compression
 Files residing in subdirectories of a directory marked recursively for immediate compression
 Individual files that are marked for immediate compression
Several factors prevent an uncompressed file from being compressed, even if it meets inactivity
criteria:
 The file has not been inactive for a period longer than the value of the Days Untouched before
Compression parameter when the compression daily check begins and compares the file’s time
stamp to the starting time. The daily check is controlled by the Compression Daily Check
Starting Hour parameter.
 The file is flagged with a Don’t Compress (Dc) attribute. For information, see Section 21.7,
“Configuring Compression Preferences for Directories and Files,” on page 275.
 The file is an NSS sparse file, so its physical storage size is already minimized. A sparse file
contains numerous contiguous zeros that NSS stores in a special way to conserve space.
 The amount of space freed by compressing a file does not meet the specified minimum
reduction criteria. For example, if the Minimum Compression Percentage Gained parameter is
set to 20%, a file would not be compressed if compression reduced its file size by only 10%.
 The file compression service is suspended. For information, see Section 21.4, “Suspending
Compression for Volumes or Files,” on page 275.
 The queue of files marked for compression is long and cannot be completed during the specified
hours set aside for compression activities. For information on how to modify the Compression
Daily Check Stop Hour, see Section 21.2, “Configuring Compression for a Server,” on page 268.
Managing Compression on NSS Volumes
267
 The volume does not contain enough space to hold both the original version and the compressed
version of the file while compression occurs.
 The file has been deleted, and the Deleted File Compression Option parameter does not allow
compression of deleted files. For information on how to modify this setting, see Section 21.2,
“Configuring Compression for a Server,” on page 268.
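The minimum-gain factor above can be sketched as a simple arithmetic check. This is an illustrative shell sketch, not an NSS interface; the function name is hypothetical, and the 20% value is the default of the Minimum Compression Percentage Gained parameter.

```shell
# Hypothetical sketch of the minimum-gain rule; not an NSS interface.
MIN_GAIN=20   # server default for Minimum Compression Percentage Gained

meets_minimum_gain() {
  # $1 = original size in bytes, $2 = compressed size in bytes
  local original=$1 compressed=$2
  # integer percentage reduction achieved by compression
  local gained=$(( (original - compressed) * 100 / original ))
  [ "$gained" -ge "$MIN_GAIN" ]
}

meets_minimum_gain 1000 900 && echo "keep compressed" || echo "leave uncompressed"  # prints "leave uncompressed" (10% < 20%)
meets_minimum_gain 1000 700 && echo "keep compressed" || echo "leave uncompressed"  # prints "keep compressed" (30% >= 20%)
```

As in the example in the text, a file that shrinks by only 10% against a 20% threshold is left uncompressed.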
21.1.5
Factors Affecting Decompression
Decompression occurs as needed to support file access, but other factors affect whether the
uncompressed version or compressed version of the file remains in the volume after the access. The
files remain compressed in the following cases:
 The percentage of free disk space available on the volume is insufficient to allow a
decompressed file to remain in its uncompressed state.
 The Convert Compressed to Uncompressed Option parameter requires that compressed files
always remain compressed.
 The file was opened for the first time for viewing only and the Convert Compressed to
Uncompressed Option parameter requires that the file must be opened at least twice for viewing
or opened once for modification before the file remains uncompressed after access.
Whenever you open a compressed file, NSS decompresses the file, but it keeps the compressed
copy of the file while the file is open. The first time you open a compressed file for viewing only,
NSS discards the decompressed copy of the file when you close the file. The compressed file
remains on the system. If you open the file a second time for viewing only, the file is considered
active. When you close the file, NSS keeps the uncompressed file and discards the compressed
copy of the file. If you modify the file and save it, NSS saves the uncompressed file, then discards
the compressed file.
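The open-and-close behavior described above amounts to a small decision rule. The following shell sketch is illustrative only; the helper is hypothetical and mirrors the default Convert Compressed to Uncompressed Option setting (1).

```shell
# Hypothetical sketch of which copy NSS keeps after a compressed file is
# closed, under the default Convert Compressed to Uncompressed Option (1).
keep_after_close() {
  # $1 = number of view-only opens so far, $2 = "yes" if the file was modified
  local view_count=$1 modified=$2
  if [ "$modified" = "yes" ]; then
    echo "keep uncompressed"   # changes saved: compressed copy is discarded
  elif [ "$view_count" -ge 2 ]; then
    echo "keep uncompressed"   # second view-only open: file is considered active
  else
    echo "keep compressed"     # first view-only open: decompressed copy is discarded
  fi
}

keep_after_close 1 no    # prints "keep compressed"
keep_after_close 2 no    # prints "keep uncompressed"
```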
21.1.6
Monitoring Compression Activity
Monitor compression activity using the nss /compscreen command. For instructions, see
Section 21.8, “Using NSS Commands to Configure and Monitor Compression,” on page 278.
21.2
Configuring Compression for a Server
The server’s compression parameters govern compression behavior for all NSS volumes on your
server. The server-level settings apply to all files and directories in compression-enabled NSS
volumes, but some settings can be overridden by individual file or directory attributes.
Before you set parameters, make sure you understand how compression works for NSS. For
information, see Section 21.1, “Understanding Compression,” on page 263.
 Section 21.2.1, “Understanding Server-Level Compression Parameters,” on page 268
 Section 21.2.2, “Configuring Server-Level Compression Parameters with Commands,” on
page 272
21.2.1
Understanding Server-Level Compression Parameters
The following table describes each compression parameter, its purpose, supported values, and
default value.
268
OES 11 SP2: NSS File System Administration Guide for Linux
Table 21-1 Explanation of Compression Parameters
Parameter
Description
Days Untouched Before Compression
Specifies the number of days the system waits after a file was last accessed
before it is compressed. The parameter uses the date the file was last
accessed for reading or writing to determine if a file is inactive, and therefore,
eligible for compression. When background compression starts, it first
evaluates which files meet this inactivity requirement to determine which files
are to be compressed during the compression period.
To effectively stop compression for a volume, set the elapsed time very high.
Eventually, files are decompressed and remain uncompressed because they
never cross the inactivity threshold.
Range: 0 to 100000
Default: 14
Compression Daily Check Starting Hour
Specifies the hour when you want the file compressor to start scanning enabled
volumes for files that need to be compressed and to compress them.
If the Compression Daily Check Stop Hour parameter is the same as the
Compression Daily Check Starting Hour, then the file compressor starts
checking every day at the Compression Daily Check Starting Hour time and runs as
long as necessary to finish all files that meet the compressible criteria.
Range: 0 to 23
Hours are specified by a 24-hour clock: (0=midnight; 23=11 p.m.).
Default: 0
Compression Daily Check Stop Hour
Specifies the hour when you want the file compressor to stop scanning enabled
volumes for files that need to be compressed and to stop compressing them.
Range: 0 to 23
Hours are specified by a 24-hour clock: (0=midnight; 23=11 p.m.).
Default: 6
Enable File Compression
Specifies whether file compression is enabled or suspended for all volumes
where the Compression attribute is enabled. After an NSS volume’s
Compression attribute is enabled, it cannot be turned off, because the volume
contains compressed files and metadata about compression. The server-level
Enable File Compression parameter lets you suspend the compression of
additional files on the server’s compressed volumes.
While file compression is suspended, files that would have been compressed
are queued for compression, then are compressed only when (or if) the Enable
File Compression parameter is reset to On. Files that are already compressed
remain compressed, unless they are decompressed when they are opened and
used.
Range: On (default) or Off
The On setting allows file compression activity to occur on volumes where the
Compression attribute is enabled. It does not enable the Compression attribute
on the server’s volumes.
The Off setting suspends compression on volumes where the Compression
attribute is enabled. Immediate compression requests are queued until the
value is reset to On, when the files meeting criteria are compressed. The Off
setting does not disable the Compression attribute on individual volumes, and it
does not prevent you from enabling the Compression attribute for a volume.
Default: On
Minimum Compression Percentage Gain
Sets the minimum percentage a file must compress to remain in a compressed
state.
Range: 0 to 50
Default: 20
Maximum Concurrent Compressions
Specifies the maximum concurrent or simultaneous compressions allowed.
Range: 1 to 8
Default: 2
Convert Compressed to Uncompressed Option
Specifies what the file system does with an uncompressed version of a file after
the server has decompressed it.
IMPORTANT: Before a compressed file can be opened, there must be
sufficient space available on the volume for the uncompressed and
compressed copies of the file to coexist while the file is open.
Range: 0, 1 (default), or 2
0 = Always leave the file compressed.
While the file is open, both the uncompressed and compressed copies of the
file coexist on the volume. If the file is closed without changes, the
uncompressed copy of the file is discarded. If changes are saved, the
compressed copy of the file is discarded. After the modified file is closed, it is
queued for immediate compression. Sufficient space must be available for both
the compressed and uncompressed copies of the file to temporarily coexist on
the volume in order for the compression to occur. After successful
compression, the uncompressed copy of the modified file is discarded.
1 = Leave the file compressed until second access if it is read only once during
the time specified by the Days Untouched Before Compression parameter. This
is the default behavior for compression.
While the file is open, both the uncompressed and compressed copies of the
file coexist on the volume. The first time that the file is closed without changes
in the specified period, the uncompressed copy of the file is discarded. The
second time that the file is closed without changes in the specified period, the
compressed copy of the file is discarded. If changes are saved, the
compressed copy of the file is discarded. The uncompressed file remains
uncompressed until it meets requirements for being compressed.
2 = Always leave the file uncompressed.
While the compressed file is open, both the uncompressed and compressed
copies of the file coexist on the volume. When the file is closed or when
changes are saved, the compressed copy of the file is discarded. The
uncompressed file remains uncompressed until it meets requirements for being
compressed.
Decompress Percent Disk Space Free to Allow Commit
Specifies the percentage of free disk space required on a volume for file
decompression to permanently change compressed files to decompressed.
This parameter prevents newly decompressed files from filling up the volume.
Range: 0 to 75
Default: 10
Decompress Free Space Warning Interval
Specifies the time between alerts when the file system is not changing
compressed files to decompressed because of insufficient disk space.
Range: 0 seconds to 29 days 15 hours 50 minutes 3.8 seconds
Setting the interval to 0 turns off the alert.
Default: 31 minutes 18.5 seconds
Deleted Files Compression Option
Specifies whether and when deleted files are compressed.
Range: 0, 1, or 2
0 = Do not compress deleted files
1 = Compress deleted files the next day
2 = Compress deleted files immediately
Default: 1
21.2.2
Configuring Server-Level Compression Parameters with Commands
Use the following commands to modify server-level compression parameters. Issue the commands at
the NSS Console (nsscon) as the root user. For details about each parameter, see Section 21.2.1,
“Understanding Server-Level Compression Parameters,” on page 268.
Command
Values
nss /DaysUntouchedBeforeCompression=value
Range: 0 to 100000 (in days)
Default: 14
nss /CompressionDailyCheckStartingHour=value
Range: 0 to 23
Hours are specified by a 24-hour
clock: (0=midnight; 23=11 p.m.).
Default Value: 0
nss /CompressionDailyCheckStopHour=value
Supported Values: 0 to 23
Hours are specified by a 24-hour
clock: (0=midnight; 23=11 p.m.).
Default Value: 6
nss /(No)EnableFileCompression
Supported Values: On (default) or Off
Default Value: On
To enable compression, enter nss /EnableFileCompression. To disable
compression, enter nss /NOEnableFileCompression.
nss /MinimumCompressionPercentageGain=value
Supported Values: 0 to 50
Default Value: 20
nss /MaximumConcurrentCompressions=value
Supported Values: 1 to 8
Default Value: 2
nss /ConvertCompressedToUncompressedOption=value
Supported Values: 0, 1, or 2
0 = Always leave the file compressed
1 = Leave the file compressed until
second access if it is read only once
during the time specified by the Days
Untouched Before Compression
parameter
2 = Always leave the file
uncompressed
Default Value: 1
nss /DecompressPercentDiskSpaceFreeToAllowCommit=value
Supported Values: 0 to 75
Default Value: 10
nss /DecompressFreeSpaceWarningInterval=value
Supported Values: 0 seconds to 29
days 15 hours 50 minutes 3.8 seconds
Setting the interval to 0 turns off the
alert.
Default Value: 31 minutes 18.5
seconds
nss /DeletedFilesCompressionOption=value
Supported Values: 0, 1, or 2
0 = Do not compress deleted files
1 = Compress deleted files the next
day
2 = Compress deleted files
immediately
Default Value: 1
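The command table above can be turned into a short checklist for configuring a server in one pass. The sketch below only prints an example batch of settings (the values are illustrative, not recommendations); enter the printed commands at the nsscon prompt as the root user.

```shell
# Print an illustrative batch of server-level compression settings.
# These are the nss console commands from the table above; enter them
# at the nsscon prompt as root. Values shown are examples only.
settings='nss /DaysUntouchedBeforeCompression=30
nss /CompressionDailyCheckStartingHour=1
nss /CompressionDailyCheckStopHour=5
nss /MinimumCompressionPercentageGain=20
nss /EnableFileCompression'

printf '%s\n' "$settings"
```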
21.3
Configuring a Volume for Compression
To use compression on a volume, set the volume’s Compression attribute to On. You can set the
Compression attribute when you create a new volume or enable the attribute for an existing noncompressed volume.
After you enable compression for a volume, you can suspend compression, but you cannot turn the
Compression attribute off. For more information, see Section 21.4, “Suspending Compression for
Volumes or Files,” on page 275.
IMPORTANT: You cannot use file compression on a volume on a CD or DVD drive.
 Section 21.3.1, “Enabling Compression for a New Volume,” on page 274
 Section 21.3.2, “Enabling Compression for an Existing Volume,” on page 274
21.3.1
Enabling Compression for a New Volume
When you create a new volume, simply select the Compression check box when you set the volume’s
attributes. For information, see Section 18.3, “Creating Unencrypted NSS Volumes,” on page 216.
21.3.2
Enabling Compression for an Existing Volume
1 In iManager, click Storage > Volumes to open the Volumes page.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 From the Volumes list, select the volume that you want to manage.
4 Click Properties > Attributes.
This opens the Volume Properties to the Attributes page.
5 Select the Compression check box to enable file compression for the selected volume.
IMPORTANT: After compression is enabled for a volume, you cannot deselect the Compression
check box to turn off compression. You can suspend compression or restore uncompressed data
to an uncompressed volume. For information, see Section 21.4, “Suspending Compression for
Volumes or Files,” on page 275 or Section 21.6, “Restoring Data to an Uncompressed Volume,” on
page 275.
6 Click Apply or OK to save the change, or click Cancel to back out of the process.
If you click Apply, iManager saves the change and remains on the Volumes page. If you click OK,
iManager saves the change and takes you to the main Storage page. If you do not click Apply or
OK, the setting is not implemented; the volume remains uncompressed.
21.4
Suspending Compression for Volumes or Files
You can temporarily suspend file compression for all volumes by using the server-level Enable File
Compression parameter. While file compression is suspended, files that would have been
compressed are queued for compression, then compressed when compression is re-enabled. For
information, see Section 21.7, “Configuring Compression Preferences for Directories and Files,” on
page 275.
You can suspend compression for a file by setting its Don’t Compress flag. For information, see
Section 21.7, “Configuring Compression Preferences for Directories and Files,” on page 275.
21.5
Disabling Compression for a Volume
After compression is enabled for an NSS volume, you cannot deselect the Compression attribute check
box to turn off compression. Use one of the following methods to disable compression for a volume.
 To effectively stop compression for a volume, set the Days Untouched Before Compression
parameter’s elapsed time to the maximum value. Eventually, files are decompressed as they are
used and remain uncompressed because they never cross the threshold for inactivity.
 Restore the data to an uncompressed volume. For information, see Section 21.6, “Restoring Data
to an Uncompressed Volume,” on page 275.
 Set Enable File Compression to Off. Eventually, files are decompressed as they are used and
remain uncompressed.
21.6
Restoring Data to an Uncompressed Volume
1 Decompress the volume data.
2 Back up the uncompressed data.
3 Create a new volume with the Compression attribute disabled.
4 Restore the old volume contents as uncompressed data from your backup media.
21.7
Configuring Compression Preferences for Directories and
Files
File and directory compression preferences override the compression settings for the volume and
server.
 Section 21.7.1, “Using the Novell Client,” on page 276
 Section 21.7.2, “Using ATTRIB,” on page 276
21.7.1
Using the Novell Client
1 From a workstation, click the Novell Client icon (the red N in the notification area), select Novell
Map Network Drive, then map a drive to the NSS volume by using the login and password of the
Administrator user.
2 Use either of the following methods to open the Novell Info (NetWare Info in older client versions)
dialog box for the file or directory you want to manage:
 In a file manager, navigate to the directory or file you want to manage. Right-click the
directory or file, select Properties, then select the Novell Info (NetWare Info in older client
versions) tab in the Properties window.
 Click the Novell Client icon, select NetWare Utilities > Object Properties, navigate to and select
the directory or file you want to manage, click OK, then select the Information tab in the
NetWare Services window.
3 Do any of the following:
 Select its check box to enable (set) the Don’t Compress or Immediate Compression attribute.
 Deselect its check box to disable (clear) the Don’t Compress or Immediate Compression
attribute.
4 Click OK or Apply to accept the changes, or click Cancel to back out of the process.
21.7.2
Using ATTRIB
Use the ATTRIB utility at the Linux terminal console prompt to view or modify file and directory
attributes for compression on NSS volumes where compression is enabled.
Syntax
attrib [options] [filename]
Options
If both the set and clear options are selected, the clear option is completed before the set option. If the
filename is not specified, the operation is completed on the current directory.
Table 21-2 Operations Options for the ATTRIB Utility
Option
Description
-s, --set=ATTRIBUTES
Sets the attributes on the specified file or directory.
-c, --clear=[ATTRIBUTES | all]
Clears the attributes on the specified file or
directory.
Compression Attributes Options
Table 21-3 Compression Attributes Options for the ATTRIB Utility
Attribute
Description
dc
Don’t Compress keeps data from being compressed. This attribute
overrides settings for automatic compression of files not accessed
within a specified number of days. (Applies to files: Yes. Applies to
directories: No.)
ic
Immediate Compression sets data to be compressed as soon as a
file is closed. If applied to a directory, every file in the directory is
compressed as each file is closed. (Applies to files: Yes. Applies to
directories: Yes.)
The files in the specified directory are compressed as soon as the
operating system can perform the operation after the file is closed.
This does not apply to the directory’s subdirectories and the files in
them.
For example, to set the Don’t Compress attribute for all files in the current directory, enter
attrib --set=dc
To clear the Immediate Compression attribute from the /usr/course/winter/students.sxi file,
enter
attrib --clear=ic /usr/course/winter/students.sxi
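Because the Immediate Compression attribute does not recurse into subdirectories, marking a whole tree takes one attrib call per directory. The sketch below only prints the commands it would run, which keeps it safe to preview; the helper name is hypothetical, and only the documented --set=ic option is used. Remove the echo to actually apply the attribute.

```shell
# Emit the attrib commands that would mark every directory in a tree for
# Immediate Compression (the ic attribute does not recurse on its own).
# The echo makes this a dry run; drop it to apply the attribute for real.
mark_tree_ic() {
  local root=$1
  find "$root" -type d | while read -r d; do
    echo attrib --set=ic "$d"
  done
}

# Dry run against a scratch tree (illustrative):
demo=$(mktemp -d)
mkdir "$demo/projects"
mark_tree_ic "$demo"
```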
Viewing Compression Status for Files and Directories
Enter attrib without options to show the compression attribute information for a specified file or for
all files in the directory. You cannot modify Status attributes.
Table 21-4 Compression Status for the ATTRIB Utility
Status
Description
cc
Cannot Compress (status display only) displays if the file cannot be
compressed because of limited space savings.
cm
Compressed (status display only) indicates whether the file is currently
stored in compressed format.
To view the compression status of a file, enter the following at the server console:
attrib filename
Replace filename with the path to the file. For example, to view the attributes of the /usr/course/
winter/students.sxi file, enter
attrib /usr/course/winter/students.sxi
To view the compression attributes of all files in the current directory, enter
attrib
21.8
Using NSS Commands to Configure and Monitor
Compression
NSS offers the following commands for configuring and monitoring compression. Issue the
commands at the NSS Console (nsscon) as the root user.
Table 21-5 Compression Management Commands
Command
Description
nss /Compression=volume_name
Enables the Compression attribute for the specified
volume.
nss /Compression=all
Enables the Compression attribute for all volumes
on the server.
nss /StopNormalCompression
Stops all queued compression for files, based on
the compression triggered by a file open or close.
nss /BGCompression
Allows compression to occur in the background at
any time, instead of only within specified hours.
nss /NoBGCompression
Stops background compression and clears all the
enqueued background compression requests.
Allows compression to occur only within the
specified hours.
nss /CompressionDailyCheckStartingHour
Starts scanning each enabled volume for files that
need to be compressed at the specified hour
(Range = 0 - 23, where 0=midnight, 23=11p.m).
nss /CompressionDailyCheckStopHour
Ends scanning each enabled volume for files that
need to be compressed at the specified hour
(Range = 0-23, where 0=midnight, 23=11p.m).
NOTE: If CompressionDailyCheckStopHour is
equal to CompressionDailyCheckStartingHour, the
file compressor starts checking every day at the
CompressionDailyCheckStartingHour time and runs
as long as necessary to finish all files that meet the
compressible criteria. [Value=6, Range=0-23].
nss /DaysUntouchedBeforeCompression
The number of days to wait after a file was last
accessed before automatically compressing it.
[Value=14, Range=0-100000].
nss /DeletedFilesCompressionOption
Options to compress the deleted files. 0 = do not
compress the files, 1= compress next day,
2=compress immediately. [Value=1, Range=0-2].
nss /(No)EnableFileCompression
Allows file compression to occur on compression
enabled volumes. If disabled, the compression
does not happen. Immediate compression requests
are queued until compression is allowed.
[Value=ON].
nss /MaximumConcurrentCompressions
Maximum number of simultaneous compressions
allowed by the system (simultaneous compressions
can only occur if there are multiple volumes).
[Value=2, Range=1-8].
nss /MinimumCompressionPercentageGain
Minimum percentage a file must compress in order
to remain compressed. [Value=20, Range=0-50].
21.9
Repairing Compressed Volumes with the Compfix Utility
Use the Compfix utility to repair compression information for compressed volumes or to clear the
Cannot Compress attribute for files in the compressed volume. For information, see Section B.2,
“compfix,” on page 407.
21.10
Backing Up Compressed Files
When you back up a compressed volume, files are written to the backup media in compressed or
uncompressed format, according to how they are currently stored. To back up compressed files in
uncompressed format, you must decompress the files first, then back up the files.
When you recover a volume, the files are restored in their saved format to the destination volume. If
you try to restore a compressed file to a volume without compression, the file is not readable. For
information, see “Backing Up Compressed files” in the OES 11 SP2: Storage Management Services
Administration Guide for Linux, or see your third-party backup application’s documentation.
22
Managing Space Quotas for Volumes,
Directories, and Users
This section describes how to manage space quotas for volumes, directories, and users of Novell
Storage Services (NSS) volumes.
 Section 22.1, “Understanding Space Quotas,” on page 281
 Section 22.2, “Managing NSS Volume Quotas,” on page 282
 Section 22.3, “Managing Directory Quotas,” on page 284
 Section 22.4, “Managing User Space Quotas,” on page 288
22.1
Understanding Space Quotas
You can control how space is allocated in an NSS pool or volume by restricting the amount of space
available to a particular volume, directory, or user. These space restrictions, or quotas, work
independently; when multiple constraints apply, the lowest value is the most restrictive. NSS
allocates space as it is needed; a quota does not reserve the space.
If you set a quota to a value equal to or less than the current space in use for the specified volume,
directory, or user, users cannot add files until enough files are deleted to free space. Users can
continue to open existing files that they are authorized to access, but they cannot save changes to
them.
Quotas restrict the actual physical space that the volume, directory, or user is allowed to consume.
When enforcing quotas, NSS considers only the actual physical blocks consumed (in 4 KB blocks), not
the file’s logical size. If you have sparse files or compressed files, only the actual physical space they
consume is counted against the quota. In order for a compressed file to be uncompressed, there must
be enough space available in the most restrictive of the quotas set (whether volume, directory, or
user) to accommodate the decompression process and the uncompressed file size. Otherwise, the
user is not able to open the file. For more information about the space requirements for compressing
and decompressing files, see Section 21.1.5, “Factors Affecting Decompression,” on page 268.
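A file’s quota charge can therefore be estimated by rounding its physical size up to whole 4 KB blocks, as described above. A minimal sketch; the helper name is illustrative, not an NSS interface.

```shell
# Round a physical size up to whole 4 KB blocks, the unit NSS charges
# against quotas. Illustrative helper, not an NSS interface.
quota_charge() {
  local physical_bytes=$1
  echo $(( ( (physical_bytes + 4095) / 4096 ) * 4096 ))
}

quota_charge 1       # prints 4096 (one 4 KB block)
quota_charge 8192    # prints 8192 (exactly two blocks)
```

A sparse or compressed file is charged only for the blocks it physically consumes, not for its logical size.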
As the amount of space consumed by a user’s files approaches the user’s space quota, the user should
use caution when saving files. Data loss can occur if the user attempts to save a file that is too large
for the remaining unused space.
To get correct disk utilization information, use NSSMU or nsscon /SpaceInformation. For an NSS
volume, used space = in-use space + purgeable space. If you use the df -h command or any other
non-NSS utility to get disk utilization information, the output can be misleading because it shows
only in-use space and does not take purgeable space into account.
WARNING: If storing a file would cause a quota to be exceeded, only part of the file is actually
saved, resulting in data corruption.
If the Salvage attribute is enabled for a volume, deleted files are not immediately purged from the
volume. Deleted files on the volume are not counted against quotas.
Volume Quotas
When you create an NSS volume, you have the option of setting a space quota for the volume or
letting it grow to the size of the pool. At any time thereafter, you can view and configure the volume
quota from the Storage > Volumes > Properties > Attributes tab in iManager. For information, see
Section 22.2, “Managing NSS Volume Quotas,” on page 282.
If you set a volume quota to grow to the pool size, you can also add segments to the volume’s pool to
expand its size, and therefore, expand the volume quota.
As a volume nears its quota, automatic controls can be configured to manage space. For information,
see Section 28.3, “Monitoring Quotas and Space Usage for NSS Pools and Volumes,” on page 344.
Directory Quotas
Directory quotas limit the space available in an individual NSS directory. To use directory quotas on
an NSS volume, you must first enable the Directory Quotas attribute.
For information about configuring attributes when you create a volume, see “Understanding Volume
Properties” on page 209.
For information about setting directory quotas, see Section 22.3, “Managing Directory Quotas,” on
page 284.
User Space Quotas
User space restrictions limit the space available to a user of the NSS volumes across all directories and
files owned by the user. For information about setting user space quotas, see Section 22.4.3,
“Configuring a User Space Quota,” on page 291.
You must first enable the User Space Quotas attribute on the NSS volume where you want to
configure user space restrictions. You can set the attribute at any time. For information about
configuring the User Space Quotas attribute for an existing volume, see “Modifying Attributes of an
NSS Volume” on page 222.
Example of Directory and User Space Quotas
Quotas are beneficial for systems where you want to control how your storage resources are used. In
environments such as a university, where you set up a common work area for a large number of
students and you want to limit the space that directory can consume, set a Directory Quota. You
might also limit the amount of space an individual user’s work can consume by setting the User
Quota.
For example, if a directory’s quota is 500 MB and the user’s quota is 1 GB, the user is limited to up to
500 MB in the specific directory. If the user can access multiple directories, each with a 500 MB quota,
the maximum space the user’s work can consume for all directories combined is limited to the user’s
1 GB administrative limit.
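The interplay of quotas in the example above amounts to taking the smallest applicable limit. A sketch of that rule; the helper is illustrative, not an NSS interface, and the values are in MB.

```shell
# The most restrictive quota wins: pick the smallest of the applicable
# limits. Illustrative helper, not an NSS interface; values are in MB.
effective_limit() {
  printf '%s\n' "$@" | sort -n | head -n 1
}

effective_limit 500 1024   # directory quota 500 MB, user quota 1 GB: prints 500
```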
22.2
Managing NSS Volume Quotas
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of volumes appears in the Volumes list.
3 In the Volumes list, select a volume that you want to manage.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. It opens to the Attributes
tab.
5 In the Quota field, do one of the following:
 No Quota: Select Allow Volume Quota to Grow to the Pool Size. NSS pools allow overbooking
so the administrative sum of all volumes’ quotas in a pool can exceed the physical pool
quota.
 Quota: Deselect Allow Volume Quota to Grow to the Pool Size, then specify the maximum size
to which you want to allow the volume to grow. The quota cannot exceed the pool size.
If you set the quota to a value less than the current volume size, you cannot save files to the
volume until you purge some files to make room.
6 Click Apply.
7 Click the Quota Usage tab to view the volume and pool space usage for the selected volume and
to verify the new setting.
22.3
Managing Directory Quotas
A directory quota limits the amount of space on a volume that can be consumed by all of the files and
folders in that directory. You can specify any positive value for the quota. If the current size of the
directory exceeds the specified limit, users cannot save data to the directory until space is cleared by
removing files from the directory. If the specified directory quota exceeds the volume quota, the
volume quota overrides the directory quota in determining whether data can be saved to the
directory as data is written. Because overbooking is allowed for directory and volume quotas, the
physical space limits for the pool might also prevent the directory from growing to its specified
maximum. Physical space limits for the pool override the volume and directory quotas.
For example, let’s assume that volume VOL1: has a volume quota of 50 GB in pool POOL1. You set the
directory quota to 100 GB for directory VOL1:\finance\sales. Users cannot save 100 GB of files in
the sales directory because NSS reaches the volume quota long before it reaches the directory quota.
Because of overbooking, other directories are competing for the 50 GB of space in the volume, and
VOL1: is competing for space with other volumes in POOL1. You might not be able to put as much as
50 GB of data in the sales directory.
Setting a directory quota on a volume automatically enables the Directory Quotas attribute for the
volume. As the administrator user, you can view and configure directory quotas with the Files and
Folders plug-in for iManager, NetStorage, and the Novell Client.
This section describes the following:
 Section 22.3.1, “Enabling or Disabling the Directory Quotas Attribute for an NSS Volume,” on
page 285
 Section 22.3.2, “Configuring Directory Quotas,” on page 285
 Section 22.3.3, “Removing a Directory Quota,” on page 287
 Section 22.3.4, “Removing All Directory Quotas for an NSS Volume,” on page 288
22.3.1 Enabling or Disabling the Directory Quotas Attribute for an NSS Volume
Setting a directory quota on a volume automatically enables the Directory Quotas attribute for
the volume. You can set the attribute at create time or at any time for an existing volume.
To set the Directory Quotas attribute for an existing volume:
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage to view a list of NSS volumes on the server.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select a volume that you want to manage.
Wait for the volume details to be displayed before you continue.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. It opens to the Attributes
tab.
5 On the Attributes tab, select or deselect the Directory Quotas check box, then click Apply.
6 If you enabled or disabled the Directory Quotas attribute, restart NCP2NSS by entering the
following at a terminal console prompt:
/etc/init.d/ncp2nss restart
22.3.2 Configuring Directory Quotas
 “Adding or Modifying a Directory Quota with iManager” on page 285
 “Adding or Modifying a Directory Quota with Novell NetStorage” on page 286
 “Adding or Modifying Directory Quotas with the Novell Client” on page 286
Adding or Modifying a Directory Quota with iManager
1 In iManager, select Files and Folders > Properties.
2 Click the Search icon, then browse to locate and select the folder you want to manage on an NSS
volume.
3 View the current status of the Directory Quota.
If a Directory Quota is set, the Restrict Size field is selected and the Limit field shows the quota
size in KB.
If the Directory Quota is not set, the Restrict Size field is deselected and the Limit field is dimmed
(grayed out).
4 Do one of the following:
 Add a Quota: On the Information tab, select Restrict Size to enable space restrictions for the
selected directory. In the Limit field, type the directory quota in KB. The value must be an
increment of 4 KB; that is, it must be divisible by 4 with no remainder.
 Modify an Existing Quota: In the Limit field, type the new directory quota, then select the
units (KB, MB, GB, or TB) from the drop-down list. The value must be an increment of
4 KB; that is, it must be divisible by 4 with no remainder.
 Remove a Quota: On the Information tab, deselect Restrict Size to disable space restrictions
for the selected directory. The Limit field is automatically dimmed (grayed out).
5 On the Information page, click Apply or OK to apply the changes.
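The 4 KB increment rule can be checked before entering a value by rounding a requested quota up to the next multiple of 4 KB. A minimal shell sketch (the function name is illustrative, not part of any Novell tool):

```shell
# Round a requested quota (in KB) up to the next 4 KB increment, as iManager requires.
round_quota_kb() {
  kb=$1
  rem=$((kb % 4))
  if [ "$rem" -eq 0 ]; then
    echo "$kb"
  else
    echo $((kb + 4 - rem))
  fi
}

round_quota_kb 1000   # prints 1000 (already a multiple of 4)
round_quota_kb 1001   # prints 1004 (rounded up)
```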
Adding or Modifying a Directory Quota with Novell NetStorage
Using Novell NetStorage, you can manage directory quotas for directories in an NSS volume from
any computer with a supported Web browser. This requires you to first configure a NetStorage server
in the same context. For information, see the OES 11 SP2: NetStorage Administration Guide for Linux.
To create or modify NSS directory quotas with NetStorage:
1 In a Web browser, connect to NetStorage.
For information, see Section 9.4, “Novell NetStorage,” on page 97.
2 Log in to NetStorage with the username and password of the Administrator user or equivalent
user.
3 Navigate to the directory you want to manage.
4 Right-click the directory, then select Properties.
5 Click the NetWare Info tab.
Although the option label refers to NetWare, you can use the option for your NSS and NCP
volumes on Linux.
6 Do one of the following to configure the directory quota:
 Space Restriction: Select Restrict Size, then specify the directory quota in KB. The value
must be a multiple of 4.
 No Space Restriction: Deselect Restrict Size to set the directory quota to Unlimited.
 Complete Space Restriction: Select Restrict Size, then specify the directory quota as 0 KB. If
the directory already contains files and subdirectories, the directory cannot grow beyond
the current space consumed.
7 Click Apply to accept the directory quota configuration.
Adding or Modifying Directory Quotas with the Novell Client
The Novell Client for Windows 2000/XP allows the Administrator user to manage directory quotas
for directories in an NSS volume from a Windows computer.
1 In the Novell Client, map a drive to the NSS directory you want to manage, or map to its parent
directory.
1a Right-click the Novell Client icon (the red N icon in the notification area), then select Novell
Map Network Drive.
1b Specify the network path to the directory. For example: 192.168.1.1/users.
1c Specify the username of the Administrator user or equivalent user, then click Map.
1d When prompted, enter the user’s password.
2 In a file browser, locate and right-click the directory you want to manage, then click Properties >
Novell Info (or NetWare Info on older clients).
3 In the Space Restriction field, click Change to open the Space Restriction dialog box.
4 Do one of the following to configure the directory quota:
 Space Restriction: Specify the directory quota in KB. The value must be a multiple of 4.
 No Space Restriction: Select No Space Restriction to set the directory quota to Unlimited.
 Complete Space Restriction: Select Complete Space Restriction to set the directory quota to 0
KB. If the directory already contains files and subdirectories, the directory cannot grow
beyond the current space consumed.
5 Click OK to accept the directory quota.
22.3.3 Removing a Directory Quota
1 In iManager, select Files and Folders > Properties.
2 Click the Search icon, then browse to locate and select the folder you want to manage on an NSS
volume.
3 On the Information tab, deselect Restrict Size to disable space restrictions for the selected folder.
4 Click Apply or OK to apply the changes.
22.3.4 Removing All Directory Quotas for an NSS Volume
To delete the directory quotas for all directories on an NSS volume without dealing individually with
each directory, you can simply disable the Directory Quotas attribute for the NSS volume.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of volumes appears in the Volumes list.
3 In the Volumes list, select a volume that you want to manage.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. It opens to the Attributes
tab.
5 On the Attributes tab, deselect the Directory Quotas check box, then click Apply.
6 Restart NCP2NSS by entering the following at a terminal prompt:
/etc/init.d/ncp2nss restart
22.4 Managing User Space Quotas
User space quotas are space restrictions that you can optionally set for users of an NSS volume.
Setting a user space quota on a volume automatically enables the User Space Quotas attribute for
the volume. The Users with Quotas page of the Storage plug-in reports the quota setting and
space usage for each user who has space restrictions in place for a specified volume, whether the user
has data stored on the volume or not.
NOTE: If a user is deleted, the user's files are not deleted and their ownership does not change,
which can cause quota inconsistencies.
This section describes the following:
 Section 22.4.1, “Setting the User Space Quotas Attribute for an NSS Volume,” on page 288
 Section 22.4.2, “Viewing User Space Quotas,” on page 289
 Section 22.4.3, “Configuring a User Space Quota,” on page 291
 Section 22.4.4, “Modifying a User Space Quota,” on page 292
 Section 22.4.5, “Deleting a User Space Quota,” on page 293
 Section 22.4.6, “Configuring User Space Quotas on Volumes After Upgrading or Migrating from
OES 1,” on page 293
22.4.1 Setting the User Space Quotas Attribute for an NSS Volume
The administrator user can view and modify the User Space Quotas attribute at any time.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage to view a list of its volumes.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select a volume that you want to manage, then wait for the page to refresh to
view its details.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. It opens to the Attributes
tab.
5 On the Attributes tab, select (enable) or deselect (disable) the User Space Quotas check box, then
click Apply.
22.4.2 Viewing User Space Quotas
The Administrator user, or a user with the Supervisor right for the volume, can view and configure
user space quotas with the Storage plug-in for iManager.
The Users with Quotas page reports the quota setting and space usage for each user who has space
restrictions in place for a specified volume, whether the user has data stored on the volume or not.
The All Users page reports the current usage of all users on the volume who have data stored on the
volume, whether the users have a quota on the volume or not.
The tracking of user space usage and quotas is an expensive operation in terms of performance. For
this reason, NSS does not begin tracking user space usage until you create the first user quota on the
volume. If you have never assigned a user quota to a volume, the All Users page has no information
to report. After you create the first user quota on the volume, NSS begins tracking all of the user
space used on the volume. From then on, the All Users page reports usage for all users with data on
the volume.
The Users with Quotas report and All Users report include the following information:
Table 22-1 Report of User Space Quotas

User Name: The distinguished user name (userid.context). For example: jsmith.geo.example,
asantiago.example.

Quota: Indicates the amount of space in MB that the user can use on the selected volume. The
user's data can grow only to the size of the quota or to the amount of available physical space on
the volume, whichever is less. NSS allocates space only as needed. You can set user quotas that
total more than the total available space on the volume; this is called overbooking. However, you
cannot set any single quota to be greater than the volume size.

Used: Reports the current amount of space used by the user's data in all directories combined on
the specified volume.

Available: Reports the free, unused space within the user's quota for the specified volume.
Because of overbooking, other users might be competing for a portion of this space.
Viewing User Quotas as the Administrator User
1 Log in to iManager as the Administrator user or equivalent user.
2 In iManager, click Storage > User Quotas.
3 Click the Volume browser, then select the volume that you want to manage.
Wait for the page to refresh to see the user space restrictions for all users with quotas for the
selected volume.
4 To view all users (with or without quotas) who have data stored on that volume, click All Users.
Viewing User Quotas as an Individual User
1 Log in to iManager as an individual user with your eDirectory username and password.
2 In iManager, click Storage > User Quotas.
3 Click the Volume browser, then select the volume that you want to manage.
Wait for the page to refresh to see the individual user’s space restrictions for the selected volume
on the Users with Quotas page.
22.4.3 Configuring a User Space Quota
The Administrator user or equivalent user can configure the user space quota for one user or
multiple users at a time.
1 In iManager, click Storage > User Quotas, then select the volume you want to manage.
2 Click New to open the Add User Quota dialog box.
3 Click the Browse icon to open the User object list, then browse to select one or more users who
you want to share the same quota.
4 In User Quotas, specify the upper limit of storage space that you want to assign to each of the
selected users.
5 Use the Units drop-down list to select the units to use for the quota you specified in Step 4.
Options are bytes, kilobytes, megabytes, and gigabytes.
6 Click Finish to apply the quota, or click Cancel to back out of the transaction.
Each of the usernames you selected now has user space quotas assigned to it.
22.4.4 Modifying a User Space Quota
The Administrator user or equivalent user can modify the quota for one user or multiple users at a
time. Modify the current quotas for the selected users by setting a new quota for all users, increasing
each quota by the same amount, or decreasing each quota by the same amount.
1 In iManager, click Storage > Volumes, select a server, then select the volume you want to manage.
2 Click Quotas > Users with Quotas.
3 Open the Edit User Quotas dialog box by doing one of the following:
 Single User: Select the name link for the user.
 Multiple Users: Select the User check box next to one or more user names whose user space
quotas you want to modify, then click Edit.
4 Specify the change that you want to apply to each of the selected users by doing one of the
following:
 Modify Quota: Specify a value to set each selected user's quota to the specified value, such
as 200 MB.
 Increase Quota: Use a plus (+) with the value to increase each selected user's quota by that
amount, such as +50 MB.
If any individual quota or the total of all quotas exceeds the volume size, the increase is
still processed, because space is allocated to users only as needed, on a first-come,
first-served basis.
If you attempt to increase the quota for a user with an Unlimited quota, no changes are
made for that user's quota. The setting remains Unlimited.
 Decrease Quota: Use a minus (-) with the value to decrease each selected user's quota by
that amount, such as -20 MB.
If a reduction takes a user's quota to 0 MB, then the user is fully restricted. The user cannot
add any new files no matter how many existing files are deleted. To remove the restriction,
set a non-zero quota or delete the quota for the user.
If a reduction takes a user's quota to a negative value, then an error message reports the
quota as an Invalid Quota. You must repeat the process to set a valid quota for the user.
5 Use the Units drop-down list to select the units to use for the quota you specified in Step 4.
Options are bytes, kilobytes, megabytes, and gigabytes.
6 Click Finish.
Each of the users you selected now has the new user quota assigned to his or her individual
account on this volume.
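The set/increase/decrease semantics described in the steps above can be sketched as a small helper; the function name is hypothetical and the values are in MB:

```shell
# Apply a quota change given as an absolute value, a +increase, or a -decrease (MB).
# Mirrors the rules above: a negative result is reported as an invalid quota.
# The function name is hypothetical.
apply_quota_change() {
  current=$1
  change=$2
  case $change in
    +*) new=$((current + ${change#+})) ;;
    -*) new=$((current - ${change#-})) ;;
    *)  new=$change ;;
  esac
  if [ "$new" -lt 0 ]; then
    echo "Invalid Quota"
  else
    echo "$new"
  fi
}

apply_quota_change 100 +50    # prints 150
apply_quota_change 100 -120   # prints Invalid Quota
```

Note that a result of exactly 0 is accepted, matching the behavior above: a 0 MB quota fully restricts the user rather than removing the restriction.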
22.4.5 Deleting a User Space Quota
The Administrator user or equivalent user can delete the user space quota for one or more users.
Deleting the user space quota for a user removes the space restriction for the user on the selected
volume. Of course, any directory quotas or volume quotas still apply.
1 In iManager, click Storage > User Quotas, then select the volume you want to manage.
2 Select the User check box next to one or more user names whose user space quota you want to
delete, then click Delete.
3 Do one of following:
 To confirm, click Yes to remove the quotas. The user accounts no longer have quotas
assigned to them for the selected volume.
 To back out of the process, click No.
To remove all user space quotas on the volume at the same time, you can simply disable the User
Space Quota attribute for the volume. For information about disabling the attribute, see
Section 22.4.1, “Setting the User Space Quotas Attribute for an NSS Volume,” on page 288.
22.4.6 Configuring User Space Quotas on Volumes After Upgrading or Migrating from OES 1
Beginning with OES 2, file ownership is properly assigned for access via the NCP protocol whether
the user is Linux-enabled or not. User quotas for NCP users no longer require that a user be Linux-enabled.
For OES 1, using user space quotas required usernames to be Linux-enabled with Linux User
Management (LUM). If an NCP user was not Linux-enabled on OES 1, any files the user created were
assigned as the root user identity (after verifying the username's trustee rights to do so, of course)
instead of the actual username.
If you migrate an NSS volume where NCP users were not Linux-enabled from OES 2 to OES 11 SP2,
you will observe that the file ownership is now recorded as the actual user. However, existing files
might still belong to the root user and are not counted against user space quotas.
IMPORTANT: Directories and files that were created while users were not Linux-enabled on the
OES 2 server are owned by the root user and are not counted against the user space quotas you set
unless you reassign file ownership to the individual users.
1 Use one of the following tools to change ownership of any files that the user normally uses that
were previously assigned to the root user as owner:
 Novell Client
 Files and Folders plug-in to iManager
You cannot use the Linux chown command to change the creator field for the NSS file system. It
changes the root user’s view of who is reported as the owner user in the Linux path, but the change
has no effect on the NSS metadata. The Linux chown command also does not modify the owner
group.
23 Salvaging and Purging Deleted Volumes, Directories, and Files
This section describes how to configure the salvage system for Novell Storage Services file systems
on Novell Open Enterprise Server 11 SP2.
 Section 23.1, “Understanding the NSS Salvage System,” on page 295
 Section 23.2, “Configuring the Purge Behavior for NSS,” on page 298
 Section 23.3, “Enabling or Disabling the Salvage Attribute for an NSS Volume,” on page 300
 Section 23.4, “Viewing, Salvaging, or Purging Deleted NSS Volumes in a Pool,” on page 301
 Section 23.5, “Salvaging or Purging Deleted Files with iManager,” on page 302
 Section 23.6, “Salvaging or Purging Deleted Files with Other Tools,” on page 304
23.1 Understanding the NSS Salvage System
 Section 23.1.1, “Volume Salvage versus File Salvage,” on page 295
 Section 23.1.2, “Trustees and Rights Handling for Salvaged Data,” on page 297
 Section 23.1.3, “Understanding Purging Triggers,” on page 297
23.1.1 Volume Salvage versus File Salvage
The NSS salvage system makes it possible to retain deleted files for a specified period of time or until
space is needed. The volume salvage and file salvage subsystems function separately.
For volume salvage, the NSS volumes are automatically retained on deletion. The deleted volume can
be salvaged for a period of time that is determined by the server-level Logical Volume Purge Delay
setting. Administrators with the Supervisor right can salvage or purge deleted volumes at any time
before the purge delay elapses.
Table 23-1 describes parameters that control the volume salvage behavior for NSS volumes. The
server-level settings apply to directories and files on all NSS volumes.
Table 23-1 Volume Salvage Parameters

Logical Volume Purge Delay (server-level; default: 345600 seconds, or 4 days): The automatic
purging delay that applies to deleted NSS volumes. Refer to Section 23.2.1, "Setting the Purge
Delay for All Deleted Volumes," on page 298.

Logical Volume Purge Delay After Continue (server-level; default: 900 seconds): Seconds to delay
purging a deleted volume after a continue. Refer to Section 23.2.1, "Setting the Purge Delay for
All Deleted Volumes," on page 298.

Logical Volume Purge Delay After Load (server-level; default: 7200 seconds): Seconds to delay
purging a deleted volume after an NSS load if the purge delay elapses while NSS is disabled. Refer
to Section 23.2.1, "Setting the Purge Delay for All Deleted Volumes," on page 298.
Auto Purging of Logical Volumes: Auto purging of logical volumes is governed by all of the
volume salvage parameters listed above. For example, suppose you delete a volume and the three
purge-delay parameters yield the following candidate purge times:
LogicalVolumePurgeDelay: 7:24:16 p.m.
LogicalVolumePurgeAfterContinue: 7:39:06 p.m.
LogicalVolumePurgeAfterLoad: 7:20:00 p.m.
The latest of these times is used for auto purging. Therefore, the volume is purged at 7:39:06 p.m.
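The latest-time rule is simply a maximum over the candidate purge times. A minimal shell sketch (the function name and the seconds-since-midnight encoding of the times above are illustrative):

```shell
# Pick the latest of the candidate purge times (here encoded as seconds since midnight).
latest_purge() {
  max=$1
  shift
  for t in "$@"; do
    if [ "$t" -gt "$max" ]; then max=$t; fi
  done
  echo "$max"
}

# 7:24:16 p.m. = 69856s, 7:39:06 p.m. = 70746s, 7:20:00 p.m. = 69600s
latest_purge 69856 70746 69600   # prints 70746, i.e. 7:39:06 p.m.
```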
Salvage for directories and files is controlled by each volume’s Salvage attribute. You can enable the
Salvage attribute when you create the volume, or modify the setting later in the volume’s properties.
Deleted directories and files are retained and can be salvaged until space is needed in the pool where
the volume resides, as determined by the pool’s available-space watermark settings. The
administrator user or any user who is a trustee with the Create right can salvage deleted directories
and files.
Table 23-2 describes parameters that control the file-system salvage behavior for NSS volumes.
Table 23-2 File Salvage Parameters

Salvage attribute (volume-level; default: Enabled): Refer to Section 23.3, "Enabling or Disabling
the Salvage Attribute for an NSS Volume," on page 300.

Immediate Purge of Deleted Files (server-level; default: Disabled): Refer to Section 23.2.2,
"Setting the Immediate Purge of Deleted Files for All NSS Volumes," on page 298.

Low and High Watermarks (pool-level; default: low 10%, high 20%; volume-level watermarks are
not available): Refer to Section 23.2.3, "Setting the Low and High Salvage Watermarks for
Automatically Purging Deleted Directories and Files," on page 299.

Purge Immediate file-system attribute (applies to the individual directory or file where the
attribute is enabled): Refer to Section 23.2.4, "Setting the Purge Immediate Attribute for a
Directory or File," on page 300.

23.1.2 Trustees and Rights Handling for Salvaged Data
When you salvage a volume, the data and metadata are exactly the same as they were at delete time,
with no changes. When salvaging deleted directories or files, the content, trustees, trustee rights, and
inherited rights filters are just as they were before the file was deleted. If the rights in the tree above
the salvaged file have changed, then the inherited rights for the salvaged deleted file are calculated
based on the current rights above it in the directory tree.
23.1.3 Understanding Purging Triggers
Purging is triggered by the following events. After deleted data enters the Purge state, whether
through a manually started purge or through autopurging, the deleted files can no longer be salvaged
(they do not return to a Salvageable state), even if you pause the autopurging process.
 The Logical Volume Purge Delay setting times out for a deleted volume. Autopurging begins
automatically and can take some time, depending on the size of the volume.
The elapsed time between the delete and the purge is called the purge delay. The server-level
LogicalVolumePurgeDelay parameter applies to all NSS volumes. For information, see
Section 23.2.1, “Setting the Purge Delay for All Deleted Volumes,” on page 298.
 The pool’s Low Salvage Watermark setting is reached, indicating that the amount of free space is
below the administrator-specified minimum. NSS automatically purges the deleted files and
directories for all volumes in the pool with a first deleted, first purged policy until the free space
reaches the high watermark, or until all of the existing deleted directories and files are purged,
whichever occurs first.
You configure thresholds for space-based purging on each pool. Low and high watermarks
determine when to begin and stop automatic purging of deleted files to free up space on the
pool. For information, see Section 23.2.3, “Setting the Low and High Salvage Watermarks for
Automatically Purging Deleted Directories and Files,” on page 299.
 A user or administrator purges the deleted file or directory.
Deleted directories and files can be purged by the administrator user or by any user who is a
trustee with the Erase right at any time before the automatic purge begins. For information, see
Section 23.5.3, “Purging the Deleted Files,” on page 304.
 An administrator purges the deleted volume.
Deleted volumes can be purged by an administrator with the Supervisor right to the volume. For
information, see Section 23.5.3, “Purging the Deleted Files,” on page 304.
 Salvage is disabled at the server level for directories and files.
Directories and files are purged immediately on deletion if the server-level
ImmediatePurgeOfDeletedFiles parameter is enabled. If the Salvage attribute is enabled for a
volume, this setting overrides it. For information, see Section 23.2.2, “Setting the Immediate
Purge of Deleted Files for All NSS Volumes,” on page 298.
 Salvage is disabled for the individual directory or file.
If the Salvage attribute is enabled for an NSS volume, you can set the Purge Immediate file-system attribute on individual directories and files so that they are purged immediately on
deletion. For information, see Section 23.2.4, “Setting the Purge Immediate Attribute for a
Directory or File,” on page 300.
23.2 Configuring the Purge Behavior for NSS
 Section 23.2.1, “Setting the Purge Delay for All Deleted Volumes,” on page 298
 Section 23.2.2, “Setting the Immediate Purge of Deleted Files for All NSS Volumes,” on page 298
 Section 23.2.3, “Setting the Low and High Salvage Watermarks for Automatically Purging
Deleted Directories and Files,” on page 299
 Section 23.2.4, “Setting the Purge Immediate Attribute for a Directory or File,” on page 300
23.2.1 Setting the Purge Delay for All Deleted Volumes
The Purge Delay setting for the NSS volume determines the amount of time (in seconds) that you can
still access the deleted volume before it is removed from the system. The default value for the Purge
Delay setting is 345600 seconds (4 days). The volume name is changed during delete so that a new
volume with the same name can be immediately created. The management tool used to delete the
volume should clean up any NetIQ eDirectory Storage objects at delete time. Use NSSMU or the
Storage plug-in to iManager to purge or salvage the deleted volume before the Purge Delay time
elapses.
To configure the Purge Delay time, issue the following command at the NSS Console (nsscon) as the
root user:
nss /LogicalVolumePurgeDelay=value
In this command, replace value with the actual number of seconds to delay the purge. For example, if
you want to change the Purge Delay time from the default of 4 days to 1 day, set the value to 86400 by
entering:
nss /LogicalVolumePurgeDelay=86400
The Purge Delay change command is not permanent when entered from the command line. You
must enter the command each time you restart the server. To make the new setting permanent, add
the command to the /etc/opt/novell/nss/nssstart.cfg file.
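Because the parameter takes seconds, it can help to compute the value from days rather than typing it by hand. A small shell sketch of the conversion (the nss command is only printed here, not executed):

```shell
# Compute the seconds value for a purge delay expressed in days, then print the
# nss console command that would apply it (1 day = 24 * 60 * 60 = 86400 seconds).
days=1
delay=$((days * 24 * 60 * 60))
echo "nss /LogicalVolumePurgeDelay=$delay"
```

To make the setting permanent, the printed line would be added to /etc/opt/novell/nss/nssstart.cfg, as described above.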
23.2.2 Setting the Immediate Purge of Deleted Files for All NSS Volumes
The salvage capability for directories and files can be turned on and off for NSS at the server level by
using the /(No)ImmediatePurgeOfDeletedFiles flag. By default, the setting is disabled (set to
NoImmediatePurgeOfDeletedFiles). You might want to enable this setting if you have Salvage
enabled for multiple volumes, but want to disable salvage across all of them without separately
changing the volumes’ Salvage attribute settings.
This server-level salvage setting overrides the settings for the volume-level Salvage attribute. It does
not affect deleted NSS volumes. Issue the commands at the NSS Console (nsscon) as the root user.
Table 23-3 Server-Level Salvage Parameter

nss /ImmediatePurgeOfDeletedFiles
nss /NoImmediatePurgeOfDeletedFiles

Supported values: Enabled or disabled (default). If this parameter is enabled, it overrides the
Salvage attribute setting for individual NSS volumes so that directories and files are immediately
purged on deletion.

23.2.3 Setting the Low and High Salvage Watermarks for Automatically Purging Deleted Directories and Files
Salvage watermarks are parameters associated with the salvage feature of NSS pools. Automatic
purging of deleted files begins when the pool’s low watermark is reached and continues until its high
watermark is reached, or until all existing deleted files and directories have been purged. Files and
directories are purged on a first-deleted, first-purged basis. If you have a deleted volume in the pool,
the deleted volume along with any deleted files and directories that existed when the volume was
deleted are not considered for this space-based purge.
When free space in the pool falls below a low watermark, NSS begins autopurging the deleted files.
When enough files are purged so that the free space increases to a percentage equal to or greater than
the high watermark, the autopurge stops. The autopurge also stops before the high watermark is
reached if there are no more deleted files or directories to be purged. Autopurging does not start
again until free space again drops below the low watermark.
Figure 23-1 How Autopurging Works

[Figure: When free space falls below the low watermark (default = 10%), autopurging of deleted
files begins; when purging raises free space to the high watermark (default = 20%), autopurging
stops. Between the watermarks, no autopurging occurs.]
The high and low watermarks can be configured only at a pool level. The default low watermark is
10% of the maximum pool size. The default high watermark is 20% of the maximum pool size. The
low watermark’s percentage can range from a minimum of 0% to a maximum of 98%. The high
watermark’s percentage can range from a minimum 2% to a maximum of 100%. The high and low
watermarks must be at least 2% apart from each other.
IMPORTANT: The pool’s watermark settings are enforced only at the pool level. Volume-level
watermarks are not supported. The Storage plug-in to iManager displays default values for volume
watermarks, but they have no effect. In order for a volume to benefit from the watermark protection,
set the volume’s size to grow to the size of the pool.
At the minimum setting of 0%, the low watermark activates the autopurge only when the pool is
totally out of free space. If the watermark is set this low, users are likely to get out-of-space errors
when they try to save files. Setting the low watermark to a percentage a little higher than 0%
guarantees that autopurging begins before free space is completely used, and users are less likely to
get out-of-space errors.
The low and high salvage watermarks for a pool are set on boot to the default levels. Commands to
modify the watermarks can be issued from the command line or placed in a startup file. Settings in
the startup file persist across server reboots. Commands issued at the command line persist until the
next reboot, or until the command is issued again, whichever occurs first.
Use the following commands to configure the high and low watermarks for pools. Issue the
commands in the NSS Console (nsscon) as the root user.
nss /PoolHighWaterMark=poolname:Percent/MB/GB
nss /PoolLowWaterMark=poolname:Percent/MB/GB
Replace the poolname with the name of an individual pool or with All to set the value for all pools.
Specify the watermark as a percentage of the maximum pool size, or as an absolute amount in MB or
GB. For example, to set the low watermark to 5% and the high watermark to 10% for pool p_users,
enter
nss /PoolHighWaterMark=p_users:10%
nss /PoolLowWaterMark=p_users:5%
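As noted above, commands placed in the startup file persist across server reboots. A minimal sketch of recording the same switches there follows; the startup file path and the helper function are assumptions, so verify the file location on your server before using it:

```shell
# Sketch: append watermark switches to the NSS startup file so that they
# persist across reboots. The default path below is an assumption; override
# it with the STARTUP_FILE variable if your server uses another location.
STARTUP_FILE="${STARTUP_FILE:-/etc/opt/novell/nss/nssstart.cfg}"

persist_pool_watermarks() {
    pool="$1"; low="$2"; high="$3"
    printf '/PoolLowWaterMark=%s:%s%%\n'  "$pool" "$low"  >> "$STARTUP_FILE"
    printf '/PoolHighWaterMark=%s:%s%%\n' "$pool" "$high" >> "$STARTUP_FILE"
}
```

For example, `persist_pool_watermarks p_users 5 10` records the same settings as the nsscon commands shown above, to be applied at the next server start.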
23.2.4
Setting the Purge Immediate Attribute for a Directory or File
The Purge Immediate file-system attribute flags a directory or file to be erased from the system as
soon as it is deleted. Purged directories and files cannot be recovered. When this attribute is enabled,
it overrides the salvage settings at the volume and server level. When it is disabled, the server and
volume salvage settings apply. In order to modify this setting, you must be the administrator user or
a user who is a trustee with the Erase right.
1 In iManager, select Files and Folders > Properties.
2 Click the Search icon to locate and select the directory or file you want to manage.
3 Click the Information tab to view the properties for the selected directory or file.
4 Scroll down to view the Attributes section, then select the Purge Immediate attribute to enable the
selected directory or file to be purged immediately on deletion, or deselect it to allow the salvage
settings to control the fate of the deleted directory or file.
5 Click Apply or OK to save your changes.
23.3
Enabling or Disabling the Salvage Attribute for an NSS
Volume
You can enable the Salvage attribute when you create the volume, or modify the setting later in the
volume’s properties.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select a volume that you want to manage.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. Use the Attributes
page to view or modify the attributes for the selected volume.
5 Select (enable) or deselect (disable) the Salvage attribute, then click Apply.
23.4
Viewing, Salvaging, or Purging Deleted NSS Volumes in a
Pool
If you delete a volume, NSS removes it from the pool. During the Purge Delay (by default, four days
after a volume is deleted), you can manually purge deleted volumes, view the volume contents,
transfer files from the deleted volume to other volumes, or salvage the entire volume. When you
salvage a volume, the data and metadata are exactly the same as they were at delete time, with no
changes. After the Purge Delay time elapses, NSS automatically purges the deleted volume from the
system and you can no longer access it. You can also manually purge any volumes you have deleted.
WARNING: If you delete an entire pool, all the volumes are deleted with it. You cannot restore a
deleted pool or any deleted volumes in it.
You can change the Purge Delay time to extend or reduce the time for the automatic purging cycle.
For information, see Section 23.2.2, “Setting the Immediate Purge of Deleted Files for All NSS
Volumes,” on page 298.
The Deleted Volumes option on the Pools page opens a separate Deleted Volumes page where you can
purge or salvage the deleted volumes for the pool. This option is only available if the selected pool
has deleted volumes on it.
To manage the deleted volumes in a pool:
1 In iManager, click Storage > Pools.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
A list of pools appears in the Pools list.
3 Select the pool that you want to manage.
Wait for the page to refresh and display the details. The Deleted Volumes button is active only
when there are deleted volumes in that list.
4 If the button is available, click Deleted Volumes.
This opens the Deleted Volumes page.
5 Select a volume from the Deleted Volumes list.
The browser refreshes to display details in the Details area.
 Pool: The pool name.
 Volume Was Deleted: The time stamp when the volume was deleted.
 Scheduled Purge Time: The time that the Purge Delay expires for this deleted volume and
the purging process is scheduled to begin.
 Current Deletion State: The deletion states are salvageable, purging, or paused.
6 Select one or more volumes, then perform one or more of these actions:
 Purge: Immediately begins the process of purging the selected volumes and their contents.
After this option is selected, the deleted volume can no longer be salvaged (it does not return
to a Salvageable state). A purge started this way cannot be stopped by clicking Pause Purge;
that option applies only to the autopurging process, and the two options are unrelated.
 Pause Purge/Restart Purge: Pause Purge disables autopurging for the selected volumes so
that purging does not begin automatically for a deleted volume when its Purge Delay time
elapses. During the purge delay or while autopurging is disabled, the deleted volume is
salvageable.
Restart Purge enables autopurging for the selected volumes. The deleted volume is purged
when the purge delay time elapses.
This option does not make it possible to salvage a deleted volume that has already entered a
Purge state.
 Salvage: Begins the restore process for deleted volumes you selected.
If you salvage a volume, you can assign a new name to that volume, or reuse the old one if
no current volume is using that name. A wizard opens to allow you to name the salvaged
volume.
The volume salvage process can slow the server response time, so you might want to do this
when there is less server activity.
 Close: Closes the Deleted Volumes page.
23.5
Salvaging or Purging Deleted Files with iManager
As an administrator, you can use the Files and Folders plug-in to iManager to salvage or purge
deleted files from an NSS volume where the Salvage attribute is enabled. When salvaging deleted
files, the file content, trustees, trustee rights, and inherited rights filter are just as they were before the
file was deleted. If the rights in the tree above the salvaged file have changed, then the inherited
rights for the salvaged deleted file are calculated based on the current rights above it in the directory
tree.
 Section 23.5.1, “Prerequisites,” on page 303
 Section 23.5.2, “Salvaging the Deleted Files,” on page 303
 Section 23.5.3, “Purging the Deleted Files,” on page 304
23.5.1
Prerequisites
 The NSS volume that you want to manage must be in the same tree where you are currently
logged in.
 You must have trustee rights for the file that you want to manage.
 The NSS volume must be configured for salvage at the time the files, directories, or volumes are
deleted in order for deleted files to be available. Enable the Salvage attribute by going to the
volume’s Attributes page (Storage > Volumes > Properties > Attributes), select Salvage, then click
OK.
 Deleted files are typically purged according to the Purge Delay settings on the server. When the
delay time elapses, the deleted data is no longer available for salvage.
 Deleted files can be salvaged by any trustee for the file with the Create right. If another user has
purged the deleted file, it is no longer available for salvage.
 Deleted files can be purged by any trustee for the file with the Erase right. If another user has
purged the deleted file, it is no longer available for salvage.
 If the Purge Immediate attribute is set for a file or folder, the file or folder is immediately and
permanently removed from the volume upon deletion.
23.5.2
Salvaging the Deleted Files
You can salvage a deleted file and restore it to the directory from which it was deleted if you are a
trustee of the file with the Create right. You can choose to overwrite any existing copies of the file in
that location, or to rename the deleted file before it is salvaged. Review the guidelines in
Section 23.5.1, “Prerequisites,” on page 303 to understand when deleted files are available for
salvage.
1 In iManager, click Files and Folders, then click Deleted Files to open the Deleted File page.
2 On the Deleted File page, use one of the following methods to locate the folder on an NSS
volume where the deleted file existed when it was deleted:
 Click the Search icon to browse and locate the folder, then click the name link of the folder to
select it.
 Click the History icon to select a folder from the list of folders that you recently accessed.
The Deleted Files report lists the deleted files in the folder and shows who deleted each file and
when it was deleted.
3 Browse the list of deleted files to locate the version of the file you want to salvage.
4 Select one or more deleted files that you want to salvage, then click Salvage.
NOTE: To successfully salvage files, the parent folders of the files that you intend to salvage must
exist. If they do not exist, salvage the parent folders first.
5 If a current file in the folder is named the same as the salvaged file, you are prompted to do one
of the following:
 Type a new name for the salvaged file, then click OK.
 Click OK to overwrite the current file with the salvaged file.
23.5.3
Purging the Deleted Files
You can purge a deleted file to remove it immediately from the volume if you are a trustee of the file
with the Erase right. Purged files can no longer be salvaged. Review the guidelines in Section 23.5.1,
“Prerequisites,” on page 303 to understand when deleted files are available for purging.
1 In iManager, click Files and Folders, then click Deleted Files to open the Deleted File page.
2 On the Deleted File page, use one of the following methods to locate the folder on an NSS
volume where the deleted file existed when it was deleted:
 Click the Search icon to browse and locate the folder, then click the name link of the folder to
select it.
 Click the History icon to select a folder from the list of folders that you recently accessed.
The Deleted Files report lists the deleted files in the folder and shows who deleted each file and
when it was deleted.
3 Browse the list of deleted files to locate the version of the file you want to purge.
4 Select one or more deleted files that you want to purge, then click Purge.
23.6
Salvaging or Purging Deleted Files with Other Tools
You can use any of the following methods to salvage or purge deleted files. To purge, the user must
be a trustee of the file with the Erase right. To salvage, the user must be a trustee of the file with the
Create right.
 Section 23.6.1, “Using NetStorage,” on page 304
 Section 23.6.2, “Using the Novell Client,” on page 305
23.6.1
Using NetStorage
Using NetStorage, the Administrator user, the Admin-equivalent user, and individual users can
purge and possibly undelete NSS files that were previously deleted on the server.
1 Access NetStorage.
For information, see Section 9.4, “Novell NetStorage,” on page 97.
2 In the left column, select the directory where the deleted files were located when they were
deleted.
3 Click View, then click Show Deleted Files.
4 Select the check box next to one or more files you want to undelete or purge.
5 Click File, then click Undelete or click Purge.
23.6.2
Using the Novell Client
Using the Novell Client for Windows 2000/XP/2003, Administrator users, Admin-equivalent users,
and individual users can purge and possibly undelete NSS files that were previously deleted on the
server.
1 Right-click the Novell Client icon (the red N) in the notification area to display the menu.
2 If you want to salvage a deleted file, click Novell Utilities > Salvage (or NetWare Utilities > Salvage
on older client versions), browse to locate the directory where the deleted file resided, then do
one of the following:
 To restore one or more deleted files, select the deleted files, then click Salvage File.
 To restore all deleted files in the directory, click Salvage All.
3 When you are done, click OK.
4 If you want to purge a deleted file, click Novell Utilities > Purge (or NetWare Utilities > Purge on
older client versions), browse to locate the directory where the deleted file resided, then do one
of the following:
 To purge one or more deleted files, select the deleted files, then click Purge File.
 To purge all deleted files in the directory, click Purge All.
 To purge the directory’s subdirectories and all deleted files in them, click Purge
Subdirectories.
5 When you are done, click OK.
You can also purge deleted files from NCPCON using the command purge volume. For more
information, see “Using NCPCON to Purge Deleted Files” in the OES 11 SP2: NCP Server for Linux
Administration Guide.
24
Managing Hard Links
This section describes how to configure, create, and delete hard links on Novell Storage Services
volumes on a Novell Open Enterprise Server 11 SP2 server.
 Section 24.1, “Understanding Hard Links,” on page 307
 Section 24.2, “Upgrading the Media Format for Hard Links Support,” on page 311
 Section 24.3, “Enabling or Disabling the Hard Links Attribute,” on page 311
 Section 24.4, “Creating a Hard Link Using Ln on an NSS Volume,” on page 313
 Section 24.5, “Creating a Hard Link Using a zLink API,” on page 314
 Section 24.6, “Creating a Hard Link for Testing Purposes,” on page 314
 Section 24.7, “Viewing Hard Links for a File,” on page 315
 Section 24.8, “Deleting a Hard Link,” on page 316
 Section 24.9, “Deleting a Primary Link,” on page 316
24.1
Understanding Hard Links
Novell Storage Services supports zero to 65,535 hard links per file on NSS volumes. Hard link
support for an NSS volume allows users to create multiple names for a single, existing file object in
the same directory or in multiple directories in the same NSS volume.
IMPORTANT: Hard links to directories, data streams, and extended attributes are not allowed.
The alternate names for the file object link to the primary file inode in the NSS file system. One file
can have different filenames in the same directory or multiple directories as long as all of the
directories reside on the same volume. It is not possible to create hard links from different volumes
and have them point to the same file.
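The shared behavior described above can be observed with standard Linux tools. The sketch below uses a temporary directory on a generic Linux filesystem for illustration; the same semantics apply on an NSS volume that has the Hard Links attribute enabled:

```shell
# Both names refer to the same inode, so they share the file content and
# metadata; only the name entries differ.
dir=$(mktemp -d)
echo "shared content" > "$dir/original.txt"
ln "$dir/original.txt" "$dir/alias.txt"   # create a second name for the file

stat -c '%i' "$dir/original.txt"          # inode number of the original
stat -c '%i' "$dir/alias.txt"             # same inode number
stat -c '%h' "$dir/original.txt"          # link count is now 2
```

Attempting the same `ln` across two different filesystems (or two different NSS volumes) fails, because a hard link cannot point to a file on another volume.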
This section describes the following:
 Section 24.1.1, “Hard Links and the Primary Link,” on page 308
 Section 24.1.2, “Hard Links and Linux Users,” on page 308
 Section 24.1.3, “Hard Links and File Ownership,” on page 308
 Section 24.1.4, “Hard Links and File System Trustees,” on page 309
 Section 24.1.5, “Hard Links and Directory Space Quotas,” on page 309
 Section 24.1.6, “Hard Links and User Space Quotas,” on page 309
 Section 24.1.7, “Hard Links and the Hard Links Attribute,” on page 310
 Section 24.1.8, “Hard Links and File Salvage and Purge,” on page 310
 Section 24.1.9, “Hard Links and DFS Move Volume,” on page 310
24.1.1
Hard Links and the Primary Link
Every file has what is considered to be the primary link, which is originally the path and filename
assigned at file creation time. The primary parent directory is originally the path and directory name
where the file is created. When you create hard links for the file, the file’s primary link and all of its
hard links share the file content by pointing to the file’s inode. All links share all of the file’s metadata
except for name information; each link accesses only its own filename information.
When multiple hard links exist for a file, you can delete the file’s primary link or its hard links, but
the file’s content and metadata are not deleted until the last remaining link is deleted. If you delete a
hard link, the link name is deleted, not the file. If hard links exist, whenever you delete a primary
link, the most recently created hard link automatically becomes the new primary link.
When the primary link is deleted and the status of primary is given to the next hard link, the
following occurs:
 The file owner does not change.
 The explicitly assigned file system trustee assignments, trustee rights, and file attributes for the
file do not change.
 If the new primary link is in a different directory, the file’s inherited rights filter applies to the
new parent directory, so effective rights for the file’s trustees can change. For information, see
Section 24.1.4, “Hard Links and File System Trustees,” on page 309.
IMPORTANT: Make sure to check the consequences to effective rights before deleting the
primary link.
 For the directory space quota, the total disk space used by the file is reassigned from the old
primary link’s parent directory to the new primary link’s parent directory. For information, see
Section 24.1.5, “Hard Links and Directory Space Quotas,” on page 309.
 For the user space quota, the total disk space used by the file continues to be associated with the
file’s assigned owner.
24.1.2
Hard Links and Linux Users
When hard links are used on NSS volumes, users must be Linux-enabled eDirectory users.
24.1.3
Hard Links and File Ownership
In the NSS file system, the user who creates a file is assigned as its owner at file creation time. File
ownership is not affected by transferring the primary link, creating hard links, or deleting hard links.
A file can have only one owner, even if it has multiple hard links.
File ownership rarely, if ever, changes, but an administrator or administrator equivalent user can
assign a new owner when necessary. Changing file ownership requires the Supervisor right for the
primary parent directory and the file. Use whatever tools you normally use to modify the file’s
ownership.
NSS uses the Novell trustee model to secure access to the file, not file ownership. For information
about how access control is affected by hard links, see Section 24.1.4, “Hard Links and File System
Trustees,” on page 309.
File ownership allows accounting of disk block usage per user and enables user space quotas to
operate effectively. For information, see Section 24.1.6, “Hard Links and User Space Quotas,” on
page 309.
File ownership has no effect on the directory space quotas. For information, see Section 24.1.5, “Hard
Links and Directory Space Quotas,” on page 309.
24.1.4
Hard Links and File System Trustees
Explicit file system trustee assignments, trustee rights, and attributes for a file are stored in the file’s
metadata where the information is shared equally by the primary link and all hard links. For users
who match the trustee, the file is visible along any path of the file’s primary link and hard links.
Explicit access security is enforced equally for any of these paths.
Inherited trustee assignments and trustee rights are inherited only through the primary link’s parent
directory. When calculating effective rights to a file, rights are granted based on explicit rights and
rights inherited from the primary link’s parent directory path.
If the primary link is deleted and reassigned, the inheritance changes to the settings of the newly
assigned primary link’s parent directory.
IMPORTANT: Make sure you verify the effective rights on the new primary link whenever you
delete a primary link.
24.1.5
Hard Links and Directory Space Quotas
If you set a directory space quota for a file’s primary parent directory, the file’s size is charged against
the quota. All metadata increases associated with hard links are included in the file’s total disk space
usage. For hard links in the primary parent directory, the file has an alternate name, but the charge
for space usage is not duplicated. For hard links in other directories, quotas for their parent
directories are not charged for space usage.
Whenever you delete a primary link, the most recently created hard link automatically becomes the
new primary link. At that time, the file’s size is subtracted from the space usage reported on an old
primary link’s parent directory and added to the space usage reported on the newly assigned
primary link’s parent directory. The file’s space usage is no longer charged against the directory
space quota on the old directory. If a quota is set on the new primary link’s parent directory, the file’s
space usage is charged against it.
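The accounting described above amounts to simple subtraction and addition. The sketch below uses illustrative numbers (not an NSS interface) to show the transfer of charged space when the primary link moves to a new parent directory:

```shell
# Illustrative accounting only: when the primary link moves, the file's size
# is subtracted from the old parent directory's usage and added to the new
# parent directory's usage. The numbers are hypothetical.
file_size=500
old_dir_usage=2000
new_dir_usage=300

old_dir_usage=$((old_dir_usage - file_size))
new_dir_usage=$((new_dir_usage + file_size))

echo "$old_dir_usage"   # 1500
echo "$new_dir_usage"   # 800
```

Note that the charge moves in full; it is never split between the two directories.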
When you delete a primary link, the transfer of responsibility for space usage to the new primary
parent directory is allowed even if the added file size overbooks the new directory’s space quota. NSS
allows the quota to be overbooked at transfer time, and thereafter enforces the directory quota
restrictions.
IMPORTANT: Make sure you verify the directory quota on the new primary link’s parent directory
whenever you delete a primary link.
24.1.6
Hard Links and User Space Quotas
User space restrictions are charged based on ownership of the file. Ownership is tracked with the
user’s GUID, not the username. If the username ever becomes invalid, the file continues to be
charged to the GUID. In a space usage report, the value of the GUID appears where a valid
username would normally be.
If other users create hard links to the file, they are not charged a quota on that file. The file size is
charged to the file owner’s user space quota, even if the owner no longer has the necessary rights to
access the file. If hard links exist, deleting the primary link does not delete the file, and the owner
continues to be responsible for the space used.
The file’s users can delete the primary link and hard links to the file only in directories where they
have the necessary access rights. Hard links from directories that the owner cannot access cause the
file to be retained, and the owner continues to be charged for its quota. The file continues to be charged
against the owner’s user space quota until the file itself is deleted or until the system administrator
reassigns another user as the file owner.
When file ownership is changed, the new owner’s user space quota is checked before the change of
ownership is allowed. If the file size will overbook the user space quota, the change is not allowed.
IMPORTANT: Make sure you verify the user space quota on the new owner whenever you change
ownership of a file.
24.1.7
Hard Links and the Hard Links Attribute
Hard links support is available on OES 2 and later.
A media upgrade is required. For instructions, see Chapter 4, “Upgrading the NSS Media Format,”
on page 41.
The hard links attribute for the NSS volume must be enabled to allow hard links to be created. For
information, see Section 24.3, “Enabling or Disabling the Hard Links Attribute,” on page 311.
24.1.8
Hard Links and File Salvage and Purge
The interaction between hard links and the salvage system depends on how many links exist for a
specified file. If a file has multiple links pointing to it, and one of the links is deleted, the hard link
name is not added to the salvage system, and cannot be recovered later via a salvage operation. When
the last and only remaining link to a file is deleted, that last name is a candidate for the salvage
system. If the Salvage Files attribute is enabled on the volume, and an immediate purge is not in
force, then the last link to be deleted is added into the salvage system. Under this final primary link
and filename, the file can be salvaged or purged until the file is autopurged from the system.
24.1.9
Hard Links and DFS Move Volume
If you use the Novell Distributed File Services Volume Move operation to move a volume that has
been upgraded to the new media format for hard links, consider the following guidelines:
 Before you create the DFS Move Volume job, make sure that NSS is set so that the new target
volume is automatically created with the upgraded media format for enhanced hard links. For
information and command options, see Section 4.3, “Upgrading the Media Format
Automatically for New NSS Volumes,” on page 45.
 If you moved the volume without enabling the new media format, you must upgrade the
volume to the new media format after the move completes successfully. For information and
command options, see Section 4.2, “Upgrading the Media Format for Existing NSS Volumes,” on
page 43.
 In the initial release of OES 2, the Move Volume Wizard does not have an option to enable the
Hard Links attribute for the new target volume. After the move is completed and the media
format is upgraded for enhanced hard links support, you must manually enable the Hard Links
attribute. For instructions, see Section 24.3, “Enabling or Disabling the Hard Links Attribute,” on
page 311.
24.2
Upgrading the Media Format for Hard Links Support
An enhanced NSS media format is available that provides improved support for hard links. After
you install or upgrade your operating system to Novell Open Enterprise Server 2, you can decide
whether to upgrade the media format for your NSS volumes to use the new metadata structure; some
restrictions apply. For information, see Section 4.1, “Guidelines for Upgrading the Media Format of
NSS Volumes,” on page 41.
24.2.1
New Metadata Structure Supports Up to 65,535 Hard Links for a File
NSS volumes can support up to 65,535 hard links for a file, regardless of the length of the filename
and the number of name spaces used. To use this level of hard link support, you must upgrade the
NSS volume to use a new metadata structure.
24.2.2
Old Metadata Structure Supports Limited Hard Links for a File
Previously, NSS volumes supported only a limited number of hard links for a file. These volumes use a
metadata structure that provides limited space for the file’s filenames across all name spaces.
With the old metadata structure, the number of hard links you can use for a file depends on the
length of the names you choose for a file, and how those names can be shared across the UNIX, Long,
DOS, and Macintosh name spaces. It is also limited by the amount of space available in the metadata
structure.
IMPORTANT: The sum total of all metadata for a file cannot exceed 2 KB. In the old metadata
structure, this includes all hard link names. The longer the names are, the fewer hard links you can
create.
If you reach the 2 KB metadata limit, you cannot create new hard links until you make room for them.
Use any of the following methods:
 Delete one or more hard links for the file.
 Create hard links for the file with shorter names that can be stored optimally in each name
space.
 Rename the original file with a shorter name that can be stored optimally in each name space.
24.3
Enabling or Disabling the Hard Links Attribute
The new media format for enhanced hard link support provides the Hard Links attribute for NSS
volumes. The Hard Links attribute must be enabled in order to create and manage hard links on an
NSS volume.
When you upgrade the NSS volume to use the new media format, if any old-style hard links are
detected, the Hard Links attribute is automatically enabled. Otherwise, the volume is upgraded, but
the Hard Links attribute is disabled. The attribute must be enabled before you can create hard links.
 Section 24.3.1, “Prerequisite,” on page 312
 Section 24.3.2, “Hard Links Attribute Commands,” on page 312
 Section 24.3.3, “Viewing the Hard Link Attribute Setting,” on page 312
24.3.1
Prerequisite
The NSS volume must be upgraded to the new media format for enhanced hard link support. For
instructions, see Chapter 4, “Upgrading the NSS Media Format,” on page 41.
24.3.2
Hard Links Attribute Commands
Use the commands in this section to enable or disable the Hard Links attribute for an NSS volume.
The Hard Links attribute cannot be set or viewed in NSSMU or in the Storage plug-in to iManager.
Issue the commands at the NSS Console (nsscon) as the root user.
nss /HardLinks=volumename
Enables the Hard Links attribute for the specified volume. This enables hard links to be created
on the volume.
nss /HardLinks=all
Sets the Hard Links attribute for all NSS volumes on the server. This enables hard links to be
created on any volume on the server. Any given hard link can point only to a file on the same
volume.
nss /NoHardLinks=volumename
Disables the Hard Links attribute for the specified volume. Existing hard links continue to
function, but no new hard links can be created on the specified volume.
nss /NoHardLinks=all
Disables the Hard Links attribute for all NSS volumes on the server. Existing hard links continue
to function, but no new hard links can be created on any NSS volume on the server.
24.3.3
Viewing the Hard Link Attribute Setting
The Hard Links attribute cannot be viewed in NSSMU or in the Storage plug-in to iManager. Use the
nss /volumes command to determine whether the Hard Links attribute is set for the NSS volume. In
the Attributes column, the Hard Links attribute is listed if the attribute is enabled.
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, enter the following to start the NSS Console:
nsscon
3 At the nsscon prompt, enter either of the following commands:
nss /volumes
volumes
In the Attributes column, the Hard Links attribute is listed if the attribute is enabled.
24.4
Creating a Hard Link Using Ln on an NSS Volume
Typically, you create hard links by using clients (like NFS) that have existing commands to
manipulate links. This section explains how to create a hard link in a volume by mounting the
volume with NFS, then using the Link (ln) command to create a hard link. For help with syntax and
options, refer to the Link Manual Page from a Linux client. At a terminal console, enter
man ln
Creating a hard link to a file does not copy the contents of the file to the new location; it simply makes
a new name by which the file can be known, in addition to the file’s existing name or names. All of
the names are equally real, and no one of them is preferred to the others except regarding access
control and directory space quotas. For information, see Section 24.1, “Understanding Hard Links,”
on page 307.
The actual content of the file is stored only once, regardless of how many names refer to it. Users in different directories can
use hard links to access and share the same file. A single user can use hard links to access a file from
different directories.
Prerequisites
 The NSS volume must be upgraded to use the new metadata structure. For information, see
“Upgrading the NSS Media Format” on page 41.
 The Hard Links attribute must be enabled for the volume. For information, see Section 24.3,
“Enabling or Disabling the Hard Links Attribute,” on page 311.
 Mount the volume with NFS.
Procedure
To create a hard link, use the Link (ln) command from a Linux client.
1 At a terminal console, enter
ln /path/filename /linkpath/linkfilename
Replace /path/filename with the pathname for the original file. Replace /linkpath/
linkfilename with the pathname for the hard linked filename.
For example, to link the file /media/nss/VOL1/timetest to /media/nss/VOL1/tmp/t1, enter
ln /media/nss/VOL1/timetest /media/nss/VOL1/tmp/t1
The file can now be referred to by either name. You can delete the original name or any link
name, but the contents of the file are not removed until the final name is deleted.
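This delete behavior can be demonstrated on a generic Linux filesystem. The sketch below uses a temporary directory in place of the NSS volume paths shown above; the semantics are the same:

```shell
# The file content survives as long as at least one name remains.
dir=$(mktemp -d)
echo "data" > "$dir/timetest"
ln "$dir/timetest" "$dir/t1"   # create a second name for the file

rm "$dir/timetest"             # delete the original name
cat "$dir/t1"                  # prints "data": content survives via the link
```

Only when the last remaining name (`t1` here) is also deleted is the file's content actually removed.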
24.5
Creating a Hard Link Using a zLink API
To create a hard link from an application or script, you can use a zLink API. For information, see the
NDK: File System Services (64-bit) (http://developer.novell.com/ndk/doc/fs64/fs64_enu/data/
a5p4x94.html).
You can also use the Linux link(2) system call to create hard links from applications or scripts.
24.6
Creating a Hard Link for Testing Purposes
While working on a server, you can use the nss /CreateHardLink command to create hard links for
testing purposes only. In a production environment, use the Link (ln) command from a client
instead, as described in Section 24.4, “Creating a Hard Link Using Ln on an NSS Volume,” on
page 313.
IMPORTANT: Do not use the nss /CreateHardLink command in a production environment.
 Section 24.6.1, “Prerequisites,” on page 314
 Section 24.6.2, “Procedure,” on page 314
24.6.1 Prerequisites
 The NSS volume must be upgraded to use the new metadata structure. For information, see
“Upgrading the NSS Media Format” on page 41.
 The Hard Links attribute must be enabled for the volume. For information, see Section 24.3,
“Enabling or Disabling the Hard Links Attribute,” on page 311.
 Hard links must be created on the same volume as the original file.
24.6.2 Procedure
To create a hard link for testing purposes on your server:
1 Issue the command at the NSS Console (nsscon) as the root user.
nss /CreateHardLink=PrimaryFilePath|HardLinkFilePath
Replace PrimaryFilePath with the complete volume name, pathname, and filename of the
primary file. Replace HardLinkFilePath with the complete volume name, pathname, and
filename of the new hard link.
IMPORTANT: The file paths are specified using the Long name space.
For example, to create a hard link on a volume named VOL1:
nss /CreateHardLink=VOL1:\path\file.ext|VOL1:\newpath\newfile.ext
24.7 Viewing Hard Links for a File
You can view a report of hard links for a file to identify its primary link and the hard link that
becomes the primary link if the primary link is deleted. The nss /ListHardLinks command returns
a list of all hard links for a specified file. The first link in the list is the primary link, the second link
is the most recently created hard link, and the remaining hard links follow in reverse chronological
order of their creation time. All links for the file are listed in the order that they appear in the
metadata, which is also the order in which a new primary name would be assigned.
IMPORTANT: If the primary link is deleted, the most recently created hard link (the second link
reported in the list) becomes the new primary link. Changing the primary link can impact the
trustees and inheritance for the file. For more information, see Section 24.1.4, “Hard Links and File
System Trustees,” on page 309.
The list of hard links contains the following information for the file:
Line 1 of the report: The ZID (file number) of the inode that contains the metadata for the hard link file set, and the number of links associated with that inode.

Line 2 of the report: The complete name of the primary link, including the path and filename.

Line 3 of the report: The complete name of the most recently created hard link, including the path and filename. This is the next candidate in line to become the primary link if the primary link is deleted.

Lines 4 and later of the report (up to 65,535 links per file): The complete name of each remaining hard link, including the path and filename, listed in reverse chronological order of creation time.
To view information about the primary link and hard links for a file on an NSS volume:
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, enter the following to open the NSS Console:
nsscon
3 At the nsscon prompt, enter
nss /ListHardLinks=vol:path\filename.ext
Replace path with the file’s primary link path or one of its hard link paths where you want to
start the search. Replace filename.ext with the actual filename of the file, including the
extension.
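There is no Linux-client equivalent of this NSS report, but on any mounted file system you can list the names that share an inode with the find -samefile test. This is a general-purpose sketch; the temp-directory paths are illustrative.

```shell
# Print every name under a directory tree that shares an inode with file.ext.
dir=$(mktemp -d)
touch "$dir/file.ext"
ln "$dir/file.ext" "$dir/alias.ext"

find "$dir" -samefile "$dir/file.ext" | sort
```

Unlike nss /ListHardLinks, this lists the names in path order and does not distinguish the primary link.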
24.8 Deleting a Hard Link
If multiple links exist for a file, deleting a hard link to a file deletes only the link name, not the file’s
contents. NSS keeps count of how many hard links a file has and does not delete the file contents
until all hard link names for the file and its primary link have been deleted (the link count goes to
zero).
There are no special commands required to delete a hard link. In a file manager, locate the link name
to be deleted, then delete it with the delete commands and procedures native to the client you use to
access the file.
24.9 Deleting a Primary Link
When multiple hard links exist for a file, deleting its primary link does not delete the content and
metadata. These are deleted only when the last remaining link is deleted. While a file has multiple
links, if you delete a primary link, the most recently created hard link automatically becomes the new
primary link.
If the most recently created hard link is not the link you want to become the primary link, you can
delete and re-create the preferred link, then delete the primary link. Make sure you do not create any
other new hard links in the meantime.
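The delete-and-re-create sequence can be sketched with ordinary shell commands. The promotion of the newest link to primary is NSS behavior, not POSIX; the temp-directory paths are illustrative, and on NSS all of the names must be on the same volume.

```shell
# Make the preferred name the most recently created link, then delete the primary.
dir=$(mktemp -d)
echo data > "$dir/primary"
ln "$dir/primary" "$dir/other"
ln "$dir/primary" "$dir/preferred"

rm "$dir/preferred"                  # delete the preferred link ...
ln "$dir/primary" "$dir/preferred"   # ... and re-create it as the newest link
rm "$dir/primary"                    # on NSS, preferred now becomes the primary

stat -c '%h' "$dir/preferred"        # two names (preferred, other) remain
```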
Deleting a primary link has the following consequences:
 Order of Ascendancy: Whenever you delete a primary link for a file with hard links, the most
recently created hard link automatically becomes the new primary link.
To identify the pathname of the hard link that is next in line to become the new primary link, see
Section 24.7, “Viewing Hard Links for a File,” on page 315.
For information about primary links, see Section 24.1.1, “Hard Links and the Primary Link,” on
page 308
 Effective Rights for File System Trustees: When the inherited rights filter is applied to the new
primary link’s parent directory, the effective rights for trustees might change.
For information, see Section 24.1.4, “Hard Links and File System Trustees,” on page 309
 Directory Quotas: When the new primary link is in a different directory, the directory quotas are
affected for the old and new parent directories.
For information, see Section 24.1.5, “Hard Links and Directory Space Quotas,” on page 309
There are no special commands required to delete a primary link. In a file manager, locate the link
name to be deleted, then delete it with delete commands and procedures native to the client you use
to access the file.
25 Managing Files and Folders on NSS Volumes
This section provides an overview of how to manage files and folders as an administrator of a Novell
Storage Services (NSS) volume using the Files and Folders plug-in for Novell iManager 2.7.x. For
more information about using the Files and Folders plug-in to configure the file system trustees,
trustee rights, inherited rights filters, and file and folder attributes, see Section 20.1, “Configuring File
System Trustees, Trustee Rights, Inherited Rights Filters, and Attributes,” on page 247.
 Section 25.1, “Creating a Folder on an NSS Volume,” on page 317
 Section 25.2, “Deleting a File or Folder on an NSS Volume,” on page 318
 Section 25.3, “Uploading Files to an NSS Volume,” on page 319
 Section 25.4, “Downloading Files from an NSS Volume,” on page 320
 Section 25.5, “Renaming a File on an NSS Volume,” on page 321
 Section 25.6, “Moving a File to a Different Folder on an NSS Volume,” on page 321
 Section 25.7, “Viewing or Modifying File or Folder Properties,” on page 322
 Section 25.8, “Viewing or Modifying File Ownership,” on page 325
 Section 25.9, “Viewing, Adding, Modifying, or Removing a Directory Quota,” on page 327
25.1 Creating a Folder on an NSS Volume
As an administrator, you can use the Files and Folders plug-in to iManager to create a folder on an
NSS volume.
25.1.1 Prerequisites
 The destination NSS volume must be in the same tree where you are currently logged in to
iManager.
 You must have trustee rights for the volume and destination location where you want to create
the new folder. The Create right is required for creating files and folders.
25.1.2 Procedure
1 In iManager, click Files and Folders, then click New Folder to open the New Folder page.
2 Use one of the following methods to specify the destination path on the NSS volume where you
want to create the new folder:
 Click the Search icon to browse and locate the destination folder, then click the name link of
the folder to select it.
 Click the History icon to select a folder from the list of folders that you recently accessed.
The pathname of the folder appears in the Path field.
3 In Folder Name, type the name of the folder you want to create in the selected location.
4 Click OK to create the folder, or click Cancel to abandon it.
A message confirms when the folder has been successfully created.
5 Click Repeat Task to create another folder, or click OK to dismiss the confirmation message.
6 Click Files and Folders, then click Properties to set file system trustees, trustee rights, and attributes
for the new folder or folders.
For instructions for configuring properties, see Section 20.1, “Configuring File System Trustees,
Trustee Rights, Inherited Rights Filters, and Attributes,” on page 247.
25.2 Deleting a File or Folder on an NSS Volume
As an administrator, you can use the Files and Folders plug-in to iManager to delete a file or folder on
an NSS volume.
25.2.1 Prerequisites
 The NSS volume must be in the same tree where you are currently logged in to iManager.
 You must have trustee rights for the file or folder that you want to delete. The Erase right is
required to delete the file.
25.2.2 Procedure
1 In iManager, click Files and Folders, then click Delete to open the Delete File or Folder page.
2 Use one of the following methods to specify the file or folder that you want to delete from the
NSS volume:
 Click the Search icon to browse and locate the file or folder, then click the name link of the
object to select it.
 Click the History icon to select a file or folder from the list of files and folders that you
recently accessed.
The pathname of the folder appears in the Name field.
3 Click OK to delete the selected file or folder, or click Cancel to abandon the delete process.
A message confirms when the file or folder has been successfully deleted.
4 Click Repeat Task to delete another file or folder, or click OK to dismiss the confirmation message.
25.3 Uploading Files to an NSS Volume
As an administrator, you can use the Files and Folders plug-in to iManager to upload files from your
local computer to an existing folder on an NSS volume.
25.3.1 Prerequisites
 The destination NSS volume must be in the same tree where you are currently logged in to
iManager.
 You must have trustee rights for the destination folder in order to be able to find the folder and
upload the file. The Create right is required for file uploads.
25.3.2 Procedure
1 In iManager, click Files and Folders, then click Upload to open the Upload File page.
2 Use one of the following methods to specify the path to the folder on the NSS volume where you
want to put the file:
 Click the Search icon to browse and locate the folder, then click the name link of the folder to
select it.
 Click the History icon to select a folder from the list of folders that you recently accessed.
The pathname appears in the Path field.
3 Select the file on your local computer that you want to upload:
3a Click Browse to open a local file browser dialog box.
3b Browse and locate the file.
3c Select the file, then click Open.
The local pathname for the selected file appears in the File Name field.
4 Click OK to begin the upload, or click Cancel to abandon the process.
A message confirms when the file has been successfully uploaded. Wait until the upload
completes before proceeding to other tasks.
5 Click Repeat Task to upload another file, or click OK to dismiss the confirmation message.
25.4 Downloading Files from an NSS Volume
As an administrator, you can use the Files and Folders plug-in to iManager to download a file from
an NSS volume to your local computer.
25.4.1 Prerequisites
 The NSS volume must be in the same tree where you are currently logged in to iManager.
 You must have trustee rights for the file in order to be able to browse to and download the file.
25.4.2 Procedure
1 In iManager, click Files and Folders, then click Download to open the Download File page.
2 Use one of the following methods to select the file that you want to download from the NSS
volume to your local drive:
 Click the Search icon to browse and locate the file, then click the name link of the file to
select it.
 Click the History icon to select a file from the list of files that you recently accessed.
The pathname appears in the File Name field.
3 Click OK to open the File Download dialog box.
IMPORTANT: If the File Download dialog box does not open, make sure the security settings in
your browser allow downloads from the server by adding the server as a trusted site, then try
again.
4 Use one of the following methods to save the file to the local computer:
 Click Open to view the file in an appropriate application, then save the file by using the
application's File > Save options.
The application that opens the file must already be installed on your computer.
 Click Save to open the Save As dialog box, browse to an existing folder or create a new local
folder where you want to save the file, then click Save.
The browser’s download manager manages the download and notifies you when the
download is complete.
You can continue with other iManager tasks while the file is downloading.
25.5 Renaming a File on an NSS Volume
Use this task to rename a file on an NSS volume.
25.5.1 Prerequisites
You must have trustee rights for the file in order to be able to find and rename it. The Create and
Modify rights are required to rename a file.
25.5.2 Procedure
1 Use one of the following methods to select the file that you want to rename:
 Click the Search icon to browse and locate the file, then click the name link of the file to
select it.
 Click the History icon to select a file from the list of files that you recently accessed.
The pathname appears in the Path field.
2 Type the new name in the New name field.
3 Click OK to rename the file, or click Cancel to discard the changes.
A message confirms that the file has been successfully renamed. Wait until the rename
completes before proceeding to other tasks.
25.6 Moving a File to a Different Folder on an NSS Volume
Use this task to move your file to a different folder on the same NSS volume.
25.6.1 Prerequisites
You must have trustee rights for the file in order to be able to find the file and move it. The Create
and Modify rights are required to move a file.
25.6.2 Procedure
1 Use one of the following methods to select the file that you want to move:
 Click the Search icon to browse and locate the file, then click the name link of the file to
select it.
 Click the History icon to select a file from the list of files that you recently accessed.
2 Click Browse to open a file browser dialog box, browse to locate and select the destination folder
on the same NSS volume, then click Open.
The pathname for the selected folder appears in the Folder Name field.
3 Click OK to move the file, or click Cancel to discard the changes.
A message confirms that the file has been successfully moved. Wait until the move completes
before proceeding to other tasks.
25.7 Viewing or Modifying File or Folder Properties
1 In iManager, click Files and Folders > Properties to open the Properties page.
2 Click the Search icon to browse the Storage objects, locate and select the name link of the file or
folder you want to manage, then click OK to view the Properties for the file.
3 Click the Information tab to view or modify the following information for the selected folder or
file:
Server: The name of the server.

Location: The pathname of the selected volume, folder, or file. For example:
VOL1:dir1\dirB\filename.ext

Size: The file size.

Restrict Size (Enable or Disable a Directory Quota on a Folder): Enable (select) or disable
(deselect) a directory quota on the specified folder on an NSS volume where the Directory Quotas
attribute is enabled. The default is Disabled. If this option is enabled, you must also specify a value
for the quota in the Limit field.
A directory quota limits the amount of space on a volume that can be consumed by all of the files
and folders in that directory. The directory quota applies to files and folders created by any user of
the directory.
Select Restrict Size to enable a directory quota for the selected folder, specify the quota value in
Limit, then click Apply. Deselect Restrict Size to disable a directory quota for the selected folder,
then click Apply.

Limit (Set Limit for a Directory Quota on a Folder): The maximum size allowed for the specified
directory and its contents. Default: Disabled (not available unless Restrict Size is enabled).
If you enable Restrict Size for the selected folder, you must specify a limit for the directory quota.
Type a value in KB for the quota. The value must be an increment of 4 KB; that is, it must be
divisible by 4 with no remainder. Click Apply to save the changes.
If the directory quota exceeds the volume quota, the volume quota is enforced. If the current size of
the selected folder exceeds the specified limit, users cannot save data to the folder until space is
cleared by removing files from it. If a user quota is set for a user on the volume, the user space
restriction overrides the directory quota. That is, the user cannot save data to the folder if doing so
causes the user to exceed his or her user quota.

Created: The time stamp (Day, Month DD, YYYY hh:mm) for when the file or folder was created.

Modified: The time stamp (Day, Month DD, YYYY hh:mm) for when the file or folder was last
modified.

Accessed: The time stamp (Day, Month DD, YYYY hh:mm) for when the file or folder was last
accessed.

Archived: The time stamp (Day, Month DD, YYYY hh:mm) for when the file or folder was last
archived.
Creator (View or Modify Ownership): The typeless distinguished NetIQ eDirectory username (such
as username.context) of the user who created the file or folder. If the username becomes invalid,
such as if an employee leaves the company, the GUID of the username is reported. For NSS, any
number of files or folders can be represented by GUIDs instead of valid usernames.
User quotas for NSS volumes consider file ownership to enforce user space restrictions. You might
need to change the ownership of a file or folder in order to make the space it consumes be charged
against a different user.
For NSS volumes (as for all volumes that use the Novell trustee model of access), all access to data
is controlled by file system trustees and trustee rights instead of by ownership. When a user creates
a file or folder, the trustees and trustee rights for accessing the file are automatically inherited from
the directory where the file is created. If you intend different trustees and rights for the file, you
must assign them explicitly by user, or assign the rights to a group and put the users into that
group. For instructions, see Section 20.1.4, “Configuring Rights Properties (File System Trustees,
Trustee Rights, and Inherited Rights Filter),” on page 250.
Changing the ownership of the file or folder does not modify who can access it, but it does modify
whose username is charged for the space it consumes. If you modify the ownership, you must click
Apply or OK to save the changes.

Archiver: The distinguished username (such as username.context) of the user who modified the
version of the file or folder that was last archived.

Modifier: The distinguished username (such as username.context) of the user who last modified
the current version of the file or folder.

Attributes: File attributes determine how the file or folder behaves when accessed by any user.
Enable or disable an attribute by selecting or deselecting the check box next to it. If you modify a
setting, click Apply or OK to save the changes.
File attributes apply universally to all users. For example, a file that has a read-only attribute is
read-only for all users.
Attributes can be set by any trustee with the Modify right to the directory or file, and attributes stay
set until they are changed. Attributes do not change when you log out or when you down a file
server.
For example, if a trustee with the Modify right enables the Delete Inhibit attribute for a file, no one,
including the owner of the file or the network administrator, can delete the file. However, any
trustee with the Modify right can disable the Delete Inhibit attribute to allow the file’s deletion.
The following table defines file system attributes and whether they apply to files, folders, or
both files and folders.
Read Only (Files: Yes; Folders: No): Prevents a file from being modified. This attribute is typically
used in combination with Delete Inhibit and Rename Inhibit.

Archive (Files: Yes; Folders: Yes): Identifies files and folders that have been modified since the last
backup. This attribute is assigned automatically.

Hidden (Files: Yes; Folders: Yes): Hides directories and files so they do not appear in a file manager
or directory listing.

Shareable (Files: Yes; Folders: No): Allows more than one user to access the file at the same time.
This attribute is usually used with Read Only.

Purge Immediate (Files: Yes; Folders: Yes): Flags a directory or file to be erased from the system as
soon as it is deleted. Purged directories and files cannot be recovered.

Rename Inhibit (Files: Yes; Folders: Yes): Prevents the directory or filename from being modified.

Delete Inhibit (Files: Yes; Folders: Yes): Prevents users from deleting a directory or file. This
attribute overrides the file system trustee Erase right. When Delete Inhibit is enabled, no one,
including the owner and network administrator, can delete the directory or file. A trustee with the
Modify right must disable this attribute to allow the directory or file to be deleted.
NOTE: Setting the following preferences overrides the Delete Inhibit and Rename Inhibit settings.
The override option is made available via volume mount options and nsscon:
- From nsscon, enter /(No)RootOverrideFA=(ALL|VOL1,VOL2)
- For local volumes, change the following in /etc/fstab: -o name=<NAME>,overrideFA
- For shared volumes, change the following in the cluster resource load scripts: /opt=overrideFA
If /RootOverrideFA is set on the volume, the Linux root user can delete and rename a file.
4 If you modified any settings, click Apply or OK to save your changes.
25.8 Viewing or Modifying File Ownership
The owner of a file is assigned by default to be the identity of the user who creates the file.
Ownership does not determine who can access a file because the NSS file system uses the Novell
trustee model to control access. However, user quotas for NSS volumes consider file ownership to
enforce user space restrictions. You might need to change the ownership of a file or folder in order to
make the space it consumes be charged against a different user. Changing the ownership of the file or
folder does not modify who can access it, but it does modify whose username is charged for the space
it consumes.
NOTE: As an administrator you can modify the file or folder ownership.
The Creator field shows the typeless distinguished NetIQ eDirectory username (such as
username.context) of the user who owns the file or folder. If the username becomes invalid, such as if
an employee leaves the company, the GUID of the username is reported. For NSS, any number of files
or folders can be represented by GUIDs instead of valid usernames.
1 In iManager, click Files and Folders, then click Properties to open the Properties page.
2 Click the Search icon to browse the Storage objects and locate the file, then click the name link of
the file to select it.
The pathname of the file or folder appears in the Name field.
3 Click OK to open the file’s Properties page.
4 On the Information page, the Creator field shows the typeless distinguished username of the
current owner, such as username.context.
5 If you want to modify the owner, click the Search icon to open the Object Browser dialog box, then
locate and select the username of the new owner.
6 If you modified the owner, click Apply or OK on the Information page in order to save the
change.
25.9 Viewing, Adding, Modifying, or Removing a Directory Quota
Directory quotas for NSS volumes require that the Directory Quotas attribute be set for the volume.
For information, see Section 22.3.1, “Enabling or Disabling the Directory Quotas Attribute for an NSS
Volume,” on page 285.
1 In iManager, select Files and Folders > Properties.
2 Click the Search icon, browse to locate and select the folder you want to manage on an NSS
volume, then click OK to open the Properties page for the selected folder.
3 View the current status of the Directory Quota.
If a Directory Quota is set, the Restrict Size field is selected and the Limit field shows the quota
size in KB.
If the Directory Quota is not set, the Restrict Size field is deselected and the Limit field is dimmed
(grayed out).
4 Do one of the following:
 Add a Quota: On the Information tab, select Restrict Size to enable space restrictions for the
selected directory. In the Limit field, type the directory quota in KB. The value must be an
increment of 4 KB; that is, it must be divisible by 4 with no remainder.
 Modify an Existing Quota: In the Limit field, type the new directory quota in KB. The value
must be an increment of 4 KB; that is, it must be divisible by 4 with no remainder.
 Remove a Quota: On the Information tab, deselect Restrict Size to disable space restrictions
for the selected directory. The Limit field is automatically dimmed (grayed out).
5 On the Information page, click Apply or OK to apply the changes.
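Because the Limit value must be divisible by 4, a requested quota can be rounded up to the nearest valid increment before you enter it. This is a small arithmetic sketch, not an iManager feature:

```shell
# Round a requested quota (in KB) up to the next multiple of 4 KB.
requested_kb=1023
quota_kb=$(( (requested_kb + 3) / 4 * 4 ))
echo "$quota_kb"   # prints 1024
```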
26 Managing Backup and Restore for Data and Trustee Information
This section describes your backup and restore options for data and trustee information for Novell
Storage Services volumes on Novell Open Enterprise Server 11 SP2.
 Section 26.1, “Using Novell Storage Management Services,” on page 329
 Section 26.2, “Using the Event File List to Refine the Backup,” on page 330
 Section 26.3, “Using METAMIG to Save and Restore Trustee Information on NSS and Linux
POSIX File Systems,” on page 330
 Section 26.4, “Using Extended Attributes (xAttr) Commands,” on page 330
 Section 26.5, “Backing Up Files Without Altering the Access Time,” on page 332
 Section 26.6, “Additional Information,” on page 332
26.1 Using Novell Storage Management Services
Novell Storage Management Services (SMS) provide backup, restore, and data migration services for
NSS volumes. For information, see the following:
 OES 11 SP2: Storage Management Services Administration Guide for Linux
 NW 6.5 SP8: SBCON Administration Guide
 NBACKUP Utility for OES Linux (nbackup(1)) (http://www.novell.com/documentation/oes/
smsadmin/data/nbackup.1.html). The nbackup utility is included in the novell-sms RPM file
under /opt/novell/sms/bin. It can be used to back up and restore NSS and non-NSS volumes
by using the SMS framework. It backs up and restores NSS metadata, which includes file system
trustees and trustee rights information. Trustees on NSS volumes are tied to NetIQ eDirectory
users and objects, so you need to protect eDirectory in addition to backing up data. The man
page for nbackup provides further details on its usage.
Related utilities and configuration files include the following:
 smdrd(8) (http://www.novell.com/documentation/oes/smsadmin/data/smdrd.8.html)
 smdrd.conf(5) (http://www.novell.com/documentation/oes/smsadmin/data/
smdrd.conf.5.html)
 sms(7) (http://www.novell.com/documentation/oes/smsadmin/data/sms.7.html)
 smsconfig(1) (http://www.novell.com/documentation/oes/smsadmin/data/
smsconfig.1.html)
 tsafs(1) (http://www.novell.com/documentation/oes/smsadmin/data/tsafs.1.html)
 tsafs.conf(5) (http://www.novell.com/documentation/oes/smsadmin/data/
tsafs.conf.5.html)
Encrypted NSS volumes store user data in encrypted format on the NSS volume, yet work
transparently with most applications, NLM programs, and backup utilities that currently work with
NSS.
SMS backs up and restores compressed files in their compressed state. It does not compress
uncompressed files for backup; they are stored and restored in their uncompressed state.
26.2 Using the Event File List to Refine the Backup
NSS uses the Event File List (EFL) feature to track files that have changed on a volume during an
interval called an epoch. It logs changes that are made to data and metadata for each active epoch on a
specific NSS volume in the _admin:manage_nss\volume\volumename\FileEvents.xml file. The
metadata includes changes in trustees, inherited rights filters (IRFs), and file attributes.
Your backup solution can take advantage of this file in order to get a list of modified files for NSS
volumes. You can use the API commands in scripts to start and stop an epoch, reset the event list for
an epoch, and to affect how long epochs are retained.
For information about the Event File List (EFL) APIs for developers, see “FileEvent.xml Definitions”
(http://developer.novell.com/documentation/vfs/vfs__enu/data/ak7gh2x.html) in NDK: Virtual File
Services (http://developer.novell.com/documentation/vfs/vfs__enu/data/bktitle.html).
See Cool Tools (search for EFL) (http://www.novell.com/communities/search/node/EFL) on the
Novell Cool Solutions Web site for scripts that use the Event File List APIs.
26.3 Using METAMIG to Save and Restore Trustee Information on NSS and Linux POSIX File Systems
The metamig utility allows you to save and restore trustee information for NSS volumes. You can also
restore trustee information for any NCP volume that was backed up as raw data with a third-party
backup application. For information, see Section B.3, “metamig,” on page 410.
For OES, the NCP Server allows you to create NCP volumes for Linux POSIX file systems. NSS
volumes are NCP volumes by default. You can assign trustees and trustee rights for NCP volumes
based on Linux POSIX file systems just as you do for NSS volumes. The trustee information is located
in a hidden file on the volume rather than being integrated in the volume. When you use a third-party backup application to back up files as raw data, the trustee file is also backed up as raw data.
You can use the metamig -ncp option to restore trustee information for NCP volumes on OES that
use the NSS file system or Linux POSIX file systems.
For information about creating NCP volumes, see “Managing NCP Volumes” in the OES 11 SP2: NCP
Server for Linux Administration Guide.
26.4 Using Extended Attributes (xAttr) Commands
In OES SP2 and later, NSS supports the Linux extended attributes (XAttr) option that allows listing,
saving, and restoring the trustee information that is stored in the netware.metadata extended
attribute. Third-party backup software that supports the standard Linux Extended Attributes (xattr)
can use this feature for NSS volumes to preserve trustees, trustee rights, file attributes, and quotas in
backup and restore.
The NSS ListXattrNWMetadata switch, which allows the list of attribute names to be retrieved, is
disabled by default. To enable this support, set the following switches:
nss /ListXattrNWMetadata
nss /CtimeIsMetadataModTime
If you issue the commands from the command line, the settings do not persist, and the support is
disabled again at the next server reboot. To enable the support for the Linux xattr list across server
reboots, add the switches to the /etc/opt/novell/nss/nssstart.cfg file.
 Section 26.4.1, “Enabling NSS Support for Linux xAttr,” on page 331
 Section 26.4.2, “Disabling NSS Support for Linux xAttr,” on page 331
 Section 26.4.3, “Additional Information,” on page 332
26.4.1
Enabling NSS Support for Linux xAttr
 “Using NSSCON” on page 331
 “Using the nssstart.cfg File” on page 331
Using NSSCON
To enable support for Linux xattr list from NSSCON:
1 Open a Linux terminal console, then log in as the root user.
2 Start NSSCON by entering the following at the console prompt:
nsscon
3 To enable the Linux xattr list support for all NSS volumes on the server, enter
nss /ListXattrNWMetadata
nss /CtimeIsMetadataModTime
The settings remain in effect until the next server reboot. You can also issue commands to
disable the support at any time.
Using the nssstart.cfg File
You can enable the support for Linux xattr list across server reboots by adding the following lines to
the /etc/opt/novell/nss/nssstart.cfg file:
/ListXattrNWMetadata
/CtimeIsMetadataModTime
Make sure the switches are spelled correctly, and do not have spaces after the forward slash (/). If the
switch names are entered incorrectly in the nssstart.cfg file, parsing errors can prevent the NSS
pool from mounting.
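Once the switches are in place, any tool that uses the standard Linux xattr interface can list and read the metadata attribute. The following is a generic, hedged sketch of that pattern: because netware.metadata exists only on NSS volumes, an ordinary file and a user.* attribute (both illustrative) stand in for it here; getfattr and setfattr are from the attr package.

```shell
# Demonstrate the xattr listing/reading pattern that backup software uses.
# On an NSS volume with the switches above enabled, the attribute to read
# would be netware.metadata; user.demo.metadata here is a stand-in.
f=xattr_demo.txt
touch "$f"
setfattr -n user.demo.metadata -v 'trustee-blob' "$f"
getfattr --absolute-names -m - "$f"            # list attribute names
getfattr --absolute-names -n user.demo.metadata --only-values "$f"; echo
rm -f "$f"
```

On an NSS volume, the equivalent read would be getfattr -n netware.metadata against a file on the volume.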
26.4.2
Disabling NSS Support for Linux xAttr
 “Using NSSCON” on page 332
 “Using the nssstart.cfg File” on page 332
Using NSSCON
To disable support for Linux xattr list from NSSCON:
1 Open a Linux terminal console, then log in as the root user.
2 Start NSSCON by entering the following at the console prompt:
nsscon
3 To disable the Linux xattr list support for all NSS volumes on the server, enter
nss /noListXattrNWMetadata
nss /noCtimeIsMetadataModTime
Using the nssstart.cfg File
If you added the switches to the /etc/opt/novell/nss/nssstart.cfg file, and you want the
support for Linux xattr list to be automatically disabled after a server reboot, remove the switches
from the file. When the server reboots, the Linux xattr list support is disabled, which is the default
behavior.
26.4.3
Additional Information
For information, see Section A.10, “Extended Attributes (XAttr) Commands,” on page 378.
26.5
Backing Up Files Without Altering the Access Time
You can set the noatime option to control whether the access time is updated when reading files and
directories.
In OES 2 SP1, NSS provides the /atime and /noatime options for Linux. For information, see
Section A.21, “noatime and atime Commands,” on page 388.
In OES 2 SP2 and later, NSS provides noatime and nodiratime support for the Linux open(2) API
command, mount command, and the /etc/fstab configuration file. Backup applications can take
advantage of this option to back up a file without altering its access time. For information, see
Section A.22, “noatime and nodiratime Support for Linux open, mount, nfsmount, and /etc/fstab,” on
page 389.
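As a hedged illustration of the /etc/fstab support mentioned above (the volume name and mount point are examples; nssvol is the file system type used for NSS volumes on Linux, and the name= option must match the NSS volume name):

```
# Illustrative /etc/fstab entry mounting an NSS volume with noatime
VOL1  /media/nss/VOL1  nssvol  noatime,nodiratime,name=VOL1  0  0
```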
26.6
Additional Information
For a current list of backup software vendors that support Novell Open Enterprise Server, see Novell
Open Enterprise Server Partner Support: Backup and Antivirus Support (http://www.novell.com/
products/openenterpriseserver/partners/). This list is updated quarterly.
27
Tuning NSS Performance
This section describes how to tune the Novell Storage Services cache buffers to improve performance
on a Novell Open Enterprise Server 11 SP2 server.
 Section 27.1, “Do I Need to Tune NSS?,” on page 333
 Section 27.2, “Tuning Cache Buffers for NSS,” on page 333
 Section 27.3, “Configuring or Tuning Group I/O,” on page 334
 Section 27.4, “Tuning I/O Schedulers,” on page 339
27.1
Do I Need to Tune NSS?
There are many factors that contribute to decreasing server performance; however, if your server is
performing poorly and you suspect the storage subsystem (NSS), you can monitor the storage
subsystem by using specific NSS command line options. These options help you determine if any
tuning is required. For more information, see Appendix A, “NSS Commands,” on page 369.
27.2
Tuning Cache Buffers for NSS
 Section 27.2.1, “Understanding How NSS Uses Cache,” on page 333
 Section 27.2.2, “Setting the Minimum Number of Cache Buffers to Use for Kernel Memory,” on
page 334
 Section 27.2.3, “Setting the Name Cache Size,” on page 334
27.2.1
Understanding How NSS Uses Cache
NSS manages cache buffers on Linux using methods similar to those used by other Linux file systems
such as ReiserFS, PolyServe, and XFS (the EXT file systems use a different approach).
For file data, NSS uses the Linux cache page manager to gain access to available memory in the
system. There are some limits in place so that when copying large files, NSS does not starve other
user applications for memory. This is similar to the cache handling used in NetWare.
For metadata, NSS uses kernel memory. NSS can use only a percentage of this space because other
applications share this space. By default, NSS reserves a minimum buffer cache size of 30,000 4-KB
buffers, which is about 120 MB of the kernel memory space. You can adjust the minimum number of
buffers to be used by NSS with the MinBufferCacheSize parameter.
27.2.2
Setting the Minimum Number of Cache Buffers to Use for Kernel
Memory
1 Open a terminal console as the root user.
2 Start nsscon(8). At the console prompt, enter
nsscon
3 Set the minimum number of cache buffers used by NSS. In nsscon, enter
nss /MinBufferCacheSize=value
where value is the number of 4-KB buffers to assign for NSS. The default value is 30000.
The maximum setting is the total amount of memory in KB divided by 4 (one buffer per 4 KB).
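As a worked example of sizing this parameter (the 256 MB target is illustrative), convert a desired metadata cache size into 4-KB buffers:

```shell
# Convert a target metadata cache size into 4-KB buffers.
# The default of 30000 buffers corresponds to about 120 MB.
desired_mb=256
buffers=$(( desired_mb * 1024 / 4 ))
echo "$buffers"    # 65536
# Then, in nsscon:  nss /MinBufferCacheSize=65536
```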
27.2.3
Setting the Name Cache Size
The NSS Name Cache is responsible for caching the Name Tree information. This is the information
that is read when you perform any kind of search by file or directory name. The Name Cache maps a
name to a ZID (a unique file object ID). Directory listings perform such lookups less often than
normal file opens, which must resolve each name in the file path.
Use the NameCacheSize parameter to specify the number of recently used Name Tree entries for files
and directories that NSS caches. Each entry uses about 150 bytes of memory.
Increasing the maximum number of Name Cache entries does not necessarily improve the
performance for getting directory listing information if NSS also needs to look up information about
the file from a tree or structure outside of the name tree.
If you want to see how your name cache is performing, use the nsscon /NameCacheStats command.
nsscon /NameCacheSize=value
Specify the maximum number of recently used Name Tree entries for files and directories to
cache. Name cache grows up to the specified limit. Unlike the file system cache, it does not take
the maximum amount of memory allocated from the start.
Default: 100000
Range: 17 to 1000000
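Because each entry uses about 150 bytes, you can estimate the memory the name cache can consume at a given setting. A quick sketch using the default value:

```shell
# Estimate name cache memory at a given NameCacheSize
# (about 150 bytes per entry, per the description above)
entries=100000                          # the default NameCacheSize
echo "$(( entries * 150 / 1024 )) KB"   # 14648 KB, roughly 14 MB
```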
27.3
Configuring or Tuning Group I/O
Group write is a technique of writing data to the volume at regular intervals in order to reduce
seek time on the drive. It also reduces the number of writes, because repeated changes to the same
block accumulate in memory before being written.
In OES 1, NSS writes are done on a block-based timer. A block is written one second after the block
becomes dirty (modified by a user or process). This can cause lots of head movement because there is
no control over the order of blocks being sent to disk.
In OES 11 SP2, NSS performs group writes in three categories: journal, metadata, and user data. By
setting policies for group writes, you can improve the performance of the file system for your
particular environment.
For information, see the following:
 Section 27.3.1, “Viewing the Metadata Area Size,” on page 335
 Section 27.3.2, “Configuring the Journal Group Write Timer,” on page 336
 Section 27.3.3, “Configuring the Metadata Group Write Timer and Limit,” on page 336
 Section 27.3.4, “Configuring the User Data Group Write Timer,” on page 338
 Section 27.3.5, “Viewing Group Write Policies,” on page 338
27.3.1
Viewing the Metadata Area Size
NSS for OES 11 SP2 provides a logical read-ahead capability. NSS is designed to physically store
logically related data near each other, such as files in the same directory. By reading ahead using the
logical information, performance is increased. When a block is read, its logically related blocks are
also read. The area read is determined by the default area size.
To improve performance, NSS uses area seed logic for metadata blocks to ensure that related
metadata blocks are physically stored near each other. The default area size for metadata blocks is 16
blocks of 4 KB each, or 64 KB total.
For metadata blocks, the seed is set to the block number for the area. When metadata is written, the
seed logic determines the closest free block in the area to use next. When the area is full, a new free
area is found higher in the pool, and a new seed marks this area. When the search for a free
area reaches the end of the pool, it wraps back to start searching for free areas to use at the start of the
pool. If no free space of sufficient size is found, the size is temporarily halved from 16 to 8, 4, 2, or 1
blocks progressively as needed until the temporary size is 1. A setting of 1 block indicates that the
pool is essentially out of space. As space is freed or the pool increases in size, future space allocations
use the default area size of 16 blocks.
The maximum number of dirty data blocks that are allowed to accumulate is governed by the
Metadata Group Write Limit parameter. By default, the limit is 20000 dirty blocks. For information,
see Section 27.3.3, “Configuring the Metadata Group Write Timer and Limit,” on page 336.
You can view the metadata area size that is currently in use and the number of dirty blocks waiting to
be written by viewing the Current Metadata Group Write Size parameter in the NSS status report.
The information is reported in the following format:

Current Metadata Group Write Size = areasize (number_dirty_blocks)

For example, with the default setting of 16 4-KB blocks, the metadata area is 64 KB. If 16000 dirty
blocks are waiting to be written, the values are reported as follows:

Current Metadata Group Write Size = 64K (16000)
To view the Current Metadata Group Write Size information:
1 Open a terminal console, then log in as the root user.
2 At the console prompt, open the NSS Console by entering
nsscon
3 At the nsscon prompt, enter
nss /status
4 In the NSS status report, look for the Current Metadata Group Write Size parameter to view the
current values:
Current Metadata Group Write Size = areasize (number_dirty_blocks)
27.3.2
Configuring the Journal Group Write Timer
For NSS, the journal keeps metadata consistent up to the time when its blocks are written to the
device. The Journal Group Write Timer determines the elapsed time between writes of journal blocks.
Thus, its timer policy determines how long ago the last consistent point was relative to a system crash.
Journal blocks are written by default as a group every second. Journal blocks might be written sooner
than the one-second elapsed time if another timer policy triggers a write or if the journal gets full
before the time elapses. Writing blocks as a group helps improve performance because it allows
fewer writes, while ensuring that data is actually recorded to the device.
Use the following NSS command option to control the group write policy for journal blocks:
/JournalGroupWriteTime=seconds
Use the JournalGroupWriteTime parameter to specify the elapsed time to wait before group
writes of journal blocks.
The following describes how the Journal Group Write Timer setting affects the risk of
inconsistent metadata (time elapsed since the last consistent point) and file system performance:
 1 second (default): Risk is minimized; performance is optimized for most scenarios.
 Greater than 1 second: Higher risk; faster performance.
To set the JournalGroupWriteTime parameter, issue the following command as the root user in
the NSS Console (nsscon):
nss /JournalGroupWriteTime=seconds
Replace seconds with the maximum number of seconds to elapse before forcing journal blocks to
be written to the volume. The default value of seconds is 1.
For example, to group write journal blocks every 2 seconds, enter
nss /JournalGroupWriteTime=2
27.3.3
Configuring the Metadata Group Write Timer and Limit
The metadata blocks are written by default as a group every 40 seconds, or when the
MetadataGroupWriteLimit is reached, whichever occurs first. Metadata loss does not occur if the
system crashes because all metadata changes are automatically recorded in the journal. However,
increasing the timer setting increases the redo/undo time that is required to activate a pool (the
mount time) after a crash because there is more unwritten metadata in the journal to be resolved.
IMPORTANT: Within a clustered environment, this means that the time to complete a failover is
related to the setting of the MetadataGroupWriteLimit parameter.
You can limit the amount of time it takes for a pool activation after a crash by decreasing the
maximum number of metadata blocks that can be dirty in the MetadataGroupWriteLimit parameter.
A group write is performed when the limit is reached.
You can increase performance of the file system by increasing the maximum number of metadata
blocks that can be dirty.
Use the following NSS command options to control the group write behavior for metadata blocks:
/MetadataGroupWriteTime=seconds
Use the MetadataGroupWriteTime parameter to specify the elapsed time to wait before group
writes of metadata blocks. Decreasing the metadata group write timer can help reduce the
mount time for the volume after a crash.
The following describes how the Metadata Group Write Timer setting affects the time to mount
after a system crash and file system performance:
 40 seconds (default): Mount time and performance are optimized for most scenarios.
 Less than 40 seconds: Faster mount after a crash; slower performance.
 More than 40 seconds: Slower mount after a crash; faster performance.
To set the MetadataGroupWriteTime parameter, issue the following command as the root user
in the NSS Console (nsscon):
nss /MetadataGroupWriteTime=seconds
Replace seconds with the maximum number of seconds to elapse before forcing metadata blocks
to be written to the volume. The default value of seconds is 40.
For example, to group write metadata blocks every 30 seconds, enter
nss /MetadataGroupWriteTime=30
/MetadataGroupWriteLimit=blocks
Use the MetadataGroupWriteLimit parameter to specify the maximum number of metadata
blocks that can be dirty before a group write is performed. The following describes how the
settings affect time to mount and file system performance:
 20000 blocks (default): Mount time and performance are optimized for most scenarios.
 Less than 20000 blocks: Faster mount after a crash; slower performance.
 More than 20000 blocks: Slower mount after a crash; faster performance.
To set the MetadataGroupWriteLimit parameter, issue the following command as the root user
in the NSS Console (nsscon):
nss /MetadataGroupWriteLimit=blocks
Replace blocks with the maximum number of metadata blocks that can be dirty before forcing
them to be written to the volume. The default value of blocks is 20000.
For example, to decrease the maximum number of dirty metadata blocks to 15,000 for the
purpose of reducing the mount time, enter
nss /MetadataGroupWriteLimit=15000
For example, to increase the maximum number of dirty metadata blocks to 30,000 for the
purpose of increasing the file system performance, enter
nss /MetadataGroupWriteLimit=30000
27.3.4
Configuring the User Data Group Write Timer
The user data blocks are written as a group every 3 seconds. This increases the risk of data loss on a
crash compared to previous versions of NSS, which wrote data blocks every 1 second. You can set the
user data group write timer (UserDataGroupWriteTime) to 1 second to get the familiar NSS behavior
for data writes.
Use the following NSS command option to control the group write behavior for user data blocks:
/UserDataGroupWriteTime=seconds
Use the UserDataGroupWriteTime parameter to specify the elapsed time to wait before group
writes of user data blocks. Decreasing the user data group write timer can help reduce the risk of
data loss for a volume after a crash.
The following describes how the User Data Group Write Timer setting affects the risk of data loss
after a crash and file system performance:
 3 seconds (default): Risk and performance are optimized for most scenarios.
 1 second: Lower risk, typical of NSS on NetWare and OES 2 Linux; slower performance.
 Greater than 3 seconds: Higher risk; faster performance.
To set the UserDataGroupWriteTime parameter, issue the following command as the root user
in the NSS Console (nsscon):
nss /UserDataGroupWriteTime=seconds
Replace seconds with the maximum number of seconds to elapse before forcing user data blocks
to be written to the volume. The default value of seconds is 3.
For example, to group write user data blocks every 1 second, enter
nss /UserDataGroupWriteTime=1
27.3.5
Viewing Group Write Policies
1 Open a terminal console, then log in as the root user.
2 At the terminal console prompt, enter
nsscon
3 In nsscon, enter
nss /status
4 Look for the following settings in the Current NSS Status report:
Journal Flush Timer               = 1 second
Metadata Flush Timer              = 40 seconds
User Data Flush Timer             = 3 seconds
Current Metadata Group Write Size = 64k (16)
Metadata Block Group Write Limit  = 80000k (20000)
27.4
Tuning I/O Schedulers
I/O scheduling controls how read and write operations are submitted to block storage devices, such
as single ATA disk systems, solid state drives, RAID arrays, and SAN storage arrays. A scheduler
queues and sequences requests to reduce latency and optimize I/O performance. Configuring the
best-suited I/O scheduler depends on the application workloads and storage hardware. Different
combinations of workloads and storage solutions require different tuning strategies. You can
configure I/O scheduler settings per storage device to optimize the performance based on the
applications that access files on that device.
This section describes I/O scheduling options that can help reduce seek operations, prioritize I/O
requests, and ensure that an I/O request is carried out before a given deadline. These tuning options
are specifically designed for sequential physical read performance and have no impact on cache read
and write performance.
For more information on the tuning parameters, see Tuning I/O Performance in the SLES 11 SP3
System Analysis and Tuning Guide.
 Section 27.4.1, “Single I/O Context (SIOC),” on page 339
 Section 27.4.2, “REQ_NOIDLE (Submit Request Without Idling),” on page 340
 Section 27.4.3, “Deadline Scheduler,” on page 340
 Section 27.4.4, “Completely Fair Queuing (CFQ),” on page 340
27.4.1
Single I/O Context (SIOC)
The SIOC option allows you to specify the same I/O context for all the kernel worker threads used by
the NSS file system to read file contents from disk, using a Finite State Machine (FSM). Every kernel
thread maintains an I/O context structure that is used by the I/O scheduler to identify the per-process
CFQ queues into which the I/O request can be temporarily queued before being dispatched to the
underlying logical/physical device queue. The I/O scheduler anticipates additional consecutive disk
block requests from the same thread, so the requests can be merged and submitted as a single request
instead of multiple small requests. Enabling SIOC improves I/O performance by ensuring that all
related requests are placed into the same CFQ queue even if they are submitted from different NSS
kernel worker threads.
This option helps improve I/O performance for file access patterns using multiple NCP/CIFS protocol
clients.
To see whether this option is ON or OFF, execute nsscon /find=iocontextsharing
To change the setting:
1 Edit /etc/opt/novell/nss/nssstart.cfg.
2 Add /IOContextSharing.
3 Reboot the server.
27.4.2
REQ_NOIDLE (Submit Request Without Idling)
The REQ_NOIDLE NSS file system option helps physical (non-cached) reads of files from a single file
system client. The option uses a flag provided by the Linux block layer to tell the CFQ I/O scheduler
to place the submitted I/O request in separate CFQ queues during reads or writes, with no waiting
for additional merges. The block requests submitted to the CFQ I/O scheduler are given to the
underlying logical/physical disk driver without any idling.
To see whether this option is ON or OFF, execute nsscon /find=req_noidle_enabled
To change the setting:
1 Edit /etc/opt/novell/nss/nssstart.cfg.
2 Add /REQ_NOIDLE_Enabled.
3 Reboot the server.
27.4.3
Deadline Scheduler
The deadline scheduler applies a service deadline to each incoming request. This caps per-request
latency and ensures good disk throughput. Service queues are prioritized by deadline expiration,
making this a good choice for real-time applications, databases, and other disk-intensive
applications.
To view the current scheduler, enter the following command:
cat /sys/block/{DEVICE-NAME}/queue/scheduler
To change the scheduler, enter the following command:
echo deadline > /sys/block/{DEVICE-NAME}/queue/scheduler
This option is not persistent. When a device is rebooted, the setting returns to the default.
For a multi-path environment, change the I/O scheduler at the device mapper level.
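Because the echoed setting does not survive a reboot, one common way to persist it is a udev rule. The following is a hedged sketch that assumes sd* disks and an illustrative rule file name:

```
# /etc/udev/rules.d/60-io-scheduler.rules (file name is illustrative)
# Apply the deadline scheduler whenever a matching disk appears
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="deadline"
```

Reload the rules with udevadm control --reload-rules (or reboot) for the change to take effect.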
27.4.4
Completely Fair Queuing (CFQ)
The Completely Fair Queuing (CFQ) scheduler provides a good compromise between throughput
and latency by treating all competing I/O processes alike. The algorithm assigns each thread a time
slice in which it is allowed to submit I/O to disk, so each thread gets a fair share of I/O throughput. It
also allows you to assign I/O priorities, which are taken into account during scheduling decisions.
Tune the slice_idle parameter to improve backup performance.
To see the current value of the slice_idle parameter, use the following command:
cat /sys/block/device/queue/iosched/slice_idle
To tune the slice_idle parameter, use the following command:
echo 0 > /sys/block/device/queue/iosched/slice_idle
This is not persistent. When a device is rebooted, the setting returns to the default.
For a multi-path environment, change the I/O scheduler at the device mapper level.
28
Monitoring the Status of the NSS File
System and Services
This section describes the following methods for monitoring the status of Novell Storage Services:
 Section 28.1, “Monitoring Status of NSS Devices, Pools, and Volumes with iManager,” on
page 341
 Section 28.2, “Monitoring Compression and Salvage Statistics,” on page 342
 Section 28.3, “Monitoring Quotas and Space Usage for NSS Pools and Volumes,” on page 344
 Section 28.4, “Monitoring File System Parameters,” on page 345
28.1
Monitoring Status of NSS Devices, Pools, and Volumes with
iManager
1 Use the following table to determine where to go to view the status of your NSS storage devices,
pools, and volumes.
To monitor the status of:
 Devices: Section 10.3, “Viewing Details for a Device,” on page 110
 Partitioned free space: Section 10.7, “Viewing Partitions on a Device,” on page 115
 Partition: Section 12.3, “Viewing Details for a Partition,” on page 132
 Software RAIDs: Section 13.3, “Viewing a List of Software RAID Devices on a Server,” on page 143
 Software RAID details: Section 13.4, “Viewing Details of a Software RAID Device,” on page 144
 NSS pools: Section 15.7, “Viewing Pools on a Server,” on page 171
 NSS pool details: Section 15.8, “Viewing Pool Details,” on page 172
 NSS volumes in a pool: Section 15.10, “Viewing Volume Information for a Pool,” on page 174
 NSS volume details: Section 18.6, “Viewing the Details of an NSS Volume,” on page 219
 NSS volume quota and space usage: Section 18.7, “Viewing Properties of an NSS Volume,” on page 220, then click the Quotas tab
 NSS volume attributes: Section 18.7, “Viewing Properties of an NSS Volume,” on page 220, then click the Attributes tab
 File and folder properties: Section 20.1, “Configuring File System Trustees, Trustee Rights, Inherited Rights Filters, and Attributes,” on page 247
28.2
Monitoring Compression and Salvage Statistics
For each volume, the Volume Properties Statistics page in iManager reports statistics about the
compressed and salvageable files in the volume, the GUID of the volume, and the block size being
used.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select the volume that you want to manage.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. It opens by default to
the Attributes tab.
5 Click the Statistics tab to view the compression and salvage statistics for the selected volume.
If the compression attribute is set, the Compression report shows statistics of all the compressed
files for the selected volume.
Compressed Space: The amount of space in the volume in use by compressed files.
Files: The total number of files in the volume, broken down by the following categories:
 Not Deleted: The total number of files in the volume that are currently available to users.
 Deleted: The total number of files in the volume that are deleted but not yet purged from the system.
 Uncompressed: The total number of files in the volume that are not stored in compressed form.
If the Salvage Files attribute is enabled, the Salvage report shows statistics about deleted files
that have not yet been purged.
Minimum Keep Seconds: Minimum time (in seconds) to keep deleted files.
Maximum Keep Seconds: Maximum time (in seconds) to keep deleted files.
Low Water Mark: If the amount of free space drops below this percentage, the file system begins purging deleted files.
High Water Mark: If there are files to delete, the autopurging process stops when the amount of free space reaches this percentage.
Next Scheduled Purge: Date and time of the next purge.
Purgeable Space: Amount of space in the volume that is occupied by deleted files that are queued for purging.
Unpurgeable Space: Amount of space in the volume that is occupied by files.
Deleted Files: The number of deleted files in salvage.
Oldest Deleted Time: Time line for deleted files. The file system purges the files in the same order they were deleted.
Other information reported includes the volume’s GUID and block size.
GUID: The Global Unique Identifier (GUID) number that NSS assigns to the volume. This number is necessary so your file system can locate the specific volume.
Block Size: The maximum amount of data committed to a single write process. Possible sizes include 4, 8, 16, 32, or 64 KB.
28.3
Monitoring Quotas and Space Usage for NSS Pools and Volumes
For each volume, the Volume Properties > Quota Usage page in iManager reports the space usage for the
selected volume and the pool that contains the volume.
1 In iManager, click Storage > Volumes.
For instructions, see Section 9.1.5, “Accessing Roles and Tasks in iManager,” on page 83.
2 Select a server to manage.
For instructions, see Section 9.1.6, “Selecting a Server to Manage,” on page 84.
3 In the Volumes list, select the volume that you want to manage.
4 Click Properties.
The Properties page has three tabs: Attributes, Statistics, and Quota Usage. It opens by default to
the Attributes tab.
5 Click the Quota Usage tab to view the current space usage statistics for the selected volume.
Volume Usage reports the amount of space on the volume, categorized by usage: Compressed,
Other In-use, Purgeable, and Available.
Quota: Indicates whether the volume has a quota. If there is a quota, the volume can grow only to the size of the quota. If there is no quota, the volume can grow to the amount of available physical space in the pool.
Used Space: The amount of space currently in use, broken down by the following categories:
 Compressed: If the Compressed attribute is enabled, this is the amount of space in the volume containing data that is compressed.
 Other In-Use Space: The amount of space in the volume containing data that is not compressed.
 Purgeable Space: The amount of space in the Salvage system that you can use as free space. You can manually purge deleted files to free space.
 Available: Available free space that is not in the Salvage system.
Pool Usage reports the amount of space on the pool, categorized by usage: Free, Purgeable,
Used, and Booked.
Available Space: The amount of space in the pool that is not currently in use, broken down by the following categories:
 Free: The total amount of free space that is available on the pool.
 Purgeable: The amount of space in the Salvage system that you can use as free space. You can manually purge deleted files to free space.
Total Space: The total amount of space allocated to the pool, broken down by the following categories:
 Used: The total amount of space currently in use by all volumes on the pool.
 Overbooked/Booked: If the amount of space assigned to the pool’s volumes exceeds the amount of physical space available in the pool, the Overbooked field shows the amount of exceeded space. Otherwise, the Booked field shows the total amount of space in all volumes in the pool. Volumes that do not have a quota are not calculated in the total combined quota.
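As a worked example of the Overbooked figure (all numbers illustrative): if a 100 GB pool holds two volumes with 60 GB quotas each, the combined quotas exceed the physical space by 20 GB:

```shell
# How the Overbooked value is derived (illustrative numbers)
pool_gb=100
combined_quota_gb=$(( 60 + 60 ))    # sum of all volume quotas in the pool
echo "$(( combined_quota_gb - pool_gb )) GB overbooked"    # 20 GB overbooked
```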
28.4
Monitoring File System Parameters
 Section 28.4.1, “Using iManager to Monitor NSS File System Parameters,” on page 346
 Section 28.4.2, “Using Novell Remote Manager to Browse Directories and Files,” on page 346
 Section 28.4.3, “Using Novell NetStorage to Monitor NSS File System Parameters,” on page 346
28.4.1
Using iManager to Monitor NSS File System Parameters
Use the File Manager plug-in to iManager to browse files and directories, and to manage access
control for them. For information, see the following:
 Section 20.1, “Configuring File System Trustees, Trustee Rights, Inherited Rights Filters, and
Attributes,” on page 247
 Chapter 25, “Managing Files and Folders on NSS Volumes,” on page 317
28.4.2
Using Novell Remote Manager to Browse Directories and Files
You can browse directories and files by using Novell Remote Manager for Linux. For information, see
“Viewing File Systems” in the OES 11 SP2: Novell Remote Manager Administration Guide.
28.4.3
Using Novell NetStorage to Monitor NSS File System Parameters
Use Novell NetStorage to monitor the status of the NSS file system parameters listed in the following
table:
Use this link: Volumes
To do these tasks:
 Monitor or restrict disk space usage by directory (directory quotas)
 Purge or salvage deleted volumes or files
 Configure file and directory Trustee rights and Inherited Rights filters
 View or change a list of Set parameters
29
Troubleshooting the NSS File System
This section describes some issues you might experience with Novell Storage Services (NSS) and
provides suggestions for resolving or avoiding them.
For additional troubleshooting information, see the Novell Support Web site (http://
support.novell.com).
 Section 29.1, “Cannot Connect to Target Servers from iManager,” on page 348
 Section 29.2, “Cannot See NSS Devices, Pools, or Volumes,” on page 348
 Section 29.3, “eDirectory Errors When Creating NSS Pools or Volumes,” on page 348
 Section 29.4, “File Compression is Not Working,” on page 349
 Section 29.5, “Linux Fails to Execute Dismount, Umount, or Delete Commands for NSS
Volumes,” on page 350
 Section 29.6, “Multipath Devices Are Not Resolved,” on page 350
 Section 29.7, “NSS Volume Disappears from the NCP Console (ncpcon),” on page 350
 Section 29.8, “Pathname Exceeds 255-Character Limit,” on page 351
 Section 29.9, “Server Hangs When Using an NSS Volume as a Netatalk Share,” on page 351
 Section 29.10, “Slow Mount Performance for NSS Volumes Using the UNIX Name Space,” on
page 351
 Section 29.11, “Software RAID 1 Fails to Recognize a Replacement Device,” on page 351
 Section 29.12, “Tuning NSS Volumes for GroupWise Servers,” on page 352
 Section 29.13, “Unknown Users as File Owners,” on page 352
 Section 29.14, “Using Scripts to Create or Manage NSS Pools and Volumes,” on page 352
 Section 29.15, “NFS Volume Mount Failure Blocks the Mounting of Other File Systems,” on
page 352
 Section 29.16, “In the Process of Cluster Resource Going Offline, NSS Pool Deactivation Hangs,”
on page 353
 Section 29.17, “Linux iManager Cannot Manage an OES Linux/NetWare Server,” on page 353
 Section 29.18, “Selecting the Device/Partition Option Which is Corrupted,” on page 353
 Section 29.19, “Salvaging an Encrypted Volume Fails After Renaming,” on page 353
 Section 29.20, “Pool Activation Fails After Aborting ravsui Verify or Rebuild Process,” on
page 354
 Section 29.21, “Trustee Entries are Stored in Different Formats in NetWare and Linux Platforms,”
on page 354
 Section 29.22, “Debugging of nlvm Results in the Display of Warning Messages in the
nlvm_debug.log File,” on page 354
 Section 29.23, “NLVM Pool Move Fails and Deactivates the Pool,” on page 354
 Section 29.24, “Changing NameSpace of a Volume Does Not Work if the Volume is in Use on the
Server When NameSpace is Changed,” on page 355
 Section 29.25, “Metamig Error When Copying Hidden or System File From Source Volume to
Target Volume,” on page 355
 Section 29.26, “iManager Taking Too Long to Load User Quotas or Error in Loading User
Quotas,” on page 355
 Section 29.27, “Error 23361 “No Space for Move Stamps” During a Pool Move,” on page 355
 Section 29.28, “Role-Based Users Unable to Access Rights to Files and Folders under Modify Group,” on page 356
29.1 Cannot Connect to Target Servers from iManager
If you are having difficulty connecting to servers from iManager, it might be because you are using
an unsupported protocol between the iManager server and the target server that you want to
manage. For information, see “Interoperability of Protocols for the iManager Server and Target
Server” in Section 7.5, “Cross-Platform Issues for Management Tools,” on page 71.
29.2 Cannot See NSS Devices, Pools, or Volumes
If you cannot see your volumes or the devices associated with those volumes, you might have a
connection failure. Connection failures can occur if an adapter, cable, or switch in the path between
the server and the storage device fails for any reason. If there is a connection failure, repair or
reconfigure the equipment.
29.3 eDirectory Errors When Creating NSS Pools or Volumes
 Section 29.3.1, “eDirectory Error 613 When Creating an NSS Pool or Volume,” on page 348
 Section 29.3.2, “eDirectory Error 672 When Creating an NSS Pool,” on page 348
 Section 29.3.3, “eDirectory Error 601 When Creating NSS Volume,” on page 349
29.3.1 eDirectory Error 613 When Creating an NSS Pool or Volume
When you create an NSS pool or volume with NSSMU on your Linux server, Error 613 is returned if
no eDirectory Read/Write replica is available in the server’s tree, because the Storage objects cannot
be written to eDirectory. The error occurs because NCP (NetWare Core Protocol) cannot map to the
pool or volume.
To avoid this problem, make sure the server has an eDirectory Read/Write replica. You can also add
the NSS volume path to the /etc/opt/novell/ncpserv.conf file for NCP Server.
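If you register the volume path manually, the entry in /etc/opt/novell/ncpserv.conf is a single VOLUME line. The following is a hedged sketch; the volume name VOL1 and its mount path are hypothetical placeholders, and you should restart NCP Server after editing the file:

```
# Hypothetical entry in /etc/opt/novell/ncpserv.conf for an NSS volume
# named VOL1 mounted at the default NSS mount path:
VOLUME VOL1 /media/nss/VOL1
```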
29.3.2 eDirectory Error 672 When Creating an NSS Pool
When creating an NSS pool by using NSSMU or iManager, an Error 672 is returned if there is no NSS
Admin object in the NetIQ eDirectory database for the server (such as HOSTNAMEadmin.context).
NSS requires an NSS Admin object for every server; without it, management does not work.
348
OES 11 SP2: NSS File System Administration Guide for Linux
NOTE: The NSS Admin object must be placed in the default location, that is, the same container
where the Server object exists.
This situation occurs if you move a server across trees without also moving its NSS Admin object
from one tree’s eDirectory database to the other.
If you re-create the NSS Admin object, you are then able to successfully create pools.
To re-create the NSS Admin object, run nssAdminInstall at a Linux terminal console as the root
user:
1 Open a terminal console, then log in as the root user.
2 At the console prompt, enter the following (all on one line):
nssAdminInstall -a adminname.context -P -o HOSTNAMEadmin.context
For example, the nssadmin user object is in the form of server1admin.example, where server1
is the server name and example is the container where the server object also resides.
nssAdminInstall -a admin.example -P -o cn=server1admin.o=example
After the NSS Admin object is created, update the eDirectory Pool object. For information, see
Section 15.14, “Updating eDirectory Pool Objects,” on page 179.
29.3.3 eDirectory Error 601 When Creating NSS Volume
If the eDirectory context of the nssadmin user object changes, NSS volume or pool management
operations (for example, create or rename) might return error 601. To avoid this problem, perform
the following steps:
1. Remove the nss admin user object from eDirectory using iManager. (By default, nss admin user
resides in the container where the server object resides).
For example, the nssadmin user object is in the form of server1admin.example, where server1
is the server name and example is the container where the server object also resides.
2. Run nssAdminInstall to recreate the nssadmin object.
For example, nssAdminInstall -a admin.example -P -o cn=server1admin.o=example
where admin.example is the context of the eDirectory admin user and server1admin.example
is the nssadmin user object to be recreated.
NOTE: Run the nssAdminInstall command on all the servers whose nssadmin object context is
changed.
29.4 File Compression is Not Working
If you cannot compress a file, check the following:
 Check the Compression attribute for the volume to make sure it is enabled. To apply the file
compression option to an existing NSS volume: From iManager, click Storage > Volumes >
Properties > Attributes, select Compression, then click OK.
 Check to see if the Do Not Compress (dc) attribute for the file or directory is set.
 Review other factors that affect compression, as detailed in “Factors Affecting Compression” on
page 267.
29.5 Linux Fails to Execute Dismount, Umount, or Delete Commands for NSS Volumes
When NCP Server is active, it always keeps a file open on a volume. As a result, issuing dismount,
umount, or delete commands for an NSS volume fails, whether the command is issued from the
command line, in NSSMU, or in iManager.
This occurs because Linux does not allow you to dismount a volume if a file is open on that volume.
Linux does not provide a method of identifying which files are open on volumes.
To dismount or delete an NSS volume on your OES 11 SP2 server:
1 At the server prompt, open the NCP Console by entering
ncpcon
2 Dismount the volume from NCP.
The volume is no longer accessible or visible to NCP clients.
3 In iManager, dismount or delete the volume, as desired.
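The steps above can be sketched at a terminal as follows. VOL1 is a hypothetical volume name; the console output is omitted:

```
# Step 1: dismount the volume from NCP so that NCP Server no longer
# holds a file open on it (ncpcon can also be run interactively):
ncpcon dismount VOL1
# Step 2: with NCP released, the volume can be dismounted or deleted
# with your usual tool (iManager, NSSMU); a plain Linux dismount of
# the mount point also succeeds at this point:
umount /media/nss/VOL1
```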
29.6 Multipath Devices Are Not Resolved
If you have multiple interconnect paths in your server-to-storage configuration, you must use
multipath management software to resolve the multiple paths into a single multipath device.
See Chapter 14, “Managing Multipath I/O to Devices,” on page 161.
29.7 NSS Volume Disappears from the NCP Console (ncpcon)
An NSS volume might not be found by the ncpcon utility if the volume’s mount point is renamed
(the directory path is changed). For example, this might occur if you change the mount point’s
directory path from /media/nss/volumename (the default path) to /targetpath/volumename.
When an NSS volume is created, the NCP server gets the path to the volume and caches it, assuming
that it never changes. When you later run the NCP Console (ncpcon) utility and enter the volume
command, it reports the volumes that are still found at their respective mount points. Only the
volumes that are still valid as compared to the list in cache are reported.
Use the following methods to avoid or resolve this problem:
 If you know in advance that you want to modify the default path of the volume’s mount point,
make sure to create the NSS volume with iManager and change the default mount point as you
configure the volume. For example, if you are setting up an NSS volume for home directories,
change the mount point from /media/nss/home to /home.
Creating the volume in NSSMU does not allow for the mount point to be changed during the
volume setup, only afterwards.
 To modify the mount point for an existing volume, you can use either iManager or NSSMU to
change it, then restart eDirectory by entering the following at a terminal console prompt:
/etc/init.d/ndsd restart
Restarting eDirectory causes the NCP volume cache to be updated.
29.8 Pathname Exceeds 255-Character Limit
Pathnames for files on the server can have up to 255 characters, including the server name, volume
name, path delimiters, filename, and file extension. If a user maps a drive letter to a location deep
down the directory path, and subsequently creates a pathname that exceeds the limit for the name on
the server, the file cannot be saved. Even if the file’s path does not exceed 255 characters with respect
to the mapped drive letter, it exceeds the maximum length on the server.
When mapping a drive letter to a folder deep down the directory path, users should adopt
conventions for naming files and directories and for creating directory structures so that they do not
exceed the 255-character pathname limit on the server.
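As an illustration of how the limit is counted, the shell can report string lengths. The server name, volume, and path components below are hypothetical:

```shell
# Full server-side path (hypothetical names): every component counts
# toward the 255-character limit, including server and volume names.
full_path="SERVER1/VOL1:/projects/2014/engineering/widgets/specifications/drafts/revisionA/widget_assembly_tolerances_and_materials.txt"
echo "Server-side length: ${#full_path}"

# The same file seen from a drive letter mapped deep in the tree looks
# short to the user, but the server still counts the full path above.
mapped_tail="revisionA/widget_assembly_tolerances_and_materials.txt"
echo "Mapped-drive length: ${#mapped_tail}"
```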
29.9 Server Hangs When Using an NSS Volume as a Netatalk Share
The CopyCat application used by Netatalk uses sparse files for its database. Netatalk tries to create a
CopyCat database as a sparse file called .AppleDB in the root of the volume by using memory
mapped IO. This can cause the server to hang if you are using an NSS volume as the Netatalk share
because of the limited support in NSS for this combination.
NSS has limited support for memory-mapped files, primarily to support loading programs. NSS does
not fully support memory-mapped files, especially if the application uses sparse files.
29.10 Slow Mount Performance for NSS Volumes Using the UNIX Name Space
Normally, NSS volumes mount in seconds even if the volume contains large directories with millions
of files. You might observe that an NSS volume mounts slowly if it contains large directories and if
you are mounting it with a UNIX name space.
To avoid this problem, mount NSS volumes with a Long name space. For information, see
Section 18.11, “Mounting NSS Volumes with Linux Commands,” on page 225.
29.11 Software RAID 1 Fails to Recognize a Replacement Device
If a drive that is part of a software RAID 1 device fails, your system might not recognize the
replacement drive and might not begin remirroring automatically.
To recover:
1 Reboot the system to re-scan devices and recognize the replacement drive.
2 Unmount the NSS volumes on the software RAID device to stop the I/O.
3 Expand the software RAID 1 device and specify space from the replacement drive as a new
RAID segment. Remirroring begins automatically.
4 Allow the remirror to complete before mounting the NSS volumes.
29.12 Tuning NSS Volumes for GroupWise Servers
NSS performance has been dramatically improved since NetWare 6.0. As a result, the only tuning that
you need to do to enhance the performance of GroupWise on NSS is to disable the salvage feature by
entering the following command at the NSS Console (nsscon) as the root user.
nss /NoSalvage=<volumename | all>
29.13 Unknown Users as File Owners
In an NSS volume, a file’s owner is the user who created it. The Novell trustee model for file systems
is used to control access to files, so ownership is a consideration only when enforcing user quotas.
Ownership is tracked with the user’s GUID, not the username. If the username ever becomes invalid
(such as if the user is deleted from the system), the file continues to be charged to that user’s GUID. In
a space usage report, the GUID appears where a valid username would normally be. There is no
limit on the space that can be associated with unknown users. Authorized
users can continue to use the files without interruption or incident.
For the user space quota, the total disk space used by the file continues to be associated with the file’s
assigned owner’s GUID. User quotas can be enforced only for valid users. You must change the file’s
owner to a valid user if you want the files to be included in that user’s quota.
An administrator or administrator equivalent user can assign a new owner when necessary.
Changing file ownership requires the Supervisor right for the file’s parent directory and the file. Use
whatever tools you normally use to modify the file’s ownership.
29.14 Using Scripts to Create or Manage NSS Pools and Volumes
There is an XML interface that allows you to write scripts, such as Perl scripts, that will create and
manage NSS pools and volumes. The API set can be downloaded from NDK: Virtual File Services
(http://developer.novell.com/wiki/index.php/Virtual_File_Services_for_NetWare). These APIs are for
NSS services on NetWare 6.0, NetWare 6.5, OES 1, OES 2, and OES 11 SP2 servers.
Look for sample scripts at the link above or in Cool Solutions. For example:
 NSS Pool Lister for Linux (http://www.novell.com/coolsolutions/tools/18074.html)
 NSS Volume Lister for Linux (http://www.novell.com/coolsolutions/tools/18082.html)
29.15 NFS Volume Mount Failure Blocks the Mounting of Other File Systems
If an NFS volume fails to mount, the file systems listed after it might also fail to mount. The NFS
volume mount can fail because of a problem with the NFS server or a network issue. To prevent a
failed NFS mount from blocking the subsequent file systems, reduce the default value of the retry
parameter for the NFS volume.
To change the value of the retry parameter of the NFS volume, do the following:
1 Open the /etc/fstab file in a text editor.
2 Modify the retry value in the NFS volume’s mount options. The default value is 10,000 minutes,
which is approximately one week.
Set retry to the number of minutes you want the mount to be retried. For example, if you change
the retry value to 5, the mount is retried for 5 minutes before it is abandoned.
3 Save the file.
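A sketch of the resulting /etc/fstab entry; the server name, export path, and mount point below are hypothetical:

```
# Hypothetical NFS entry in /etc/fstab with the retry option reduced
# from the default 10,000 minutes to 5 minutes:
nfsserver:/export/data  /mnt/data  nfs  rw,retry=5  0 0
```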
29.16 In the Process of Cluster Resource Going Offline, NSS Pool Deactivation Hangs
To troubleshoot this issue, add the ncpcon dismount <volumeName> command in the cluster unload
script before the nss /pooldeactivate command. For example:
del secondary ipaddress 10.10.102.111
CLUSTER CVSBIND DEL NSS_CLUSTER_DDDD_SERVER 10.10.102.111
NUDP DEL NSS_CLUSTER_DDDD_SERVER 10.10.102.111
ignore_error ncpcon dismount dvol
nss /pooldeactivate=DDDD /overridetype=question
Add a dismount command for each volume in the pool.
29.17 Linux iManager Cannot Manage an OES Linux/NetWare Server
The Linux iManager cannot manage an OES Linux or NetWare server on which CIMOM is not
running. Verify that the CIMOM client is running on the server that you are trying to manage.
29.18 Selecting the Device/Partition Option Which is Corrupted
NSSMU: On selecting the Device/Partition option, back trace messages are displayed instead of pool
details.
NLVM: On executing the nlvm list pool command, back trace messages are displayed instead of
pool details.
iManager: On selecting the Device/Partition option, the error “Error 22: Unable to get specified space
on device <device name>” is displayed instead of pool details.
29.19 Salvaging an Encrypted Volume Fails After Renaming
If you have renamed an encrypted volume, salvaging the volume fails with the following error:
NSSMU: Error 22703: 22703 Error mounting the volume
iManager: Error: Could not salvage iman_enc1. The volume cannot be mounted.
To resolve this error, mount the volume manually. When mounting the volume, provide the
password.
Troubleshooting the NSS File System
353
29.20 Pool Activation Fails After Aborting ravsui Verify or Rebuild Process
If you run a pool verify or rebuild using ravsui and interrupt it with Ctrl+C, the ravsui process in
user space is killed, but the background verify or rebuild kernel threads are not. Pool activation
succeeds only after these background threads complete.
You can reattach the ravsui UI to the running background rebuild or verify thread by using the
ravsui -a command; the progress is then displayed.
29.21 Trustee Entries are Stored in Different Formats in NetWare and Linux Platforms
The trusteeinfo.xml file shows the DN entries in “typeless” form in NetWare and in a “typed”
form in Linux. After pool or volume migration to Linux from NetWare, to convert all the DN entries
to the “typed” form, enter the following command at the NSS Console (nsscon) as the root user:
nss /ForceBackgroundCheck
29.22 Debugging of nlvm Results in the Display of Warning Messages in the nlvm_debug.log File
When you use the debug feature for nlvm, the nlvm_debug.log file displays some warning messages
such as “File descriptor <file descriptor> (<file name>) leaked on <Binary name>
invocation”. To turn off these warning messages, do the following:
1. Open the /etc/profile file.
2. Add the line export LVM_SUPPRESS_FD_WARNINGS=1.
3. Open a new terminal session so that the change takes effect.
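The line to add to /etc/profile is standard shell syntax (note the lowercase export and no space around the equals sign):

```shell
# Suppresses LVM's "File descriptor ... leaked" warnings, which otherwise
# appear in nlvm_debug.log when nlvm debugging is enabled.
export LVM_SUPPRESS_FD_WARNINGS=1
echo "$LVM_SUPPRESS_FD_WARNINGS"   # prints 1
```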
29.23 NLVM Pool Move Fails and Deactivates the Pool
If a hardware error is encountered during an nlvm move, the pool move fails, and the pool is
automatically deactivated. Currently, no error is returned, but the pool will not activate.
The pool move cannot continue because of the hardware error. You must delete the move (nlvm
delete move [<poolname>|<movename>]) to clear the move. After the move is deleted, you can
activate the pool.
Because of the hardware error, you cannot use the nlvm move command to move the pool. You can
move the pool's data to another SAN device by restoring files from backup media, or by copying the
files from the old pool to a new pool.
29.24 Changing NameSpace of a Volume Does Not Work if the Volume is in Use on the Server When NameSpace is Changed
To resolve the issue, do the following:
1. Change the namespace.
2. Dismount the volume by using F7.
Ensure that the mount point is not in use.
3. Mount the volume.
For a clustered volume, you must modify the load script to specify the name space option when
mounting the volume. For information, see Section 18.10, “Configuring the Name Space for an NSS
Volume,” on page 224.
29.25 Metamig Error When Copying Hidden or System File From Source Volume to Target Volume
When files are copied from the source volume to the target volume, hidden and system files are not
selected, so they are missing on the target volume. However, trustee.nlm or metamig backs up trustee
information for all the files. While restoring trustee information for a hidden file, metamig returns the
error “Error finding file (zERR_PATH_MUST_BE_FULLY_QUALIFIED)”.
29.26 iManager Taking Too Long to Load User Quotas or Error in Loading User Quotas
When iManager is used to load user quotas for a large number of users, it might take a few minutes to
load the quotas. At times, the browser might become unresponsive, or you might get an "HTTP
STATUS 500" error because of insufficient Java heap memory. To resolve this issue, increase the Java
heap memory in /etc/opt/novell/tomcat6/conf/novell-tomcat6.conf by using the following
procedure:
1 Edit novell-tomcat6.conf and add JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx2048m" after the line
JAVA_OPTS="-Djava.library.path=/opt/novell/eDirectory/lib64:/var/opt/novell/tomcat6/lib:/usr/lib64".
2 Save novell-tomcat6.conf, then restart Tomcat by using the rcnovell-tomcat6 restart
command.
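The edited region of novell-tomcat6.conf then reads as follows. The lines use shell variable syntax, with the second line appending to the existing JAVA_OPTS value:

```shell
# Existing line shipped with the OES 11 SP2 iManager/Tomcat configuration:
JAVA_OPTS="-Djava.library.path=/opt/novell/eDirectory/lib64:/var/opt/novell/tomcat6/lib:/usr/lib64"
# Added line: grows the initial Java heap to 1 GB and the maximum to 2 GB.
JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx2048m"
echo "$JAVA_OPTS"
```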
29.27 Error 23361 “No Space for Move Stamps” During a Pool Move
This error occurs when the source pool has no free space to store the move stamps during a pool
move operation. To resolve this issue, expand the source pool by a minimum size of 12 MB. For more
information, see the TID on NSS pool move operation displays Error 23361 "No Space for Move
Stamps" (http://www.novell.com/support/kb/doc.php?id=7012837).
29.28 Role-Based Users Unable to Access Rights to Files and Folders under Modify Group
By default, a role-based user who has access to the Group Management role cannot access Rights to
Files and Folders under Modify Group until the File System Rights page is assigned to that user.
To assign the File System Rights page to RBS users:
1 Log on to iManager with administrative credentials.
2 Click Configure > Role Based Services > RBS Configuration.
The list of collections for the RBS setup is displayed.
3 Click an RBS setup collection > Property Book tab and then select Modify Group.
4 From the Actions menu, click Page List. The Edit Page List is displayed.
5 Select the File System Rights page from the Available Pages list, add it to the Assigned Pages list,
then click OK.
The File System Rights page is now assigned to RBS users.
30 Security Considerations
This section describes security issues and recommendations for Novell Storage Services for Novell
Open Enterprise Server 11 SP2. It is intended for security administrators or anyone who is
responsible for the security of the system. It requires a basic understanding of NSS. It also requires
the organizational authorization and the administrative rights to effect the configuration
recommendations.
 Section 30.1, “Security Features of NSS,” on page 357
 Section 30.2, “Preventing Unauthorized Access,” on page 358
 Section 30.3, “Securing Sensitive Information During Remote Management Sessions,” on
page 359
 Section 30.4, “Protecting Data During Backup and on Backup Media,” on page 359
 Section 30.5, “Preventing Exposure of Sensitive Data in a Core Dump,” on page 360
 Section 30.6, “Preventing Exposure of the Encryption Password in a Log,” on page 360
 Section 30.7, “Using Data Shredding to Prevent Unauthorized Access to Purged Files,” on
page 364
 Section 30.8, “Acquiring Security Equivalence Vectors for NSS Users,” on page 364
 Section 30.9, “Protecting Modules Responsible for Security Equivalence Vectors,” on page 365
 Section 30.10, “Controlling File System Access and Attributes for NSS Volumes,” on page 365
 Section 30.11, “Displaying Directory and File Attributes for NSS Volumes,” on page 365
 Section 30.12, “Security Best Practices for zAPIs,” on page 366
 Section 30.13, “Controlling Physical Access to Servers and Resources,” on page 366
 Section 30.14, “Securing Access to the Servers With a Firewall,” on page 366
 Section 30.15, “Creating Strong Passwords,” on page 366
30.1 Security Features of NSS
Issue/Feature: Encrypted volume support
Description: Encrypted NSS volumes meet the legal standard of making data inaccessible to software that circumvents normal access control, such as if the media were stolen.
Recommendation: Encrypt data volumes that contain mission critical data or sensitive data. Use a strong encryption password and protect the password.
For More Information: “Managing Encrypted NSS Volumes” on page 239; Section 30.15, “Creating Strong Passwords,” on page 366

Issue/Feature: Storage plug-in for iManager
Description: iManager requires eDirectory authentication and SSL connections between your Web browser and the iManager server, and between the iManager server and the target server being managed.
Recommendation: Use an Administrator user identity and a strong password.
For More Information: Section 9.1.7, “Storage Plug-In Quick Reference,” on page 84; Section 30.15, “Creating Strong Passwords,” on page 366; iManager 2.7.x website (https://www.netiq.com/documentation/imanager27/)

Issue/Feature: Files and Folders plug-in for iManager
Description: Manage trustees, trustee rights, and inherited rights filters. View effective rights. Manage file system attributes.
Recommendation: Use an Administrator user identity and a strong password.
For More Information: Section 9.1.8, “Files and Folders Plug-In Quick Reference,” on page 90; Section 20.1, “Configuring File System Trustees, Trustee Rights, Inherited Rights Filters, and Attributes,” on page 247; Section 30.15, “Creating Strong Passwords,” on page 366; iManager 2.7.x website (https://www.netiq.com/documentation/imanager27/)

Issue/Feature: NSSMU
Description: NSSMU is executed locally on the server.
Recommendation: Secure physical access to servers.
For More Information: Section 30.13, “Controlling Physical Access to Servers and Resources,” on page 366

Issue/Feature: Novell Remote Manager
Description: Novell Remote Manager requires eDirectory authentication and SSL connections between your Web browser and the Novell Remote Manager running on the server being managed.
Recommendation: Use an Administrator user identity or equivalent, and use a strong password.
For More Information: Section 9.5.3, “Accessing Novell Remote Manager,” on page 101; Section 30.15, “Creating Strong Passwords,” on page 366; OES 11 SP2: Novell Remote Manager Administration Guide; NW 6.5 SP8: Novell Remote Manager Administration Guide
30.2 Preventing Unauthorized Access
NSS includes the following features to help prevent access to data that circumvents normal access
control:
 Encrypted volume support
Encrypted volume support encrypts the volume, which makes data inaccessible to software that
circumvents normal access control, such as if the media were stolen. It meets U.S. Government
security standards. For information, see “Managing Encrypted NSS Volumes” on page 239.
 Data shredding
The Data Shredding attribute supports shredding of purged files (up to 7 times), which erases
files completely. It meets the U.S. Government security standards. For information, see
Section 20.3, “Using Data Shredding to Prevent Access to Purged Files,” on page 257.
 Multiple server access prevention for pools
Multiple server access prevention (MSAP) ensures data integrity by preventing unauthorized
access to shared media in a storage area network. For information, see Section 15.13, “Preventing
Pools from Activating on Multiple Servers,” on page 175.
 Novell Trustee model for access control
NSS uses the Novell Trustee model to greatly simplify access control management in the file
system. It restricts visibility of data structures so that users see only the subdirectories they have
rights to see, rather than the whole directory tree as on other file systems.
For information about the Novell Trustee model and NSS file system rights, see the OES 11 SP2:
File Systems Management Guide.
30.3 Securing Sensitive Information During Remote Management Sessions
When you are managing OES 11 SP2 servers using iManager, all the information including sensitive
data is typically sent via a Secure HTTP (HTTPS) connection between iManager and CIMOM on the
Linux server you are managing. This ensures that sensitive data is not exposed during transmission.
However, if CIMOM is not running on the Linux server you are managing, the plug-ins attempt to
connect via NCP or CIFS. These connections are insecure and are a security concern only when
transmitting sensitive information.
Beginning with OES 2, storage plug-ins have been modified to prevent this potential exposure of
sensitive information. Where tasks involve the exchange of sensitive information between iManager
and the Linux server you are managing, the plug-in now checks to see if CIMOM is running and
available on the Linux server you are managing before it attempts to execute the command. If
CIMOM is not running for some reason, it returns an error message and does not execute the task.
The plug-ins do not allow sensitive data to be sent across insecure connections (such as NCP or CIFS/
SAMBA) to the Linux server. You get an error message explaining that the connection is not secure
and that CIMOM must be running before you can perform the task.
30.4 Protecting Data During Backup and on Backup Media
Backups of NSS volumes are not encrypted, unless it is a feature of the backup software or hardware
you use. Even if data is stored on an encrypted NSS volume, it is transmitted and backed up in an
unencrypted format.
Use backup methods that protect data transmitted between the server and the backup media,
according to your security needs.
Use one of the following methods to encrypt the data for backup:
 Use backup software that is able to encrypt data when you back it up. This method has
performance and manageability challenges, especially for managing encryption keys.
 Use an encryption appliance that encrypts sensitive backup media as data is backed up.
If you transport and store media offsite, use a company that specializes in media shipment and
storage. This way, your tapes are tracked via barcodes, stored in environmentally controlled conditions,
and are handled by a company whose reputation rests on its ability to handle your media properly.
30.5 Preventing Exposure of Sensitive Data in a Core Dump
When a core dump occurs for an encrypted NSS volume, data from the encrypted volume might be
included in the core dump as unencrypted data dumped from cache memory. To prevent exposure of
encrypted data in the dump, when you are prompted to choose between writing all of memory in the
core dump or excluding the NSS cache, select the option to exclude the NSS cache.
This applies also for volumes that are not encrypted but contain confidential data. Although the data
is not normally encrypted, you might not want to allow unauthenticated access to the information.
30.6 Preventing Exposure of the Encryption Password in a Log
This section describes NSS debugger logging features so you can identify when these logs are turned
on and turn them off in your operational environment.
 Section 30.6.1, “NSS Logging,” on page 360
 Section 30.6.2, “NSS Logging and Security Implications,” on page 361
 Section 30.6.3, “Logging Communications between NSS and the _ADMIN Volume,” on page 361
 Section 30.6.4, “Logging Communications between NSS and eDirectory, NICI, or Linux User
Management,” on page 363
30.6.1 NSS Logging
On OES, most of the NSS code runs in kernel space, but some portions are required to run in user
space. To communicate across the boundary between user and kernel space, some internal
mechanisms were implemented. For debugging purposes, some logging features were added to track
these communications between user and kernel space. These logging features are slow and
cumbersome, and are intended for use by Novell support engineers to help diagnose any problems
that arise. They are not intended for everyday use, and seriously impact performance when they are
turned on.
There are two main areas where logging is built into the system. The first is the ability to log all
XML communication to and from the _ADMIN volume. The second is the ability to log NSS kernel
requests to communicate with eDirectory, NICI, and LUM, all of which run in user space.
OES 11 SP2: NSS File System Administration Guide for Linux
30.6.2 NSS Logging and Security Implications
When working with encrypted volumes it is important to realize that the volume password and key
information is exchanged between user and kernel space as encrypted volumes are created and/or
mounted. If you have logging enabled on the Linux server when you enter the encryption password,
your password and volume key information might show up in the log file.
You must be the root user or an equivalent user with root user privileges to perform the steps
required to enable logging, disable logging, or read /var/log/messages. This prevents ordinary
users from manipulating the logging environment. We strongly recommend that you protect the
physical access to the server and the root user passwords to prevent unauthorized access to your
servers.
Even though the logging mechanisms are root user protected, we strongly recommend that you
make sure logging is disabled whenever you plan to enter the encryption password for an encrypted
NSS volume on your system. You enter an encryption password when you create the volume and
when you mount the volume for the first time after any system start or reboot.
30.6.3 Logging Communications between NSS and the _ADMIN Volume
Applications such as NSSMU and Perl scripts communicate with NSS via the _admin volume. In
these communications, the volume’s encryption password is passed in the clear. There are two
utilities that can log these exchanges, the adminusd daemon and the nss /vfs commands in
NSSCON. Logs are written to /var/log/messages.
 “Prerequisite” on page 361
 “Enabling or Disabling adminusd Logging” on page 361
 “Enabling or Disabling VFS Logging” on page 362
Prerequisite
You must be the root user or an equivalent user with root user privileges to perform the steps
required to enable logging, disable logging, or read /var/log/messages. This prevents ordinary
users from manipulating the logging environment.
Enabling or Disabling adminusd Logging
On your OES server, an NSS daemon called adminusd is installed into the /opt/novell/nss/sbin
directory. It is run from the startnss.bsh script. Output data is written to the /var/log/messages
file.
 “Enabling adminusd Logging” on page 361
 “Disabling adminusd Logging” on page 362
Enabling adminusd Logging
At a Linux terminal console, do the following to enable adminusd logging:
1 Log in as the root user.
2 Kill the adminusd daemon.
3 Run the daemon with logging turned on by entering
adminusd -l
Using the -l option enables logging of all communication to and from the _ADMIN volume in
/var/log/messages.
Disabling adminusd Logging
At a Linux terminal console, do the following to disable adminusd logging:
1 Log in as the root user.
2 Kill the adminusd daemon.
3 Run the daemon with logging turned off by entering
adminusd
Not using the -l option turns logging off.
4 Delete and purge the adminusd log files in /var/log/messages.
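The enable and disable procedures above follow the same pattern: stop the daemon, then restart it with or without the -l option. As an illustrative sketch only (the daemon path and option come from this section; the actual restart line is left commented out so the script can be rehearsed on any Linux host without an OES server):

```shell
#!/bin/sh
# Sketch: restart adminusd with or without the -l logging option.
# The binary path is the one documented above; the real restart is
# commented out so this is safe to rehearse off-server.
ADMINUSD=/opt/novell/nss/sbin/adminusd

toggle_adminusd_logging() {
    flag=""
    [ "$1" = "on" ] && flag=" -l"
    pkill -x adminusd 2>/dev/null   # stop the running daemon, if any
    echo "would run: $ADMINUSD$flag"
    # $ADMINUSD $flag               # uncomment on a live OES server (as root)
}

toggle_adminusd_logging on
```

Remember to delete and purge the log files in /var/log/messages after disabling logging, as step 4 above describes.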
Enabling or Disabling VFS Logging
In the NSS Console (NSSCON), the VFS option for NSS can log communications between NSS and
the _ADMIN volume. The logged data is displayed on the NSSCON screen and is also written to
/var/log/messages.
Enabling VFS Logging
At a Linux terminal console, do the following to enable VFS logging:
1 Log in as the root user, then enter
nsscon
2 In NSSCON, enter
nss /vfs
Logging is turned on.
Disabling VFS Logging
At a Linux terminal console, do the following to disable VFS logging:
1 Log in as the root user, then enter
nsscon
2 In NSSCON, enter
nss /novfs
Logging is turned off.
3 Exit NSSCON.
4 If the terminal console logging feature was on, turn it off, then delete and purge the logged
session.
5 Delete and purge the VFS log files in /var/log/messages.
30.6.4 Logging Communications between NSS and eDirectory, NICI, or Linux User Management
All internal NSS kernel space requests for NetIQ eDirectory, NICI, and Linux User Management are
routed through an interface called the NDP (Novell Data Portal). NDP has a user space daemon
(ndpapp) and a kernel module (ndpmod). In communications between ndpapp and ndpmod, the
volume’s encryption password is obscured, but the obfuscation can easily be broken. Both ndpapp and
ndpmod have logging capability, and both write their log data to /var/log/messages.
 “Prerequisite” on page 363
 “Enabling or Disabling ndpapp Logging” on page 363
 “Enabling or Disabling ndpmod Logging” on page 364
Prerequisite
You must be the root user or an equivalent user with root user privileges to perform the steps
required to enable logging, disable logging, or read /var/log/messages. This prevents ordinary
users from manipulating the logging environment.
Enabling or Disabling ndpapp Logging
On your OES server, an NSS daemon called ndpapp is installed into /opt/novell/nss/sbin. It is run
from the startnss.bsh script.
 “Enable ndpapp Logging” on page 363
 “Disable ndpapp Logging” on page 363
Enable ndpapp Logging
At a Linux terminal console, do the following to enable ndpapp logging:
1 Log in as the root user.
2 Kill the ndpapp daemon.
3 Run the daemon with logging turned on by entering
ndpapp --debug=nn
Replace nn with the desired log level. Set the log level to 1 or higher to turn logging on. The
higher the number, the more detailed the logged output.
Disable ndpapp Logging
At a Linux terminal console, do the following to disable ndpapp logging:
1 Log in as the root user.
2 Kill the ndpapp daemon.
3 Run the daemon with logging turned off by entering
ndpapp
Running ndpapp without the --debug option turns logging off.
4 Delete and purge the log files in /var/log/messages.
Enabling or Disabling ndpmod Logging
 “Enabling ndpmod Logging” on page 364
 “Disabling ndpmod Logging” on page 364
Enabling ndpmod Logging
At a Linux terminal console, do the following to enable ndpmod logging:
1 Log in as the root user, then enter
echo nn >/proc/driver/ndp/debug
Replace nn with the desired log level. Set the log level to 1 or higher to turn logging on. The
higher the number, the more detailed the logged output.
Disabling ndpmod Logging
At a Linux terminal console, do the following to disable ndpmod logging:
1 Log in as the root user, then enter
echo 0 >/proc/driver/ndp/debug
Setting the log level to 0 turns logging off.
2 Delete and purge the log files in /var/log/messages.
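The two procedures above reduce to writing a number into the ndpmod /proc entry. A minimal sketch follows; the /proc path is the one shown in the steps, and the NDP_DEBUG variable is introduced here only so the sketch can be rehearsed against a scratch file instead of the live entry:

```shell
#!/bin/sh
# Sketch: set the ndpmod debug level by writing it to the /proc entry.
# NDP_DEBUG defaults to the documented path; it is a variable only so
# this can be rehearsed against a scratch file without root access.
NDP_DEBUG=${NDP_DEBUG:-/proc/driver/ndp/debug}

set_ndpmod_log_level() {
    # 1 and above enables logging (higher is more verbose); 0 disables it.
    echo "$1" > "$NDP_DEBUG"
}

# Rehearsal against a scratch file instead of the live /proc entry:
NDP_DEBUG=$(mktemp)
set_ndpmod_log_level 3
set_ndpmod_log_level 0   # logging off
```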
30.7 Using Data Shredding to Prevent Unauthorized Access to Purged Files
If the Data Shredding attribute for an NSS volume is disabled, unauthorized access to purged deleted
files is possible. An individual can extend a file, LSEEK to the end of the existing file data, and then
read the data. This returns the decrypted leftover data that is in the block.
To close this vulnerability, make sure to enable Data Shredding for your NSS volumes by specifying
an integer value of 1 to 7 (the number of times purged data is overwritten) for the Data Shredding
attribute. A value of 0 disables Data Shredding.
For information, see Section 20.3, “Using Data Shredding to Prevent Access to Purged Files,” on
page 257.
30.8 Acquiring Security Equivalence Vectors for NSS Users
When a user authenticates to the network, the system calculates the user’s Security Equivalence
Vector (SEV) based on information in the user’s profile in NetIQ eDirectory. NSS validates the user’s
SEV against the file system trustee rights of the directory and file the user is attempting to access. In
OES, SEVs are acquired differently for NSS on NetWare and NSS on Linux.
For NSS on NetWare, whenever a user connects to the NSS file system, NetWare retrieves the user’s
SEV from eDirectory and maintains it as part of the connection structure for the user’s session. NSS
automatically retrieves the user’s SEV from the connection structure.
For NSS on Linux, NSS caches the SEV locally in the server memory, where it remains until the server
is rebooted or the user is deleted from eDirectory. NSS polls eDirectory at a specified interval for
updates to the SEVs that are in cache. Command line switches are available in the NSS Console utility
(nsscon) to enable or disable the update, to set the update interval from 5 minutes to 90 days
(specified in seconds), and to force an immediate update of security equivalence vectors. For
information, see Section A.32, “Security Equivalence Vector Update Commands,” on page 397.
30.9 Protecting Modules Responsible for Security Equivalence Vectors
The Linux modules in user space that are responsible for providing Security Equivalence Vectors for
NSS users can be replaced without the kernel module being aware of it. Make sure that the directory
/opt/novell/nss/sbin/ and the files involved (ndpapp and idbrokerd) can only be modified by
the root user. For example, make root the owner and set permissions to restrict access for Group
and Other users.
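One hedged way to apply this recommendation is sketched below. The directory and file names are the ones given above; the owner argument exists only so the sketch can be rehearsed against a scratch directory without root privileges:

```shell
#!/bin/sh
# Sketch: make root the owner of the SEV-related binaries and strip all
# Group/Other access, per the recommendation above. The owner argument
# exists only so this can be rehearsed without root privileges.
lock_down_sev_binaries() {
    dir=$1
    owner=${2:-root}
    for f in "$dir/ndpapp" "$dir/idbrokerd"; do
        chown "$owner" "$f"   # root should own the binaries
        chmod 700 "$f"        # no access for Group and Other users
    done
}

# On a live OES server, as root:
#   lock_down_sev_binaries /opt/novell/nss/sbin
```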
30.10 Controlling File System Access and Attributes for NSS Volumes
To ensure that users have the appropriate effective file system rights to data on NSS volumes, make
explicit file system trustee assignments, grant security equivalences for users, and filter inherited
rights. To simplify the assignment of rights, you can create Group and Organizational Role objects in
NetIQ eDirectory, then assign users to the groups and roles.
Set file system attributes for directories and files on an NSS volume to specify how a file or directory
is used.
For information about controlling file system access and attributes for NSS volumes, see “
Understanding File System Access Control Using Trustees” in the OES 11 SP2: File Systems
Management Guide.
For information about access control issues for NSS, see Section 5.5, “Access Control for NSS,” on
page 53.
30.11 Displaying Directory and File Attributes for NSS Volumes
With NCP Server, NSS on Linux supports the Novell Trustee model, providing the same file system
trustee rights and attributes for its directories and files as NSS on NetWare. Management tools
provide similar methods on each platform for configuring rights and attributes. For information, see the OES
11 SP2: File Systems Management Guide.
NSS on Linux displays some of the NSS file system directory and file attributes in the Linux POSIX
directory and file permissions, including the Hidden, Read Only, Read/Write, and Execute attributes.
These are not intended as a direct mapping of POSIX rights and behave differently. NSS does not
support the POSIX set-user-ID mode bit and set-group-ID mode bit. For information, see “Viewing
Key NSS Directory and File Attributes as Linux POSIX Permissions” in the OES 11 SP2: File Systems
Management Guide.
30.12 Security Best Practices for zAPIs
The zAPIs for NSS create the character special device /dev/zapi. Because zAPIs run at a level
beneath where auditing tools track access and use, you should consider restricting access on the
/dev/zapi device to the root user and setting the device’s POSIX permissions to mode=0400.
If you are using AppArmor, add the following line to the AppArmor profile of any program that uses
zAPIs for NSS:
/dev/zapi r,
You should grant root access only to members of the administrative group called wheel. The root
user is a member of the wheel group by default. Users in the wheel group can access the device by
using the su or sudo commands to obtain root privileges for any necessary tasks.
To add a user to the wheel group:
1 Log in as the root user.
2 In a terminal console, enter
usermod -a -G wheel username
Replace username with the username of the user being added to the wheel group. The -a option
appends the wheel group to the user’s existing supplementary groups.
Regardless of the POSIX access rights set for the device, the Novell trustee model is enforced for the
trustees and trustee access rights you define on /dev/zapi for individual users.
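The device restrictions described above (root ownership, mode=0400) might be applied as in the following sketch. The path argument exists only so the sketch can be rehearsed against a scratch file; the chown step requires root on a live server and is left commented out:

```shell
#!/bin/sh
# Sketch: restrict the zAPI character device as recommended above.
# The argument defaults to the real device; pass a scratch file to rehearse.
restrict_zapi_device() {
    dev=${1:-/dev/zapi}
    chmod 0400 "$dev"        # mode=0400: read-only, owner only
    # chown root "$dev"      # uncomment on a live server (requires root)
}
```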
The key is specific to a user rather than a user-process pair. Therefore, two processes running as the
same user can use the same key without requiring the second process to actually open the file. This
behavior is the same as for zAPIs running for NSS on NetWare.
30.13 Controlling Physical Access to Servers and Resources
 Servers must be kept in a physically secure location with access by authorized personnel only.
 The corporate network must be physically secured against eavesdropping or packet sniffing.
30.14 Securing Access to the Servers With a Firewall
Use firewalls between public access points and servers to prevent direct access to data by
would-be third-party intruders.
30.15 Creating Strong Passwords
Make sure to employ security best practices for passwords, such as the following:
 Length: The minimum recommended length is 6 characters. A secure password is at least 8
characters; longer passwords are better.
 Complexity: A secure password contains a combination of letters and numbers. It should
contain both uppercase and lowercase letters and at least one numeric character. Adding
numbers to passwords, especially when added to the middle and not just at the beginning or the
end, can enhance password strength. Special characters such as &, $, and > can greatly improve
the strength of a password.
Do not use recognizable words, such as proper names or words from a dictionary, even if they
are bookended with numbers. Do not use personal information, such as phone numbers, birth
dates, anniversary dates, addresses, or ZIP codes. Do not invert recognizable information;
inverting bad passwords does not make them more secure.
 Uniqueness: Do not use the same passwords for all servers. Make sure to use separate
passwords for each server so that if one server is compromised, all of your servers are not
immediately at risk.
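As an illustration only, the length and complexity guidelines above (at least 8 characters, mixed case, at least one numeric character) can be checked with a small shell function. Special characters and the uniqueness rule are not covered by this sketch:

```shell
#!/bin/sh
# Sketch: check a candidate password against the guidelines above:
# at least 8 characters, upper- and lowercase letters, and a digit.
is_strong_password() {
    p=$1
    [ "${#p}" -ge 8 ] || return 1                # Length
    case $p in *[A-Z]*) ;; *) return 1 ;; esac   # uppercase letter
    case $p in *[a-z]*) ;; *) return 1 ;; esac   # lowercase letter
    case $p in *[0-9]*) ;; *) return 1 ;; esac   # numeric character
    return 0
}

is_strong_password 'Sto4rage9x' && echo "meets the guidelines"
```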
A NSS Commands
This section describes commonly used Novell Storage Services command line options for Novell
Open Enterprise Server 11 SP2 servers. The commands are grouped by management tasks.
 Section A.1, “Using NSS Commands,” on page 370
 Section A.2, “Help and Find Commands,” on page 371
 Section A.3, “Access Time Command,” on page 372
 Section A.4, “Background File System Checker Commands,” on page 372
 Section A.5, “Cache Management Commands,” on page 372
 Section A.6, “Compression Commands,” on page 374
 Section A.7, “Data Shredding Commands,” on page 377
 Section A.8, “Daylight Savings Time Commands,” on page 377
 Section A.9, “eDirectory Storage Object ID Commands,” on page 378
 Section A.10, “Extended Attributes (XAttr) Commands,” on page 378
 Section A.11, “Event File List (EFL) Command,” on page 381
 Section A.12, “Hard Links Commands,” on page 381
 Section A.13, “I/O Write Commands,” on page 383
 Section A.14, “LAF Audit Log Messages Commands,” on page 384
 Section A.15, “Load Commands for the nssstart.cfg File,” on page 385
 Section A.16, “Low Storage Alert Messages Commands,” on page 385
 Section A.17, “Migration Support Commands for Near-Line Storage,” on page 386
 Section A.18, “Modified File List (MFL) Commands,” on page 386
 Section A.19, “Multipath I/O Failover Commands,” on page 386
 Section A.20, “Multiple Server Activation Prevention (MSAP) Commands,” on page 387
 Section A.21, “noatime and atime Commands,” on page 388
 Section A.22, “noatime and nodiratime Support for Linux open, mount, nfsmount, and /etc/fstab,” on page 389
 Section A.23, “Opportunistic Locking Commands,” on page 391
 Section A.24, “Pool Freeze and Thaw Commands,” on page 391
 Section A.25, “Pool Management Commands,” on page 392
 Section A.26, “Pool Snapshot Commands,” on page 393
 Section A.27, “Pool Verify and Rebuild Commands,” on page 393
 Section A.28, “POSIX Permission Mask Command,” on page 393
 Section A.29, “Quotas Commands,” on page 394
 Section A.30, “Read Ahead Blocks and Allocate Ahead Blocks Commands,” on page 394
 Section A.31, “Salvage and Purge Commands,” on page 396
 Section A.32, “Security Equivalence Vector Update Commands,” on page 397
 Section A.33, “Sendfile API Support Command,” on page 398
 Section A.34, “Status Commands,” on page 398
 Section A.35, “Visibility Rebuild Command,” on page 399
 Section A.36, “Volume Management Commands,” on page 400
 Section A.37, “ZID Commands,” on page 402
A.1 Using NSS Commands
 Section A.1.1, “Issuing NSS Commands at Command Consoles,” on page 370
 Section A.1.2, “Making NSS Commands Persist Through a Reboot,” on page 370
 Section A.1.3, “Permissions,” on page 370
 Section A.1.4, “Descriptions,” on page 371
A.1.1 Issuing NSS Commands at Command Consoles
Enter NSS commands at the NSS Console (NSSCON, nsscon(8)) on an OES 11 SP2 server. For
information about NSSCON, see Section B.4, “nsscon,” on page 413.
A.1.2 Making NSS Commands Persist Through a Reboot
NSS commands issued at the command line do not persist through a server reboot. To make nonpersistent command settings persist automatically through a server reboot, place the commands in
the nssstart.cfg file, which NSS reads on startup. The file is in the /etc/opt/novell/nss
directory. Some commands cannot be used in the nssstart.cfg file. Refer to the individual
commands for information.
1 In a text editor, create a file called nssstart.cfg in the /etc/opt/novell/nss/ directory.
2 Enter any NSS commands that you want to persist through server reboots.
Each NSS command should be preceded by a forward slash (/) and followed by a space.
For example, the ListXattrNWmetadata option enables the ability to return the
netware.metadata extended attribute for a file or directory at listxattr(2) time. The
ZLSSUpgradeNewVolumeMediaFormat option creates all new NSS volumes with the upgraded
media structure that supports hard links. This applies to volumes on both local and shared
pools.
/numworktodos=40 /ListXattrNWmetadata /ZLSSUpgradeNewVolumeMediaFormat
3 Save and close your nssstart.cfg file.
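The steps above could also be scripted, for example during server provisioning. In this sketch, the CFGDIR variable exists only so the script can be rehearsed in a scratch directory; on a live server the file must be written as root:

```shell
#!/bin/sh
# Sketch: create nssstart.cfg with the example settings from the steps
# above. CFGDIR defaults to the real location; override it to rehearse.
CFGDIR=${CFGDIR:-/etc/opt/novell/nss}

write_nssstart_cfg() {
    # Each command is preceded by a forward slash and separated by spaces.
    cat > "$CFGDIR/nssstart.cfg" <<'EOF'
/numworktodos=40 /ListXattrNWmetadata /ZLSSUpgradeNewVolumeMediaFormat
EOF
}

# On a live server (as root): write_nssstart_cfg
```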
A.1.3 Permissions
You must be logged in as the root user, or as a Linux user with equivalent privileges.
A.1.4 Descriptions
The descriptions of commands provide information about the default values, range of valid values,
and persistence of the command.
Default Value
The default value is the setting used for a given server configuration parameter. Initially, the value
reported is the default setting for the parameter. If you modify the value, it reports the actual value.
IMPORTANT: Default values are the best choice for a majority of server configurations, but you can
modify the settings to meet your needs.
Range of Valid Values
The range of valid values establishes the constraints for any particular variable setting.
Persistence
If a command’s setting is persistent, the value or policy you set remains in effect for the server
through any subsequent server reboots until you next modify the settings. If a command is not
persistent, the setting remains in effect only until the next server reboot. Some commands can be
issued in the nssstart.cfg file in order to make the settings persist across reboots.
The file is /etc/opt/novell/nss/nssstart.cfg. For information, see Section A.1.2, “Making NSS
Commands Persist Through a Reboot,” on page 370.
A.2 Help and Find Commands
The help and find options provide information about various NSS switches and a brief description
of the parameter. Enter the options at the nsscon prompt in the NSS Console.
/find search_criteria
To find a particular NSS switch, use the /find switch. Replace search_criteria with the characters
to use in the search. Wildcard searches, where you replace some characters with an asterisk (*),
are allowed.
For example, the following command finds all of the NSS command options that contain the
word “compress”:
nss /find=*compress*
/help or /?
To access online Help for NSS commands, enter one of the following:
nss /help
nss /?
A.3 Access Time Command
nss /(No)UpdateAccessTimeForReaddir=volumename
Enable or disable the ability to update the access time when enumerating directories. Enabled is
the default POSIX behavior.
Default: On
Examples
To enable the access time to be updated when files in a directory are listed, enter
nss /UpdateAccessTimeForReaddir=volumename
To disable the access time from being updated when files in a directory are listed, enter
nss /NoUpdateAccessTimeForReaddir=volumename
A.4 Background File System Checker Commands
The background file system checker checks the integrity of user ID metadata, directory quota
metadata, and files. The output goes to the nsscon prompt. There is no built-in report to follow the
process of a background check.
nss /(No)BackgroundChecking
Enables or disables the background file system checker.
Default: On (enabled)
Examples
To enable background checking, enter
nss /BackgroundChecking
To disable background checking, enter
nss /NoBackgroundChecking
ForceBackgroundCheck
Forces the background file system checker to start.
A.5 Cache Management Commands
Use the commands in this section to manage the cache for NSS volumes.
 Section A.5.1, “Cache Command,” on page 372
 Section A.5.2, “ID Cache Commands,” on page 373
 Section A.5.3, “Cache Monitoring Commands,” on page 373
 Section A.5.4, “UnplugAlways Command for the Read Queue,” on page 373
A.5.1 Cache Command
nss /MinBufferCacheSize=value
Sets the specified minimum number of NSS buffer cache entries, where value is the number of 4KB buffers to assign for NSS.
Default: 30000 for NSS
Range: 10000 for NSS to the amount of memory in KB divided by 4 KB (the block size).
A.5.2 ID Cache Commands
Use the following command at the nsscon prompt in order to synchronize the cache of eDirectory
IDs that is maintained for controlling access to NSS volumes.
nss /IDCacheResetInterval=value
Set the number of seconds between invalidation of the ID cache.
Default: 90000
Range: 0 to 200000000
ResetIDCache
Reset the various eDirectory ID caches.
If you Linux-enable a user who has logged in to the system before being Linux-enabled, be sure
to run the resetidcache command from the NSS Console (nsscon) utility. This allows proper
reporting of ownership, because it resets the mapping of user identities in the ID cache and
forces it to update with the Linux UID for the user.
If you LUM-disable a user, be sure to run the resetidcache command from the NSS Console
(nsscon) utility. Wait 30 minutes before running the command, because the NCP server clears
its cache periodically at 30-minute intervals. This allows proper reporting of ownership,
because it resets the mapping of user identities in the ID cache.
nss /IDCacheSize=value
Sets the maximum number of entries for NSS GUID to ID and ID to GUID cache.
For example: nss /IDCacheSize=256000
Default: 16384
Range: 16384 to 524288
A.5.3 Cache Monitoring Commands
CacheStats
Shows the caching statistics for buffers.
ResetStats
Resets caching and file statistics.
A.5.4 UnplugAlways Command for the Read Queue
nss /(no)UnplugAlways
When enabled, this option allows NSS to unplug the device queue after queuing each read. This
improves performance significantly on certain workloads, such as the Linux copy (cp) command.
NOTE: In OES 2 SP1, the UnplugAlways default setting is on (enabled). OES 2 SP2 onwards, the
UnplugAlways default setting is off (disabled).
From OES 2 SP2 onwards, including OES 11 SP2, the UnplugAlways default setting is off
(disabled). The OES 2 version of this option is available as the patch Novell Storage Services
(NSS) and Novell Cluster Services (NCS) 20080806 (oes2-novell-nss-5503) for 32-bit and 64-bit
architectures. The patch is available on the Novell Downloads Web site
(http://download.novell.com).
Examples
Enable UnplugAlways by entering the following at the nsscon prompt as the root user:
nss /UnplugAlways
Disable UnplugAlways by entering the following at the nsscon prompt as the root user:
nss /NoUnplugAlways
A.6 Compression Commands
Use the commands in this section to manage compression parameters for NSS volumes where the
Compression attribute has been enabled. After compression is enabled for a volume, it cannot be
disabled.
 Section A.6.1, “Server-Level Compression Parameters,” on page 374
 Section A.6.2, “Volume-Level Compression Parameters,” on page 376
A.6.1 Server-Level Compression Parameters
Server-level compression parameters apply to all NSS volumes on the server. For details about each
parameter, see Section 21.2.1, “Understanding Server-Level Compression Parameters,” on page 268.
nss /CompressionDailyCheckStartingHour=value
Default: 0
Range: 0 to 23
Hours are specified by a 24-hour clock: (0=midnight; 23=11 p.m.).
nss /CompressionDailyCheckStopHour=value
Default: 6
Range: 0 to 23
Hours are specified by a 24-hour clock: (0=midnight; 23=11 p.m.).
nss /DaysUntouchedBeforeCompression=value
Default: 14
Range: 0 to 100000 (in days)
nss /(No)EnableFileCompression
Enable file compression on volumes where the Compression attribute is enabled. Immediate
Compress requests are queued until compression is allowed.
Default: On
Range: On or Off
Examples
To enable compression, enter
nss /EnableFileCompression
To disable compression, enter
nss /NoEnableFileCompression
nss /MinimumCompressionPercentageGain=value
The minimum percentage a file must compress in order to remain compressed.
Default: 20
Range: 0 to 50
nss /MaximumConcurrentCompressions=value
The number of simultaneous compressions allowed by the system (simultaneous compressions
can only occur if there are multiple volumes).
Default: 2
Range: 1 to 8
nss /ConvertCompressedToUncompressedOption=value
Specify what the file system does with an uncompressed version of a file after the server has
decompressed it.
IMPORTANT: Before a compressed file can be opened, there must be sufficient space available
on the volume for the uncompressed and compressed copies of the file to coexist while the file is
open.
Default: 1
Range: 0, 1, or 2
0 = Always leave the file compressed.
While the file is open, both the uncompressed and compressed copies of the file coexist on the
volume. If the file is closed without changes, the uncompressed copy of the file is discarded. If
changes are saved, the compressed copy of the file is discarded. After the modified file is closed,
it is queued for immediate compression. Sufficient space must be available for both the
compressed and uncompressed copies of the file to temporarily coexist on the volume in order
for the compression to occur. After successful compression, the uncompressed copy of the
modified file is discarded.
1 = Leave the file compressed until second access if it is read only once during the time specified
by the Days Untouched Before Compression parameter. This is the default behavior for
compression.
While the file is open, both the uncompressed and compressed copies of the file coexist on the
volume. The first time that the file is closed without changes in the specified period, the
uncompressed copy of the file is discarded. The second time that the file is closed without
changes in the specified period, the compressed copy of the file is discarded. If changes are
saved, the compressed copy of the file is discarded. The uncompressed file remains
uncompressed until it meets requirements for being compressed.
2 = Always leave the file uncompressed.
While the compressed file is open, both the uncompressed and compressed copies of the file
coexist on the volume. When the file is closed or when changes are saved, the compressed copy
of the file is discarded. The uncompressed file remains uncompressed until it meets
requirements for being compressed.
nss /DecompressPercentDiskSpaceFreeToAllowCommit=value
The percentage of disk space on a volume that is required to be free in order for file
decompression to permanently change the compressed file version to uncompressed, which
prevents newly uncompressed files from entirely filling up the volume. Compressed files that are
written to are always left uncompressed.
Default: 10
Range: 0 to 75
nss /DecompressFreeSpaceWarningInterval=value
The time interval (in minutes) between displaying warning alerts when the file system is not
permanently changing compressed files to uncompressed files due to insufficient free disk
space.
Default: 31 minutes
Range: 0 to 720 (in minutes)
Setting the interval to 0 turns off the alert.
nss /DeletedFilesCompressionOption=value
Specifies whether and when to compress deleted files. This command presumes that you have
enabled the Salvage attribute for NSS volumes. If Salvage is disabled, deleted files are purged
immediately, so there are no deleted files to compress.
Default: 1
Range: 0, 1, or 2
0 = Do not compress deleted files.
1 = Compress deleted files the next day.
2 = Compress deleted files immediately.
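For example, to persist a server-level compression policy across reboots, parameters from this section could be placed on one line in /etc/opt/novell/nss/nssstart.cfg. The values shown here are illustrative only; verify in each command’s description that it is allowed in nssstart.cfg before using it there:

```
/CompressionDailyCheckStartingHour=1 /CompressionDailyCheckStopHour=5 /DaysUntouchedBeforeCompression=30 /MinimumCompressionPercentageGain=25
```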
A.6.2 Volume-Level Compression Parameters
NSS offers volume-level commands in this section for configuring and monitoring compression on a
specified NSS volume.
CompScreen
Displays the NSS volume compression statistics on the compression screen.
nss /Compression=<volumename | all>
Enables the Compression attribute for the specified volume or for all volumes on the server.
After you enable the Compression attribute, the setting persists for the life of the volume. You
cannot disable compression, but you can set parameters to effectively turn it off. For
information, see Chapter 21, “Managing Compression on NSS Volumes,” on page 263.
nss /StopNormalCompression, or StopNormalCompression
Stops all queued compression for files, based on the compression triggered by a file open or
close.
nss /(No)BGCompression
Allows compression to occur in the background at any time, instead of only within specified
hours.
nss /NoBGCompression
Stops background compression and clears any queued background compression requests.
Compression is then allowed to occur only within the specified hours.
A.7 Data Shredding Commands
nss /(No)DataShredding=volumename:count
Enables or disables the Data Shredding attribute for the specified volume. Specify the number of
times you want to shred data.
Data shredding overwrites purged files with bit patterns up to seven times. Unless you must use
this feature for security reasons, it should be disabled, because data shredding consumes a great
deal of disk I/O bandwidth.
Default: 1
Range: 0 to 7, where 0 indicates no shredding
Examples
To enable data shredding on a volume VOL1 where the purged files are overwritten 7 times,
enter
nss /DataShredding=VOL1:7
To disable data shredding for a volume VOL1, enter
nss /NoDataShredding=VOL1
A.8 Daylight Savings Time Commands
Use the commands in this section to manage daylight savings time for NSS volumes.
nss /DaylightSavingsTimeOffset=value
Issuing this command causes UTC time to be recalculated from local time. Specify the offset
applied in time calculations when daylight savings time is in effect.
Default: +1 (one hour)
Range: 0 to 23
nss /StartOfDaylightSavingsTimeOffset=value
Local date and time when the switch to daylight savings time should occur. Formats include
a simple date and time enclosed in quotes, or a rule enclosed in quotes and parentheses. For
example:
“April 1 2008 2:0:0 am”
“(April Sunday > 1 2:0:0 am)”
“(April Sunday First 2:0:0 am)”
Only rules cause rescheduling for the next year. You must set both the start and end dates before
either is scheduled.
[Value=none]
nss /EndOfDaylightSavingsTimeOffset=value
Local date and time when the switch off of daylight savings time should occur. Formats include
a simple date and time enclosed in quotes, or a rule enclosed in quotes and parentheses. For
example:
“October 31 2008 2:0:0 am”
“(October Sunday <= 31
2:0:0 am)”
“(October Sunday Last 2:0:0 am)”
Only rules cause rescheduling for the next year. You must set both the start and end dates before
either is scheduled.
[Value=none]
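The start and end settings above can be made persistent together. The following is a hedged nssstart.cfg sketch, assuming the one-option-per-line convention this guide describes for /etc/opt/novell/nss/nssstart.cfg; the offset and rule values are illustrative only:

```
# /etc/opt/novell/nss/nssstart.cfg (fragment) -- illustrative values
/DaylightSavingsTimeOffset=+1
/StartOfDaylightSavingsTimeOffset="(April Sunday First 2:0:0 am)"
/EndOfDaylightSavingsTimeOffset="(October Sunday Last 2:0:0 am)"
```

Because both the start and end dates are set, the rule forms are rescheduled for the next year.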
A.9
eDirectory Storage Object ID Commands
Use the commands in this section to remove or update the Storage objects in NetIQ eDirectory.
nss /RemoveObjectIDStore
Remove the object store.
nss /UpdateObjectIDStore
Scan and add all volume objects to an existing object store.
A.10
Extended Attributes (XAttr) Commands
The Extended Attributes (XAttr) extension for NSS provides access to extended attributes for
NSS. It allows you to read, back up, and restore extended attributes of files on NSS volumes. This
section describes options that determine how extended attributes are handled for NSS.
 Section A.10.1, “CtimeIsMetadataModTime Option,” on page 378
 Section A.10.2, “ListXattrNWmetadata Option,” on page 379
 Section A.10.3, “Additional Information,” on page 381
A.10.1
CtimeIsMetadataModTime Option
By default, the Linux ctime is mapped to the NSS create time (CreateTime). Basing ctime on the
NSS metadata modified time (MetadataModifiedTime) would be preferable, but modifying the Linux
ctime behavior might cause unknown complications. NSS therefore provides the
CtimeIsMetadataModTime option so that an administrator can map the metadata modified time,
rather than the NSS create time, to the Linux ctime value when the distinction matters for a
deployment.
The CtimeIsMetadataModTime option can be set persistently in the /etc/opt/novell/nss/nssstart.cfg file, or it can be set from nsscon by a user with root access.
nss /CtimeIsMetadataModTime
Maps the NSS metadata modified time to Linux ctime. This is the default behavior in OES 2 and
later.
nss /noCtimeIsMetadataModTime
Maps the NSS create time to Linux ctime.
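The difference is visible with the Linux stat command. The sketch below uses an ordinary file in /tmp (the path is illustrative); on an NSS volume, whether ctime reflects CreateTime or MetadataModifiedTime depends on the (No)CtimeIsMetadataModTime setting:

```shell
# A metadata-only change (chmod) updates ctime but not mtime.
touch /tmp/ctime-demo.txt
chmod 600 /tmp/ctime-demo.txt
stat -c 'mtime: %y' /tmp/ctime-demo.txt   # content modification time
stat -c 'ctime: %z' /tmp/ctime-demo.txt   # metadata change time
```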
A.10.2
ListXattrNWmetadata Option
 “ListXattrNWmetadata Option” on page 379
 “Security Issues for ListXattrNWmetadata” on page 380
 “Using the Linux cp Command to Copy Files with Extended Attributes” on page 380
 “Using the Linux rsync Command to Copy Files with Extended Attributes” on page 380
ListXattrNWmetadata Option
In OES 2 and later, the NetWare metadata (netware.metadata) extended attribute was added for
files and directories. The ListXattrNWmetadata option for NSS allows a user or application with
root access to select whether the netware.metadata extended attribute is returned for a file or
directory at listxattr(2) time. The ListXattrNWmetadata option is disabled (OFF) by default.
This option is intended for use by indexing or backup programs.
For users or applications without root access (without the CAP_SYS_ADMIN capability), the
listxattr(2) command never lists the netware.metadata extended attribute, regardless of the
ListXattrNWmetadata setting.
The ListXattrNWmetadata option can be set persistently in the /etc/opt/novell/nss/nssstart.cfg file, or it can be set from nsscon by a user with root access as follows:
nss /(No)ListXattrNWmetadata
Enables or disables the ability to return the netware.metadata extended attribute for a file or
directory at listxattr(2) time.
The option is disabled by default in all OES versions. Enable the option if there is a need to use
the Linux xattr functions to access or change NetWare metadata fields by name.
 Off: listxattr() does not return “netware.metadata” as an extended attribute for NSS
files and directories. It is still possible to get extended attributes (getxattr()) and set
extended attributes (setxattr()) by using the specific xattr name:
"netware.metadata"
 On: listxattr() returns “netware.metadata” as an extended attribute for NSS files and
directories.
Examples
To enable the return of netware.metadata information, enter the following in the NSS Console:
nss /ListXattrNWmetadata
To disable the return of netware.metadata information, enter the following in the NSS Console:
nss /NoListXattrNWmetadata
Security Issues for ListXattrNWmetadata
The ListXattrNWmetadata option is available only to the user or application with root access (the
CAP_SYS_ADMIN capability). It is disabled (off) by default.
When this feature is enabled (such as by the backup user or by third-party backup software) and
the user or application has root access, the following occurs:
 When copying NSS files or directories with the Linux cp utility from NSS volumes to NSS
volumes, the cp utility copies the trustees assigned to a file or directory to the destination file or
directory. This means that the old trustees of the file or directory now have visibility into the
destination directory. In addition, the old trustees inherit trustee rights from the destination
directory for other files in that directory.
NOTE: For users or applications without root access (without the CAP_SYS_ADMIN capability),
the trustee information is not copied to the destination directory.
 When copying NSS files with the cp utility from NSS volumes to non-NSS volumes, the cp
utility issues a warning message advising that it could not apply the netware.metadata
extended attribute.
NOTE: For users or applications without root access (without the CAP_SYS_ADMIN capability),
the cp utility does not attempt to apply the netware.metadata extended attribute.
There is no workaround for these two copy-related issues for the user or application with root
access. This is how the Linux cp utility works.
Using the Linux cp Command to Copy Files with Extended Attributes
The Linux cp command changed between OES 1 and OES 2. In OES 1, when listxattr is enabled, the
extended attributes are also copied when you use the cp command as the root user to copy files.
However, in OES 2 and later, you must use the --preserve option in order to copy the extended
attributes.
The man page of cp on OES 2 and later provides the following description of the --preserve option:
--preserve[=ATTR_LIST]
Preserve the specified attributes (default: mode,ownership,timestamps), if possible additional
attributes: links, xattrs, all.
For example, after you enable listxattr, you can copy a file and its netware.metadata by logging
in as the root user, then entering the following at a terminal console prompt:
cp --preserve=all /path/file1 /newpath/file1
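The effect of --preserve=all can be sketched with ordinary files (the paths are illustrative; on NSS volumes the preserved extended attributes would include netware.metadata when listxattr is enabled):

```shell
# Copy a file while preserving mode, ownership, timestamps, and (where the
# target file system supports them) extended attributes.
echo data > /tmp/src.txt
cp --preserve=all /tmp/src.txt /tmp/dst.txt
cmp -s /tmp/src.txt /tmp/dst.txt && echo "contents match"
```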
Using the Linux rsync Command to Copy Files with Extended Attributes
When using the Linux rsync command to copy files with extended attributes, use the rsync -A and -X options. For example:
rsync -A -X -av test/ test2/
Options             Description
-A, --acls          preserve ACLs (implies -p)
-X, --xattrs        preserve extended attributes
-p, --perms         preserve permissions
A.10.3
Additional Information
For information about how to use the XAttr Extension for NSS, see the NDK: XAttr Extension for NSS
(http://developer.novell.com/documentation/xattr/attr_enu/data/bktitle.html).
For information about how to use the Linux listxattr(2) command, see the man page (enter man 2
listxattr at a terminal console prompt).
Novell Cool Solutions has a listxattrs tool you can use to check whether the extended attributes are
returned after enabling /ListXattrNWmetadata. The listxattrs tool can be downloaded from Cool Solutions >
Cool Tools > List Extended Attributes with xattr APIs (http://www.novell.com/coolsolutions/tools/
18206.html).
A.11
Event File List (EFL) Command
For information about the Event File List (EFL) feature for developers, see “FileEvent.xml
Definitions” (http://developer.novell.com/documentation/vfs/vfs__enu/data/ak7gh2x.html) in NDK:
Virtual File Services (http://developer.novell.com/documentation/vfs/vfs__enu/data/bktitle.html).
nss /ResetEFLTree=volumename
Reset the Event File List (EFL) tree on the given volume.
A.12
Hard Links Commands
Use the commands in this section for managing hard links capability for NSS volumes.
 Section A.12.1, “Hard Links Media Upgrade Commands for Existing Volumes,” on page 381
 Section A.12.2, “Hard Links Media Upgrade Commands for New Volumes,” on page 382
 Section A.12.3, “Hard Links Attribute Commands,” on page 382
 Section A.12.4, “Hard Links Management Commands,” on page 383
A.12.1
Hard Links Media Upgrade Commands for Existing Volumes
Use the commands in this section to perform the one-time upgrade for currently active volumes.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=volumename
Upgrades the specified active volume. If it is a shared volume, the clustering system vetoes the
upgrade until all nodes in the cluster are running a supported operating system.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=all
Upgrades all currently active volumes. If a shared volume is encountered, the clustering system
vetoes the upgrade until all nodes in the cluster are running a supported operating system.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=all /include=shared
Upgrades all currently active volumes that are part of a shared pool, but only if the clustering
system permits it.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=all /include=local
Upgrades all currently active volumes that are not part of a shared pool.
nss /ZLSSUpgradeCurrentVolumeMediaFormat=all /include=local,shared
Upgrades all currently active volumes whether they are shared or local. If a shared volume is
encountered, the clustering system vetoes the upgrade until all nodes in the cluster are running
a supported operating system.
A.12.2
Hard Links Media Upgrade Commands for New Volumes
Use the commands in this section to enable all future NSS volumes to automatically be created with
the new media format. Make sure to use the commands on the servers where you plan to use hard
links. After a volume is created, you must manually enable the Hard Links attribute in order to create
hard links on it. For command information, see Section A.12.3, “Hard Links Attribute Commands,”
on page 382.
If the command is issued from the command line, it persists until a server reboot. If the command is
placed in the nssstart.cfg file, it persists across server reboots.
nss /ZLSSUpgradeNewVolumeMediaFormat
Creates all new NSS volumes with the upgraded structure. This applies to volumes on both local
and shared pools.
nss /ZLSSUpgradeNewVolumeMediaFormat /include=shared
Creates all new NSS volumes with the upgraded structure if the new volume is on a shared pool.
However, the clustering system vetoes the upgrade until all nodes in the cluster are running a
supported operating system.
nss /ZLSSUpgradeNewVolumeMediaFormat /include=local
Creates all new NSS volumes with the upgraded structure if the new volume is not on a shared
pool.
nss /ZLSSUpgradeNewVolumeMediaFormat /include=local,shared
Creates all new NSS volumes with the upgraded structure whether the volume is local or
shared. In a cluster, the clustering system vetoes the upgrade until all nodes in the cluster are
running a supported operating system.
A.12.3
Hard Links Attribute Commands
Use the commands in this section to enable or disable the Hard Links attribute for an NSS volume.
The Hard Links attribute cannot be set or viewed in NSSMU or in the Storage plug-in to iManager.
nss /HardLinks=volumename
Enables the Hard Links attribute for the specified volume. This enables hard links to be created
on the volume.
nss /HardLinks=all
Sets the Hard Links attribute for all NSS volumes on the server. This enables hard links to be
created on any volume on the server. Any given hard link can point only to a file on the same
volume.
nss /NoHardLinks=volumename
Disables the Hard Links attribute for the specified volume. Existing hard links continue to
function, but no new hard links can be created on the specified volume.
nss /NoHardLinks=all
Disables the Hard Links attribute for all NSS volumes on the server. Existing hard links continue
to function, but no new hard links can be created on any NSS volume on the server.
A.12.4
Hard Links Management Commands
You can view a report of hard links for a file to identify its primary link and the hard link that
becomes the primary link if the primary link is deleted. For information, see Section 24.7, “Viewing
Hard Links for a File,” on page 315.
nss /ListHardLinks=vol:path\filename.ext
View information about the primary link and hard links for a file on an NSS volume.
Replace path with the file’s primary link path or one of its hard link paths where you want to
start the search. Replace filename.ext with the actual filename of the file, including the
extension.
Use the /volumes option to verify that the Hard Links attribute is enabled.
nss /volumes
View a list of NSS volumes on the server and information about them. In the Attributes column,
the HardLinks attribute is listed if it is enabled for the volume.
A.13
I/O Write Commands
Use the commands in this section to control the write behavior of dirty blocks from the cache to the
disk. These commands are available for OES 2 and later. For more information about using these
commands, see Section 27.3, “Configuring or Tuning Group I/O,” on page 334.
nss JournalGroupWriteTime=seconds
Specify the elapsed time to wait before group writes of journal blocks.
Replace seconds with the maximum number of seconds to elapse before forcing journal blocks to
be written to the volume. The default value of seconds is 1.
Example
To group write journal blocks every 2 seconds, enter
nss JournalGroupWriteTime=2
nss MetadataGroupWriteTime=seconds
Specify the elapsed time to wait before group writes of metadata blocks. Decreasing the
metadata group write timer can help reduce the mount time for the volume after a crash.
Replace seconds with the maximum number of seconds to elapse before forcing metadata blocks
to be written to the volume. The default value of seconds is 40.
Example
To group write metadata blocks every 30 seconds, enter
nss MetadataGroupWriteTime=30
nss UserDataGroupWriteTime=seconds
Specify the elapsed time to wait before group writes of user data blocks. Decreasing the user
data group write timer can help reduce the risk of data loss for a volume after a crash.
Replace seconds with the maximum number of seconds to elapse before forcing user data blocks
to be written to the volume. The default value of seconds is 3.
Example
To group write user data blocks every 1 second, enter
nss UserDataGroupWriteTime=1
nss MetadataGroupWriteLimit=blocks
Specify the maximum number of metadata blocks that can be dirty before a group write is
performed.
Replace blocks with the maximum number of metadata blocks that can be dirty before forcing
them to be written to the volume. The default value of blocks is 20000.
Examples
To decrease the maximum number of dirty metadata blocks to 15,000 for the purpose of
reducing the mount time, enter
nss MetadataGroupWriteLimit=15000
To increase the maximum number of dirty metadata blocks to 30,000 for the purpose of
increasing the file system performance, enter
nss MetadataGroupWriteLimit=30000
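The group I/O timers and limits above can be combined in a hedged nssstart.cfg sketch (assuming the one-option-per-line convention for /etc/opt/novell/nss/nssstart.cfg; the values below are illustrative tuning choices, not recommendations):

```
# /etc/opt/novell/nss/nssstart.cfg (fragment) -- illustrative tuning values
/JournalGroupWriteTime=2
/MetadataGroupWriteTime=30
/UserDataGroupWriteTime=1
/MetadataGroupWriteLimit=15000
```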
A.14
LAF Audit Log Messages Commands
Use the NSS audit log messages commands to enable or disable messages via Lightweight Auditing
Framework (LAF) for NSS trustee changes for NSS volumes on OES 2 and later. When it is enabled,
NSS reports changes for the following subset of NSS events:
 Adding trustees (AddTrustee)
 Removing trustees (RemoveTrustee)
 Setting the inherited rights mask (SetInheritedRightsMask)
The messages are stored in the /var/log/audit/audit.log file. For information about the content
and format of messages in the log, see Section 20.4.1, “Understanding NSS Audit Log Messages,” on
page 259.
nss /(No)LAFAuditTrustee
Enable or disable the generation of audit messages via Lightweight Auditing Framework for
NSS trustee changes for NSS volumes.
After you enable audit log messages, the setting persists until the server reboots; after a
reboot, audit logging is disabled again by default. To make the setting persist across reboots,
add the command to the /etc/opt/novell/nss/nssstart.cfg file.
Default: Off (disabled)
Values: On or Off
Examples
To enable NSS audit messages, enter the following at the nsscon prompt:
nss /LAFAuditTrustee
To disable NSS audit messages, enter the following at the nsscon prompt:
nss /NoLAFAuditTrustee
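To make the setting persist across reboots, the command can be placed in the startup file. A minimal sketch, assuming the one-option-per-line convention for this file:

```
# /etc/opt/novell/nss/nssstart.cfg (fragment)
/LAFAuditTrustee
```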
A.15
Load Commands for the nssstart.cfg File
Use the commands in this section to load the services automatically by placing them in the
nssstart.cfg file in the /etc/opt/novell/nss directory.
nss /(No)SkipLoadModules
If enabled, skips the auto-loading of the NSS support modules.
Default: Off
/NumWorkToDos=value
If used in the nssstart.cfg file, sets the number of WorkToDo entries. Entries can execute
concurrently.
NSS uses WorkToDo entries for tasks such as flushing file metadata to disk in the background.
Increasing the number of WorkToDo entries might be useful on a system that is heavily used.
NSS always reserves 20 WorkToDo entries.
Default: 50
Range: 5 to 100
/zLSS
If it is specified in the nssstart.cfg file, loads only those modules that are essential for zLSS
support.
A.16
Low Storage Alert Messages Commands
nss /(No)StorageAlertMessages
Enables or disables NSS to send Low Storage messages to all users.
Default: On
nss /StorageAlarmThreshold=value
Sets the threshold (in MB) for a Low Storage space warning.
Default: 10
Range: 0 to 1000000
nss /StorageResetThreshold=value
Resets the threshold (in MB) for a Low Storage space warning.
Default: 10
Range: 0 to 1000000
A.17
Migration Support Commands for Near-Line Storage
nss /(No)Migration=<volumename | all>
Enables or disables migration of files on the specified volumes to a third-party near-line storage
system.
This option is used only for third-party vendor applications that provide near-line storage. It is
not a migration tool for migrating data from NetWare to Linux, and it is not associated with
Dynamic Storage Technology.
A.18
Modified File List (MFL) Commands
Use the commands in this section to manage the Modified File List (MFL) feature for NSS volumes.
The MFL attribute enables NSS to create a list of all files modified since the previous backup. The log
is available only through third-party software.
nss /(No)MFL=volumename
Enables or disables the Modified File List attribute for the specified volume.
Examples
To enable the MFL attribute for a given volume, enter
nss /MFL=volumename
To disable the MFL attribute for a given volume, enter
nss /NoMFL=volumename
nss /MFLVerify=volumename
Compares the modified file list with the specified volume and reports any inconsistencies.
nss /FixMFL=volumename
Repairs the modified file list to be consistent with the file system.
nss /GetMFLStatus=volumename
Shows the modified file list status for the specified volume.
A.19
Multipath I/O Failover Commands
The Media Manager multipath I/O management is not available for Linux. For devices on Linux, use
the Linux multipath I/O management tools. For information, see Chapter 14, “Managing Multipath I/O
to Devices,” on page 161.
A.20
Multiple Server Activation Prevention (MSAP) Commands
Use the commands in this section to prevent NSS pools from being concurrently active on multiple
nodes. The management file for pool Multiple Server Activation Prevention (MSAP) is
_admin\manage_nss\pool\poolname\zlss\msap.xml. One file exists for each pool. This file
contains MSAP statistics for the pool. The MSAP attribute is displayed in the enabledAttributes tag of
the poolinfo.xml management file.
For manage.cmd, the pool operation getPoolInfo returns the MSAP tag (<msap>) in the
supportedAttributes tag (<supportedAttributes>) and the enabledAttributes tag
(<enabledAttributes>).
For APIs, the pool feature zpool_feature_msap can be viewed and controlled using the zGetInfo
and zModifyInfo commands.
nss /MSAPServer
Enables MSAP for all the pools on the server. By default, MSAP is enabled for every pool on the
server. We recommend that you never disable MSAP.
nss /NoMSAPServer
Disables MSAP for all the pools on the server. This command remains in effect only until the
server is rebooted. We recommend that you never disable MSAP.
nss /MSAPRebuild=poolname
Rebuilds a corrupt MSAP block. If the MSAP block for a pool becomes corrupt, it prevents a pool
from going into Maintenance state. Use this command to rebuild the MSAP block for a given
pool. Before issuing the command to rebuild, you must deactivate the pool. Rebuilding an
MSAP block does not give the rebuilder ownership of the pool.
nss /PoolMSAP=poolname
Enables MSAP for a given pool on the server. MSAP is enabled the next time the pool is
activated. (Enter the command, deactivate the pool, then activate the pool. MSAP is now
enabled.)
nss /NoPoolMSAP=poolname
Disables MSAP for a given pool. Use the command when the pool is activated. MSAP is disabled
the next time the pool is activated. (Enter the command, deactivate the pool, then activate the
pool. MSAP is now disabled.)
nss /MSAPServerID=id
Sets the ID for MSAP to use to uniquely identify the server.
nss /MSAPIDDisplay
Displays current IDs used by MSAP.
nss /pools
Displays the message Multi-Use Detect for NSS pools that have MSAP enabled.
A.21
noatime and atime Commands
The /noatime and /atime commands for a volume allow the administrator to control whether access
times are updated when files and directories are read. They are available for NSS on OES 2 SP1 and
later. The setting persists across reboots.
Using /noatime is useful for backup, news servers, and mail servers where the extra disk activity
associated with updating the access time is not desired. Avoiding the writes associated with updating
the access time can result in measurable performance gains.
WARNING: Use the /noatime switch with caution. Do not use the /noatime switch on an NSS
volume if the programs and scripts that are used to manipulate its data require access times to be
updated. The program or script cannot work as designed, and performance can significantly
decrease.
For example, the following programs are affected by using the /noatime switch:
 NSS Compression: Because the access time is not updated, the compression algorithm
eventually considers that all of the files on the NSS volume are unused, and it compresses them.
If a file is accessed multiple times in a specified time period, the compression algorithm cannot
detect that the access occurred, and it does not decompress the often-used files. Every time a
compressed file is accessed, the file must be uncompressed before giving the file to the user.
When the file is closed, the uncompressed instance is discarded, and the file remains
compressed. Thus, when you enable /noatime on an NSS volume where compression is
enabled, you effectively break how compression works, and cause the read access performance
of the NSS volume to decrease significantly.
 Dynamic Storage Technology: Both global policies and volume policies that rely on the access
time cannot be enforced.
 The "Shift Accessed Shadow Files" global parameter controls how files are automatically
moved from the secondary storage area to the primary storage. A file is shifted if it is
accessed a second time during a specified interval. If the access time is not recorded, the file
is never shifted by the policy. You can use the /noatime switch on an NSS volume when you
use DST if you disable the "Shift Accessed Shadow Files" global parameter and you do not
use "Last Time Accessed" as a filter for a volume policy. The other filters work as expected.
 The "Time Stamp Restrictions" parameter identifies which time stamps to use when applying the
policy. A policy based on "Last Time Accessed" can be enforced only for files with access
times recorded before you disabled the access-time updates.
A.21.1
Using noatime or atime at the Command Line
Issue the following commands in the NSS Console.
nss /atime=volumename
This option enables the updating of access time for both files and directories in a specified
volume so that reading a file updates its access time. This is enabled by default.
nss /noatime=volumename
This option disables the updating of access time for both files and directories in a specified
volume so that reading a file does not update its access time.
A.21.2
Using noatime in a Cluster Load Script
You can also use the noatime option when mounting an NSS volume in a Novell Cluster Services
cluster load script. The atime setting is enabled by default, so it is not necessary to specify it
explicitly.
In the cluster load script, modify the ncpcon mount command for the volume:
exit_on_error ncpcon mount /opt="noatime" volumename=volume_id
When you create a new volume on a cluster-enabled pool, Cluster Services automatically assigns it a
volume ID that is unique in the entire cluster and writes the value to the cluster resource load script
for the pool. When you add the /opt option to the volume’s mount line in the load script, the volume ID
is already part of the command.
For example, for a volume named VOL1 with a volume ID of 254, the command would be:
exit_on_error ncpcon mount /opt="noatime" VOL1=254
A.21.3
Viewing the atime or noatime Setting
You can view the current setting for the atime attribute by using the nss /volumes command at the
NSS Console.
 atime attribute is enabled: the attribute is not displayed.
 atime attribute is disabled: No Access Time is displayed.
A.22
noatime and nodiratime Support for Linux open, mount,
nfsmount, and /etc/fstab
NSS on OES 11 SP2 supports the O_NOATIME option for the Linux open(2) command, and the
noatime and nodiratime options for the mount and nfsmount commands and the /etc/fstab file.
These options have the same objective—that is, to prevent the access time from being updated unless
the access involves a modification of a file’s or directory’s metadata or content.
 noatime: Disables the updating of access time for both files and directories so that reading a file
does not update their access time (atime).
 nodiratime: Disables updating of access time when opening directories so that the access time is
not modified when enumerating directories. This routine also checks that the object is a
directory, which slows down the routine.
These options are useful for backup, news servers, and mail servers where the extra disk activity
associated with updating the access time is not desired. Avoiding the writes associated with updating
the access time can result in measurable performance gains.
IMPORTANT: Typically, you need to use only the noatime option so that atime is not updated for
the accessed file and its directory when the file is accessed. To determine if the noatime and
nodiratime options can help the performance of a particular application, refer to the documentation
and best practices for that application.
For information about applying these options, see the following:
 Section A.22.1, “Linux open(2) Command,” on page 390
 Section A.22.2, “Linux mount Command,” on page 390
 Section A.22.3, “Linux nfsmount Command,” on page 391
 Section A.22.4, “Linux /etc/fstab File,” on page 391
A.22.1
Linux open(2) Command
By default, the open command updates the access time whenever a file is opened or a directory is
accessed. The O_NOATIME option disables the updating of access time, so that reading a file does not
update its access time. Using this option allows you to back up a volume without modifying the
access times (ATIME) of its files.
The man page for the open(2) command defines the O_NOATIME option as follows:
O_NOATIME
(Since Linux 2.6.8) Do not update the file last access time when the file is read. This flag is
intended for use by indexing or backup programs, where its use can significantly reduce the
amount of disk activity.
For information about how to use the O_NOATIME option, see the man page for the Linux open(2)
command by entering man 2 open at a terminal console prompt.
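One way to exercise this flag from the shell, without writing C, is GNU dd, which exposes O_NOATIME as iflag=noatime (the file path is illustrative; the caller must own the file or have the CAP_FOWNER capability):

```shell
# Read a file with O_NOATIME so the access time is not updated.
echo payload > /tmp/noatime-demo.txt
dd if=/tmp/noatime-demo.txt of=/dev/null iflag=noatime status=none
echo "read complete"
```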
A.22.2
Linux mount Command
The noatime and nodiratime options for the mount command are available for all Linux file systems,
including NSS.
IMPORTANT: Typically, you need to use only the noatime option so that atime is not updated for
the accessed file and its directory when the file is accessed. To determine if the noatime and
nodiratime options can help the performance of a particular application, refer to the documentation
and best practices for that application.
To enable noatime or nodiratime options when mounting an NSS volume:
1 Open a terminal console on the server, then log in as the root user.
2 At the terminal console prompt, enter (all on the same line, of course):
mount -t nssvol VOL vol_mountpoint -o name=volname,noatime
or
mount -t nssvol VOL vol_mountpoint -o name=volname,nodiratime
Replace vol_mountpoint with the mount point for the volume, such as /media/nss/NSSV1.
Replace volname with the name of the volume, such as NSSV1.
For example, enter:
mount -t nssvol VOL /media/nss/NSSV1 -o name=NSSV1,noatime
This command mounts the volume NSSV1 with the noatime option so that the file and directory
access times are not updated when a file is accessed but not modified.
A.22.3
Linux nfsmount Command
1 In YaST, click Network Services > NFS Server > NFS Mount.
2 Specify the volume, then enter
rw,no_root_squash,sync,fsid=value,noatime
For information about the other nfsmount options used here, see Section 18.16, “Exporting and
Importing NSS Volumes for NFS Access,” on page 230.
3 Click OK.
A.22.4
Linux /etc/fstab File
The noatime and nodiratime options are available as mount options in the /etc/fstab file for all
Linux file systems, including NSS. To enable the noatime and nodiratime options as default
mounting options for a volume so they are in effect at boot time, modify the entry for the NSS volume
in the /etc/fstab file.
1 Open the /etc/fstab file in a text editor.
2 Modify the entry for the NSS volume by adding noatime or nodiratime as options.
For example, type
volname vol_mountpoint nssvol noauto,rw,name=volname,noatime 0 0
Replace volname with the name of the volume, such as NSSV1. Replace vol_mountpoint with
the mount location of the NSS volume, such as /media/nss/NSSV1.
For example (all on the same line, of course):
NSSV1 /media/nss/NSSV1 nssvol noauto,rw,name=NSSV1,noatime 0 0
3 Save the file.
4 Reboot the server to apply the changes.
A.23
Opportunistic Locking Commands
Opportunistic locking is controlled by NCP Server. Use the NCP Server option
OPLOCK_SUPPORT_LEVEL to manage oplocks for NSS volumes on OES. For more information, see
“Using Opportunistic Locking for NCP File Handling”.
A.24
Pool Freeze and Thaw Commands
Use the commands in this section to temporarily freeze (quiesce) and thaw activity on a pool.
nss /PoolFreeze=poolname
Temporarily quiesces activity on the specified pool.
nss /PoolThaw=poolname
Resumes activity on a specified pool that has previously been frozen.
A.25
Pool Management Commands
Enter the following commands to perform maintenance for NSS pools and volumes.
NOTE: You can execute some commands only from NSSCON, and others from both NSSCON and
the server console.
A.25.1
Pool Status
nss /LVScan=poolname
Available from NSSCON. Scans for logical volumes within the specified active pool.
nss /pools, or pools
Available from both NSSCON and the server console. Shows all of the currently available NSS
pools.
nss /ZLSSPoolIOErrors
Available from NSSCON. Shows the last 100 pool I/O errors reported from the block layer that
have occurred since the server’s last reboot. The block layer does not have many error codes, so
the usefulness of the error may be limited. NSS stores some block layer errors in the /var/log/messages file. Look for BIO_ in the log file to see if any pool I/O error messages are present.
A.25.2 PoolAuto Commands for Load Time
Use the pool commands in this section at load time. Any encrypted volumes on the pool cannot be
automatically activated. Encrypted volumes remain deactive until you activate the volume manually
and provide the password on the first activation after a reboot.
Place the PoolAuto commands in the nssstart.cfg file to make them persist across reboots. The file
is in the /etc/opt/novell/nss directory.
The following commands are available only from NSSCON:
nss /PoolAutoDisplay=<poolname | all>
Displays the pool’s current load-time policies.
nss /PoolAutoActivate=<poolname | all>
Activates specified pools at load time.
nss /PoolAutoDeactivate=<poolname | all>
Leaves specified pools in the deactivated state at load time.
nss /PoolAutoMaintenance=<poolname | all>
Places specified pools in maintenance mode at pool load time.
nss /PoolAutoVerify=<poolname | all>
Verifies the specified pool’s physical integrity at load time.
The following modifier commands can be used with the /PoolAuto commands to automatically
handle pools of status type shared, corrupt, and questions.
392
OES 11 SP2: NSS File System Administration Guide for Linux
nss </PoolAuto_command> [/IncludeType=type | /OverrideType=type]
Use the IncludeType or OverrideType modifier commands to include or override pools of the
specified status type for a given /PoolAuto... command. Possible status types are SHARED,
CORRUPT, and QUESTIONS.
A.26 Pool Snapshot Commands
NSS pool snapshots are supported on OES 2 and later; however, there are no command line options
available. You must use NSSMU to manage NSS pool snapshots. For general information about pool
snapshots, see Chapter 17, “Managing NSS Pool Snapshots,” on page 193. For an NSSMU for Linux
quick reference on snapshots, see “Snapshots” in Table 9-11 on page 92.
A.27 Pool Verify and Rebuild Commands
For NSS volumes, use the RAVSUI utility to verify and rebuild pools. For more information about
rebuilding pools, see Chapter 16, “Verifying and Rebuilding NSS Pools and Volumes,” on page 183.
A.28 POSIX Permission Mask Command
nss /PosixPermissionMask=mask
Specify the octal mask to control which bits in the POSIX permissions (drwxrwxrwx) are allowed
to be set. The octal digits correspond to directory, user, group, and other fields. By default, NSS
sets the POSIX permissions to 0777.
IMPORTANT: NSS uses the Novell trustee model to authenticate and give access to users, not
the Linux ACLs and POSIX permissions.
The command applies to all NSS volumes on the Linux server. In a cluster environment, make sure that the setting is the same on all nodes. This command should normally be added to the /etc/opt/novell/nss/nssstart.cfg file so that it persists across reboots.
Example
For example, SSH requires that the permissions in the Other field be disabled. If you use NSS
volumes for home directories and you want users to have SSH access to them, you must modify
the POSIX permissions to 0770. The following command in the /etc/opt/novell/nss/nssstart.cfg file turns off all of the bits corresponding to the Other field:
/PosixPermissionMask=0770
The setting applies to all NSS volumes on the server. You must also Linux-enable users and enable SSH with Linux User Management.
If the volume is shared in a cluster, make sure to add the command to the nssstart.cfg file on
all nodes and to Linux-enable SSH on all nodes.
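To see why 0770 disables the Other field, note that each permission bit survives only if it is also set in the mask. A small sketch of the bitwise arithmetic:

```shell
# NSS default POSIX permissions ANDed with the mask.
# Leading zeros make the shell treat the values as octal.
default_perms=0777
mask=0770

effective=$(printf '%04o' $(( default_perms & mask )))
echo "$effective"   # 0770: rwx for owner and group, nothing for other
```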
A.29 Quotas Commands
Use the commands in this section to manage quotas (space restrictions) for NSS pools and volumes.
 Section A.29.1, “Sys: Volume Quota Command,” on page 394
 Section A.29.2, “Directory Quotas Commands,” on page 394
 Section A.29.3, “User Quotas Commands,” on page 394
A.29.1 Sys: Volume Quota Command
nss /ChangeSysQuota=size
Lets you change the quota (in MB) for the sys: volume. Setting this value to zero sets the quota to none and allows the sys: volume to grow to the size of the sys pool.
A.29.2 Directory Quotas Commands
nss /(No)DirectoryQuotas=volumename
Enables or disables the Directory Quotas attribute on the volume.
nss /FixDirectoryQuotas=volumename
Recomputes used space for directory quotas for the specified volume.
This process can take time to recalculate, depending on the number of directory quotas set on
the volume. There are no interim messages to report status.
A.29.3 User Quotas Commands
nss /(No)UserSpaceRestrictions=volumename
Enables or disables the User Quotas attribute on the volume. After you enable quotas, go to the Storage > User Quotas page in iManager to set quotas for users.
A.30 Read Ahead Blocks and Allocate Ahead Blocks Commands
NSS offers the Read Ahead Blocks command for tuning read performance, and the Allocate Ahead
Blocks command for tuning write performance. You can enter the commands in the NSS Console.
 Section A.30.1, “Read Ahead Blocks,” on page 394
 Section A.30.2, “Allocate Ahead Blocks,” on page 395
A.30.1 Read Ahead Blocks
The Read Ahead Blocks parameter can be set differently on each NSS volume. It specifies the number of data blocks that NSS reads ahead for any open file on which read operations are ongoing on the specified server. The Read Ahead Blocks parameter is enabled by default and set at 16 blocks. To
modify the value, you must set it from the command line using the ReadAheadBlks switch. You can
also set the value from NSSMU in the volume properties.
The valid range for block count is 0 blocks to 1024 blocks, where a block count of zero (0) implies no
read ahead.
The most efficient value for block count depends on your hardware. In general, we recommend a
block count of 8 to 16 blocks for large data reads; 2 blocks for CDs, 8 blocks for DVDs, and 2 blocks
for ZLSS.
Aggressive read ahead is optimal only for sequential access. As the number of concurrent
connections to unique files increases, you should reduce the read-ahead block count.
nss /ReadAheadBlks=volname:count
Specify volname as the name of the volume that you are setting the attribute for. Specify count as the number of 4 KB blocks that you want NSS to read ahead. The valid range for a
block count is 0 blocks to 1024 blocks, where a block count of zero (0) implies no read ahead.
However, 128 blocks is the practical maximum.
Read-ahead block counts higher than 128 can starve other system components for memory or
buffers, which can impair performance or cause the system to hang. As the number of
concurrent connections to unique files increase, you should reduce the number of read-ahead
blocks.
Default: 16
Range: 0 to 1024 blocks, where 0=Off; Practical maximum: 128
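Since each block is 4 KB, the block count translates directly into the amount of data read ahead per open file. A quick sizing sketch, using the hypothetical volume name VOL1:

```shell
# How much data a given ReadAheadBlks value prefetches (4 KB blocks).
blocks=16                 # the default block count
kib=$(( blocks * 4 ))     # data read ahead per open file, in KB
echo "nss /ReadAheadBlks=VOL1:$blocks   # prefetches ${kib} KB"
```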
A.30.2 Allocate Ahead Blocks
When applications write data to the file system, NSS allocates the blocks at the logical write time and
the new data is cached in memory. At a later time, all cached blocks are grouped and are written to
the disk in an optimal order based on the flush policy. Allocate Ahead Blocks helps applications and protocol servers that do small writes for moderate or large files so they have fewer transactions and the blocks on the disk are less fragmented. Less fragmentation on disk substantially helps all further physical reads, because the disks do not need to spend as much time seeking.
Logical Write Time: When the application writes to NSS.
Physical Write Time: When NSS writes to the disk.
If the application is doing small writes, NSS allocates blocks in small increments. Depending on the
other parallel activities going on in the system, the blocks on disk could become fragmented, in spite
of the attempts made by NSS to allocate these blocks in the same area.
Use the Allocate Ahead Blocks parameter for tuning write performance for all NSS volumes on the
server.
nss /AllocAheadBlks=value
Sets the number of blocks to allocate ahead on writes.
Default: 0
Range: 0 to 63
For example, if most of the files that are created and extended are 64 KiB, AllocAheadBlks should be set to 16 blocks (64 KiB / 4 KiB = 16; a data block is 4 KiB).
The default is set to zero because higher settings can slow down small file creation. For example, if
AllocAheadBlks is set to 16, NSS allocates 16 data blocks. For a small file this is the only logical write.
When the file is closed, NSS is forced to truncate the file to the correct size, which slows down small file creation.
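The sizing rule above can be written out as a one-line calculation; the 64 KiB typical file size is just the example value from the text:

```shell
# Derive an AllocAheadBlks value from a typical file size (4 KiB blocks).
typical_file_kib=64
block_kib=4
alloc_ahead=$(( typical_file_kib / block_kib ))
echo "nss /AllocAheadBlks=$alloc_ahead"   # 16 for 64 KiB files
```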
A.31 Salvage and Purge Commands
Use the commands in this section for tuning the purging processes for NSS volumes. For more
information about managing salvage for NSS volumes, see Chapter 23, “Salvaging and Purging
Deleted Volumes, Directories, and Files,” on page 295.
nss /(No)ImmediatePurgeOfDeletedFiles
Enables or disables immediate purging of files upon deletion.
NSS volumes on the server can optionally use the salvage feature of NSS to save deleted files
until space is needed. When you enable the Immediate Purge of Deleted Files parameter, it
affects all volumes on the server. If the Salvage attribute is enabled for a volume, enabling this
flag overrides it so that deleted files and directories are purged immediately.
Default: Disabled
nss /(No)Salvage=<volumename | all>
Enables or disables the ability to salvage deleted files on volumes. Specifying volumename applies the setting to that NSS volume on the server; specifying all applies it to all NSS volumes on the server.
nss /LogicalVolumePurgeDelay=value
The number of seconds before deleted logical volumes are purged. This allows time to reverse
the deletion.
Default: 345600 (4 days)
nss /LogicalVolumePurgeDelayAfterLoad=value
The number of seconds after NSS loads before deleted logical volumes are purged. This allows
time to pause autopurging.
Default: 7200
nss /LogicalVolumePurgeDelayAfterContinue=value
The number of seconds to delay purging a deleted logical volume after clicking Continue. After
a volume starts to purge, it cannot be salvaged.
Default: 900
nss /PoolHighWaterMark=poolname:Percent
Purging begins in the salvage area when the pool’s low watermark is reached, and continues
until its high watermark is reached, or until there are no deleted files and volumes left to purge,
whichever occurs first. Autopurging does not start again until free space again drops below the
low watermark. Specify a given poolname or All to apply the setting to all pools.
The high and low watermarks must be at least 2% apart from each other.
Default: 20
Range: 2 to 100
nss /PoolLowWaterMark=poolname:Percent
Purging begins in the salvage area when the pool’s low watermark is reached and continues
until its high watermark is reached. When free disk space falls below a low watermark, NSS
begins autopurging the salvage area. Specify a given poolname or All to apply the setting to all
pools.
The high and low watermarks must be at least 2% apart from each other.
Default: 10
Range: 0 to 98
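The two watermarks form a simple hysteresis. The sketch below models the purge decision with the default values; it is an illustration of the rule stated above, not NSS code:

```shell
# Hysteresis model of autopurging: start below the low watermark,
# stop at the high watermark, otherwise keep the current state.
low=10    # default PoolLowWaterMark (percent free space)
high=20   # default PoolHighWaterMark (percent free space)

purge_state() {   # $1 = percent free space, $2 = currently purging (0/1)
  if [ "$1" -lt "$low" ]; then
    echo 1        # free space below low watermark: purging runs
  elif [ "$1" -ge "$high" ]; then
    echo 0        # high watermark reached: purging stops
  else
    echo "$2"     # between the marks: state is unchanged
  fi
}
```

With the defaults, purging starts at 9% free space, keeps running through 15%, and stops once free space reaches 20%.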
nss /SalvageSys
Lets you restore the sys: volume if you have deleted it (if it has not yet been purged).
nss /LVDeleteStatusBasic, or LVDeleteStatusBasic
Displays information about deleted logical volumes.
nss /LVDeleteStatusSalvageable, or LVDeleteStatusSalvageable
Displays information about salvageable logical volumes.
A.32 Security Equivalence Vector Update Commands
Use the Security Equivalence Vector (SEV) Update commands in the NSS Console utility (nsscon) to
enable or disable the update, to set the update interval from 5 minutes to 90 days (specified in
seconds), and to force an immediate update of security equivalence vectors. Polling too frequently
can impact performance. Polling too infrequently can cause delays in granting or restricting access to
certain users. For more information about SEV, see Section 20.2, “Configuring the Security
Equivalence Vector Update Frequency,” on page 255.
nss /(No)SecurityEquivalenceUpdating
Enables or disables SEV updates to occur in the background in addition to updates that occur
when the system reboots. If it is disabled, SEV updates occur only at system reboots.
To make it persistent, include the command in the /etc/opt/novell/nss/nssstart.cfg file.
Default: On (enabled)
Examples
To enable background updating, enter
nss /SecurityEquivalenceUpdating
To disable background updating, enter
nss /NoSecurityEquivalenceUpdating
nss /UpdateSecurityEquivalenceInterval=value
Sets the SEV update interval to the specified value in seconds. At the end of the elapsed time, NSS requests updated SEVs from eDirectory.
To make it persistent, include the command in the /etc/opt/novell/nss/nssstart.cfg file.
Default: 7200 (2 hours)
Range: 300 (5 minutes) to 7776000 (90 days).
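Because the value is given in seconds, it is easy to mis-size. A small conversion sketch for the default and the upper bound:

```shell
# Convert human-friendly intervals to the seconds this command expects.
hours=2
interval=$(( hours * 3600 ))      # 7200, the default (2 hours)
echo "nss /UpdateSecurityEquivalenceInterval=$interval"

max=$(( 90 * 24 * 3600 ))         # 7776000, the 90-day upper bound
```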
nss /ForceSecurityEquivalenceUpdate
Forces the SEV update to occur immediately for all users in the NSS file system. Use this
command if you modify a user’s access control settings in eDirectory and want those changes to
be reflected immediately in the user’s active SEV for this server.
This command is invalid if used in the /etc/opt/novell/nss/nssstart.cfg file.
A unique abbreviation such as
nss /ForceS
also works.
A.33 Sendfile API Support Command
nss /(No)SendfileSupport
Enables or disables support for the sendfile() API.
Default: On
A.34 Status Commands
Enter the following commands at the server console to show the status of various NSS parameters.
CompScreen
Displays the NSS volume compression statistics on the compression screen.
nss /pools, Pools
Lists all of the NSS pools that are currently available on the server.
PoolAutoDisplay
Displays load-time policies for pools on the server.
nss /status, Status
Lists the current NSS status information.
nss /volumes, Volumes
Lists all of the NSS volumes that are currently mounted and active, including the _admin
volume.
VolumeAutoDisplay
Displays load-time policies for volumes on the server.
SpaceInformation
Lists the amount of space on active pools and their associated volumes.
ListFreeSpace
Lists the amount of available space that has not been assigned to a pool.
nss /ErrorCode=code
Translates and describes the specified error code.
Modules
Lists the providers, loadable storage subsystems, and semantic agents.
Version
Displays the version information for NSS.
A.35 Visibility Rebuild Command
You might need to rebuild the visibility list in order to address problems where the computed value
does not equal the stored value for the visibility lists. For example, if the dmptrust volumename
command reports unknown trustees or many mismatched user GUIDs, you might need to run a
visibility rebuild for the NSS volume.
You can use checkvis.nlm to check visibility for issues.
WARNING: The visibility rebuild process can be destructive. Users who could see directories before might not be able to see them afterward. Run this operation only as required. Do not use visibility list rebuilds as a regular maintenance tool.
 Section A.35.1, “Description,” on page 399
 Section A.35.2, “Syntax,” on page 399
 Section A.35.3, “Additional Information,” on page 399
A.35.1 Description
When you perform a visibility rebuild, the system first deletes all current entries in the visibility list.
Then, in a second pass, the system attempts to rebuild the visibility list for assigned trustees. The
numbers you see at the conclusion of the rebuild give the following information:
Objects examined: Indicates the total number of file system objects looked at (a total of files and directories, plus special file system beasts).
Objects cleaned: Indicates the number of directories where visibility information was removed during the first pass.
Overflow objects removed: Indicates the number of visibility overflow objects removed during the first pass.
Trustees re-added: Indicates the number of trustees found on the volume and re-added to visibility lists. A non-zero number does not mean that problems were fixed or resolved; it is simply the number of trustees found and included in the visibility list (whether or not they were there before the rebuild started).
After you run a visibility rebuild, make sure you run the visibility check again. If there are still errors, you must examine your trustees for problems before running a visibility rebuild again.
A.35.2 Syntax
nss /VisibilityRebuild=volumename
Rebuild the authorization visibility lists for an NSS volume.
A.35.3 Additional Information
For information about checking, repairing, and troubleshooting the visibility list, see NetWare 6 Trustee
Rights: How They Work and What to Do When All Goes Wrong (http://support.novell.com/techcenter/
articles/ana20030202.html) in Novell AppNotes (2003, February 1).
A.36 Volume Management Commands
 Section A.36.1, “Volumes Command,” on page 400
 Section A.36.2, “Volume Activity Commands,” on page 401
 Section A.36.3, “Encrypted Volume Activity Commands,” on page 401
 Section A.36.4, “VolumeAuto Commands for Load Time,” on page 402
A.36.1 Volumes Command
NSS provides the volumes command and /volumes option for viewing a list of the currently
mounted volumes, their status, and the attributes for NSS volumes. To view which attributes are
currently set for a volume, enter
nss /volumes
You can also enter the volumes command to get the same output.
The volumes utility for NCP provides additional information about the mounted volumes on a Linux server, such as their Linux paths. For information, see Section B.11, “volumes (NCP Console Utility),” on
page 427.
For example, the NSS volumes command outputs state and attributes information in a tabular
format:
Volume Name             State     Attributes
----------------------  --------  -----------------------------------
_ADMIN                  ACTIVE
DATA1                   ACTIVE    Hardlinks, Salvage, Compression, Data Shredding
DATA2                   ACTIVE    Salvage
VOL1                    ACTIVE    Salvage, User Space Restrictions, Directory Quotas
VOL2                    ACTIVE    Salvage
The following volume attributes are displayed:
Compression: Section A.6, “Compression Commands,” on page 374
Data shredding: Section A.7, “Data Shredding Commands,” on page 377
Directory quotas: Section A.29.2, “Directory Quotas Commands,” on page 394
Encryption: Section A.36.3, “Encrypted Volume Activity Commands,” on page 401
Hard links: Section A.12, “Hard Links Commands,” on page 381
Migration (for near-line storage support): Section A.17, “Migration Support Commands for Near-Line Storage,” on page 386
Modified File List (MFL): Section A.18, “Modified File List (MFL) Commands,” on page 386
Salvage: Section A.31, “Salvage and Purge Commands,” on page 396
User quotas: Section A.29.3, “User Quotas Commands,” on page 394
A.36.2 Volume Activity Commands
nss /ExtendMac=volumename
Enable extended Macintosh name space on the specified NSS volume.
nss /ForceActivate=volumename
Forces an NSS volume to become active. For encrypted NSS volumes, this command cannot
force an activation unless the volume has been previously activated with a password on the first
activation after a reboot.
nss /ForceDeactivate=volumename
Forces an NSS volume to the deactive state. Does not prompt for open files.
nss /VolumeActivate=volumename
Activates the specified NSS volume.
nss /VolumeDeactivate=volumename
Deactivates the specified NSS volume.
nss /VolumeMaintenance=volumename
Places a specified volume into maintenance mode. Volumes can be put in maintenance mode,
but maintenance occurs only at the storage pool level.
mount volume_name
Mount the specified unencrypted NSS volume or an encrypted NSS volume that has been
previously activated with its password. If it has not been previously activated, it returns an error
message, requesting more information.
mount all
Mount all unencrypted NSS volumes and all encrypted NSS volumes that have been previously
activated with their passwords. Encrypted NSS volumes that were not previously activated
return error messages, requesting more information.
A.36.3
Encrypted Volume Activity Commands
Use the commands in this section to display volume status and to activate, mount, deactivate, or
dismount encrypted NSS volumes.
You must enter a password the first time the volume is activated or mounted following a system
reboot. Thereafter, other environmental security and authentication measures control access to user
data.
IMPORTANT: Use NSSMU to mount encrypted volumes the first time after a server reboot.
Thereafter, you can use the Linux mount command.
You cannot use wildcard commands, such as nss /VolumeAutoActivate, to activate encrypted NSS
volumes.
You cannot use the wildcard option of All as the volume name for volumes where the password has
not previously been provided. Until an encrypted volume is activated with its password following
each system reboot, the All option does not find the volume and does not execute the command. The
system returns an error message.
nss /activate=volume_name
Activate the specified unencrypted NSS volume.
On Linux, this command cannot be used to activate encrypted NSS volumes.
nss /activate=all
Activate all unencrypted NSS volumes. On Linux, this command cannot be used to activate
encrypted NSS volumes.
mount volume_name
Mount the specified unencrypted NSS volume or an encrypted NSS volume that has been
previously activated with its password. If it has not been previously activated, it returns an error
message, requesting more information.
mount all
Mount all unencrypted NSS volumes and all encrypted NSS volumes that have been previously
activated with their passwords. Encrypted NSS volumes that were not previously activated
return error messages, requesting more information.
nss /volumeactivate=volume_name
Activate the specified unencrypted NSS volume.
On Linux, this command cannot be used to activate encrypted NSS volumes.
nss /volumes
View the status of encrypted and unencrypted NSS volumes. An encrypted volume returns a status of Encrypted.
A.36.4 VolumeAuto Commands for Load Time
Use the following command to view the volume’s current load-time policies.
VolumeAutoDisplay=volumename
Displays the volume’s current load-time policies.
Use the following commands in the nssstart.cfg file to control which volumes are active at load time. You cannot use these commands for encrypted NSS volumes. You must use NSSMU to activate an encrypted volume the first time after a restart so you can provide the password.
nss /VolumeAutoActivate=volumename
Activates the specified volume at load time.
nss /VolumeAutoDeactivate=volumename
Deactivates the specified volume at load time.
A.37 ZID Commands
Use the commands in this section to manage the file numbering, or ZIDs, of files on an NSS volume.
nss /ZIDNameSpace=namespace
Specify the name space (DOS, Long, Macintosh, or UNIX) that the /ZIDToFileName command should use.
nss /ZIDtoFilename=ZIDnumber
For a specified ZID, reports the file’s full path and filename for a given volume and name space.
Use only with /ZIDVolumeName and /ZIDNameSpace to provide context for the command.
nss /ZIDVolumeName=volumename
The volume name the command /ZIDToFileName should use.
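The three context commands work together: set the volume and name space first, then request the lookup. A sketch that assembles the sequence; VOL1, the Long name space, and ZID 1234 are hypothetical values, and the commands are entered in NSSCON:

```shell
# Assemble the three-step ZID-to-filename lookup sequence.
vol="VOL1"; ns="Long"; zid=1234        # hypothetical example values

set_vol="nss /ZIDVolumeName=$vol"      # 1. choose the volume
set_ns="nss /ZIDNameSpace=$ns"         # 2. choose the name space
lookup="nss /ZIDtoFilename=$zid"       # 3. report the file's full path

printf '%s\n' "$set_vol" "$set_ns" "$lookup"
```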
/(No)ReZID
Use this option only with the pool rebuild options in the ravsui utility to enable or disable reZIDing of the volumes in the pool that is being rebuilt.
Default: Off
Range: On or Off
Examples
For guidelines and instructions for how to use the ReZID option with pool rebuild commands,
see Section 16.3, “ReZIDing Volumes in an NSS Pool,” on page 189.
B NSS Utilities
This section details the syntax and options for the following Novell Storage Services utilities for Novell Open Enterprise Server 11 SP2.
 Section B.1, “attrib,” on page 405
 Section B.2, “compfix,” on page 407
 Section B.3, “metamig,” on page 410
 Section B.4, “nsscon,” on page 413
 Section B.5, “nssmu,” on page 415
 Section B.6, “nssupdate,” on page 415
 Section B.7, “ravsui,” on page 416
 Section B.8, “ravview,” on page 419
 Section B.9, “refreshids,” on page 423
 Section B.10, “rights,” on page 423
 Section B.11, “volumes (NCP Console Utility),” on page 427
 Section B.12, “nssquota,” on page 428
 Section B.13, “nssraid,” on page 430
 Section B.14, “ncsinit,” on page 430
 Section B.15, “nsschown,” on page 431
B.1 attrib
Use the Attribute (attrib) utility to set NSS file system directory and file attributes.
B.1.1 Syntax
attrib [options] [filename]
If both the set and clear options are selected, the clear option is completed before the set option. If the
filename is not specified, the operation is completed on the current directory.
B.1.2 Options
-s, --set=ATTRIBUTES: Set the attributes on the file.
-c, --clear=[ATTRIBUTES | all]: Clear the attributes on the file.
-l, --long: Displays a long version of the file attributes.
-q, --quiet: Does not display any normal output.
-d, --dos: Use DOS compatible attributes (that is, ro=ro,di,ri).
-v, --version: Displays the program version information.
-h, --help: Displays the ATTRIB help screen.
-S, --softlink: Do not follow symbolic links.
-r, --recursive: Set the attributes recursively on the directory.
B.1.3 Attributes
Multiple attributes are separated with commas.
aa (Files: Yes; Directories: No)
Attribute Archive identifies that a file’s metadata has been modified since the last backup. This attribute is assigned automatically.

all (Files: Yes; Directories: Yes)
All (used only for the Clear option) represents all attributes that can be modified.

ar (Files: Yes; Directories: No)
Archive identifies files that have modified content since the last backup. This attribute is assigned automatically.

cc (Files: Yes; Directories: No)
Cannot Compress (status display only) displays if the file cannot be compressed because of limited space savings.

ci (Files: Yes; Directories: No)
Copy Inhibit prevents users from copying a file. This attribute overrides the Read and File Scan trustee rights. This attribute works only for clients using Macintosh operating systems to access NSS volumes on NetWare.

cm (Files: Yes; Directories: No)
Compressed (status display only) indicates whether the file is currently stored in compressed format.

dc (Files: Yes; Directories: No)
Don’t Compress keeps data from being compressed. This attribute overrides settings for automatic compression of files not accessed within a specified number of days.

di (Files: Yes; Directories: Yes)
Delete Inhibit prevents users from deleting a directory or file. This attribute overrides the file system trustee Erase right. When Delete Inhibit is enabled, no one, including the owner and network administrator, can delete the directory or file. A trustee with the Modify right must disable this attribute to allow the directory or file to be deleted.

ex (Files: Yes; Directories: No)
Execute indicates program files, such as .exe or .com files.

hi (Files: Yes; Directories: Yes)
Hidden hides directories and files so they do not appear in a file manager or directory listing.

ic (Files: Yes; Directories: Yes)
Immediate Compression sets data to be compressed as soon as a file is closed. If applied to a directory, every file in the directory is compressed as each file is closed. The files in the specified directory are compressed as soon as the operating system can perform the operation after the file is closed. This does not apply to the directory’s subdirectories and the files in them.

ip (Files: Yes; Directories: Yes)
Immediate Purge flags a directory or file to be erased from the system as soon as it is deleted. Purged directories and files cannot be recovered.

ln (Files: Yes; Directories: No)
Link (status display only) indicates a symbolic link (soft link).

mg (Files: Yes; Directories: Yes)
Migrated (status display only) displays if the file or directory is migrated to near-line media.

mi (Files: Yes; Directories: Yes)
Migrate Inhibit prevents directories and files from being migrated from the server’s disk to a near-line storage medium.

ri (Files: Yes; Directories: Yes)
Rename Inhibit prevents the file or directory name from being modified.

ro (Files: Yes; Directories: No)
Read Only prevents a file from being modified.

sd (Files: No; Directories: Yes)
Subdirectory (status display only) indicates that the entry is a directory, not a file.

sh (Files: Yes; Directories: No)
Shareable allows more than one user to access the file at the same time. This attribute is usually used with Read Only.

sy (Files: Yes; Directories: Yes)
System hides the directory or file so it does not appear in a file manager or directory listing. This attribute is normally used with system files.

tr (Files: Yes; Directories: No)
Transactional allows a file to be tracked and protected by the Transaction Tracking System (TTS).

vo (Files: Yes; Directories: No)
Volatile indicates that a file can change without being written to so that opportunistic locks cannot be set on it.

B.1.4 Example
attrib /designs/topsecret -c=all -s=ro,di
This command clears all attributes, then sets Read Only and Delete Inhibit on the /designs/topsecret file.
B.2 compfix
 Section B.2.1, “Prerequisite for Computing Compression Statistics,” on page 408
 Section B.2.2, “Syntax,” on page 408
 Section B.2.3, “Parameters,” on page 408
 Section B.2.4, “Help Options (HOPTION),” on page 408
 Section B.2.5, “General Options (GOPTION),” on page 409
 Section B.2.6, “Volume-Level Options (VOPTION),” on page 409
 Section B.2.7, “File-Level Options (FOPTION),” on page 409
 Section B.2.8, “Examples,” on page 409
Use the COMPFIX utility to repair compression information for compressed NSS volumes or to clear
the Cannot Compress attribute for files in the compressed NSS volume. This tool can help identify
which compressed files are corrupted and can be fixed; however, not all corrupted compressed files
are fixable.
B.2.1 Prerequisite for Computing Compression Statistics
Before using the COMPFIX utility to compute compression statistics, make sure that the volume is in ACTIVE mode so that statistics can be computed.
B.2.2 Syntax
Run the COMPFIX utility (/opt/novell/nss/sbin/compfix) from the terminal console prompt.
compfix [HOPTION]
compfix [GOPTION]... [VOPTION] VOLUMENAME
compfix [GOPTION]... [FOPTION] FILENAME
Mandatory arguments to long options are also mandatory for short options.
B.2.3 Parameters
VOLUMENAME: Specifies the volume name of the compressed volume you want to repair, such as VOL1.
FILENAME: Specifies the full path to the individual compressed file that has compression errors you want to repair, including its filename and extension. The filename must be an absolute path. For example:
/media/nss/VOL1/dir1/dir2/myfile.xxx
B.2.4 Help Options (HOPTION)
-h, --help: Displays help information and exits.
-v, --version: Displays version information and exits.
B.2.5 General Options (GOPTION)
Multiple general options can be selected.
-H, --no-header: If this option is specified, COMPFIX does not validate compression headers. Use this option on volumes restored from scan files that do not have user data blocks.
-p, --logpath=path: Specifies the location of the log file. The default location is the root of the compressed volume you are analyzing or fixing. Default: ./compfix.log
B.2.6 Volume-Level Options (VOPTION)
Only one volume-level option can be selected.
-D, --delete-all: Deletes all non-fixable compressed files on the specified volume.
-F, --fix-all: Fixes all repairable compressed files on the specified volume.
-L, --list-all: Lists all problematic compressed files on the specified volume.
-S, --fix-stats: Fixes volume compression-related statistics. For accurate results, make sure the volume is in maintenance mode before issuing this command.
-C, --clear-all: Clears the Cc (Cannot_Compress_File) attribute for all files on the specified volume.
File-Level Options (FOPTION)
Only one file-level option can be selected.
B.2.8
Option
Description
-b, --background
Checks if the specified file is eligible for the next background compression
process.
-d, --delete
Deletes the specified compressed file if it is non-fixable.
-f, --fix
Tries to fix the specified file’s compression-related problem.
-l, --list
Lists the specified file’s compression-related information.
-c, --clear
Clears the specified file’s Cc (Cannot_Compress_File) attribute.
Examples
The following examples illustrate typical uses of the COMPFIX utility. Enter each command on a single line.

compfix --fix-stats VOL1
  Fixes compression statistics for the specified volume, VOL1. Logs the results in the default location of ./compfix.log.

compfix --list-all VOL1 --logpath=/var/log/compfix.log
  Lists all corrupted compressed files on the specified volume, VOL1. Logs the results in the specified location of /var/log/compfix.log.

compfix -f /media/nss/VOL1/dir1/dir2/myfile.xxx
  Fixes an individual compressed file, myfile.xxx. Logs the results in the default location of ./compfix.log.

compfix -F VOL1 --logpath=/var/log/compfix.log
  Fixes all fixable corrupted compressed files on the specified volume, VOL1. Logs the results in the specified location of /var/log/compfix.log.

compfix -b /media/nss/VOL1/dir1/dir2/myfile.xxx
  Checks whether an individual compressed file, myfile.xxx, is eligible for the next background compression process. Logs the results in the default location of ./compfix.log.

compfix -C VOL1 --logpath=/var/log/compfix.log
  Clears the Cannot Compress (Cc) attribute for files in the specified volume, VOL1. Logs the results in the specified location of /var/log/compfix.log.
B.3 metamig
The NSS File System Metadata Migration Utility (METAMIG) for Linux allows you to save and
restore NSS file system trustee, user quota, and directory quota metadata.
- Section B.3.1, “Syntax,” on page 410
- Section B.3.2, “Arguments,” on page 411
- Section B.3.3, “Options,” on page 411
- Section B.3.4, “Examples,” on page 413
B.3.1 Syntax

The METAMIG utility is located at /opt/novell/nss/sbin/metamig.
metamig [OPTIONS]
metamig save volume [SOPTIONS]
metamig restore volume [ROPTIONS]
B.3.2 Arguments

The first argument indicates the action to be taken on the specified NSS volume. Possible actions are save and restore.

save
  Saves the indicated metadata to stdout.

restore
  Restores the indicated metadata from stdin.
The second argument specifies the NSS volume name to be saved or restored.
The third argument specifies the path to a file. This is the file to be created on save, or the file to be
restored from on restore.
B.3.3 Options

Several option types are available:

- “General Options” on page 411
- “SOptions (Save)” on page 411
- “ROptions (Restore)” on page 412
- “Mask” on page 412
General Options

The following general options are available for both the save and restore actions.

-v, --version
  Displays the program version information.

-h, --help
  Displays the help screen.
SOptions (Save)

The following options are available for the save action.

-m, --meta=mask
  The types of metadata to be saved. By default, all types of metadata are included and all trustees are retained. For information, see Mask.

-p
  Exports the settings of all the files under a given path.

-n, --ncp
  Saves the trustee metadata from the NCP Trustee database instead of parsing the volume to generate the data. Use this option only if you are confident that the NCP Trustee database is available and current.
  For example, if the NCP Server has been turned off and its database is not yet resynchronized with the volume, do not use this option until the database is again current.
ROptions (Restore)

The following options are available for the restore action.

-m, --meta=mask
  The types of metadata to be restored. By default, all types of metadata are included and all trustees are retained. For information, see Mask.

-n, --ncp
  Builds a new NCP trustee file from the trustee data. This automatically disables restoration of quotas.

-f, --filter=regexp
  Restores only the files and directories that match the specified regular expression. Use a regular expression to specify one or more files and directories to be restored.
  For example, to set criteria to restore only files with names that start with the letter “a”, use this option: --filter=.*\\a.*
  For information about how to construct regular expressions, refer to a regular expression reference.

-t, --tree=treename
  If this value is specified, it overrides the saved directory tree name.

-d, --details
  Displays all actions taken.

-w, --inputfilefromnetware
  Specify this option if the input file was generated by TRUSTEE.NLM on a NetWare 6.5 SP8 server.

-r, --relativepathprefix
  Specifies the reference location on the OES server for the relative path, if the input file was generated by TRUSTEE.NLM on a NetWare server with the /R switch. The TRUSTEE.NLM /R option puts relative paths for files in its output.
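The --filter expression can be previewed against a list of candidate paths before running a restore. The following is a hedged sketch using grep -E with hypothetical sample paths; it assumes a POSIX extended-regex dialect, which may not match metamig's exact matching rules, so verify any expression against metamig itself before relying on it.

```shell
# Hypothetical sample of the paths a restore might cover.
f=$(mktemp)
printf '%s\n' dir1/apple.txt dir1/notes.txt dir2/area.doc > "$f"

# Preview which entries a candidate filter expression would match.
# grep -E uses POSIX extended regular expressions; metamig's dialect
# may differ, so treat this only as a first sanity check.
matched=$(grep -E '^dir1/a' "$f")
echo "$matched"        # prints dir1/apple.txt
rm -f "$f"
```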
Mask

The mask is a string of characters, with each character representing a type of NSS file system metadata. Use the mask to specify values for the meta parameter in SOptions (Save) and ROptions (Restore). If no mask is specified, only t, u, and d are used by default.

t
  Trustees

u
  User quotas

d
  Directory quotas

a
  All metadata

r
  Removes unknown trustees from the file system. This works only for the save operation, not for the restore operation.

B.3.4 Examples
To save the user quota data and directory quota data for a volume called VOL1 in the /backup/volquotas file, enter
metamig save VOL1 -m ud >/backup/volquotas
To restore the user quota data from the file /backup/volquotas to VOL1, enter
metamig restore VOL1 -m u </backup/volquotas
To restore the trustee data from the file /backup/trustees, enter
metamig restore vol1 -m t -r dir1/dir2 -w -t TREENAME </backup/trustees
If trustees are backed up at a volume level using trustee.nlm with the /R switch, and all the source volume data is copied to the directory /media/nss/<volume>/dir1 on the target volume, then -r should be dir1. If all the source volume data is copied to the root of the target volume /media/nss/<volume>, then -r should be "/" or ".".
metamig save NSSVOL > unfiltered.xml
metamig save -p dir1 NSSVOL > filtered.xml
The first command exports the settings of all the files on the volume NSSVOL; the second command exports the settings of all the files under the dir1 directory only.
B.4 nsscon
The NSS Console (NSSCON, nsscon) utility for OES provides a command line interface in a console
environment familiar to NetWare administrators. Use it to issue NSS commands and to monitor NSS
activity through console messages.
Unlike the NSS utilities, NSS commands cannot be issued directly at the Linux terminal console. Instead, you start the nsscon utility, then enter the usual NSS commands at the nsscon prompt.
You can issue any valid NSS command at the nsscon command prompt. For information about NSS commands, see Appendix A, “NSS Commands,” on page 369.
The Linux install creates symlinks in the /opt/novell/nss/sbin folder for common NSS utilities,
including nsscon. Symlinks allow the path to the nsscon to become part of the root user’s path,
which allows you to run it by entering nsscon at the system prompt.
The NSSCON utility uses a device file (/dev/nsscmd), which is set up to allow access only for the
root user. Thus, only root can run nsscon. If you want to give access to a group of local users to run
nsscon, use the chmod command on /dev/nsscmd to change the POSIX permissions for that group.
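For example, group access could be granted as follows. This is a sketch only: nssadmins is a hypothetical group name, and a temporary stand-in file is used so the permission change can be demonstrated on any system; on a live server you would target /dev/nsscmd itself.

```shell
# Stand-in for /dev/nsscmd so the permission change can be demonstrated
# without a live NSS server.
DEV=$(mktemp)

# On a live server (as root), you would first give the group ownership:
#   chgrp nssadmins /dev/nsscmd     # "nssadmins" is a hypothetical group
chmod 660 "$DEV"                    # rw for owner and group, none for others

MODE=$(stat -c '%a' "$DEV")
echo "$MODE"                        # prints 660
rm -f "$DEV"
```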
- Section B.4.1, “Adding /opt/novell/nss/sbin to the PATH Environment Variable,” on page 414
- Section B.4.2, “Starting nsscon,” on page 414
- Section B.4.3, “Using nsscon in a Script,” on page 414
B.4.1 Adding /opt/novell/nss/sbin to the PATH Environment Variable

You can add /opt/novell/nss/sbin to the PATH environment variable:

1 At a shell prompt, log in as the root user.
2 From a Bash shell, set the path with the command

  export PATH=$PATH:/opt/novell/nss/sbin

This allows you to run nsscon by entering nsscon at the system prompt.
B.4.2 Starting nsscon

1 At a shell prompt, log in as the root user.
2 Use one of the following methods to start NSSCON:

  - At a shell prompt, load NSSCON when its path is symlinked or included in the PATH environment variable by entering

    nsscon

  - At a shell prompt, load NSSCON by its absolute pathname by entering

    /opt/novell/nss/sbin/nsscon

  - From the /opt/novell/nss/sbin directory, load NSSCON by entering

    ./nsscon
B.4.3 Using nsscon in a Script
Only one instance of nsscon can be run at a time. If nsscon is already running when a script tries to
run it, nsscon returns an error.
To work around this issue, you can send NSS commands directly to NSS via the /dev/nsscmd device.
For example, if you use a script to put pools into maintenance, use the following syntax:
echo "/PoolMaintenance=mypool" >/dev/nsscmd
This causes NSS to place MYPOOL into maintenance state even if nsscon is already running. Note
that you do not echo an "nss " in front of commands to /dev/nsscmd.
If nsscon is running, then the output of your command is displayed by nsscon immediately.
Otherwise, the output is held by NSS until nsscon is run. In OES 11 SP2, NSS holds 400 KB of output before it begins discarding the oldest output. In OES 1, NSS holds only 32 KB of output before wrapping the output.
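A script can wrap the write in a small guard so it fails cleanly when the device is absent or the caller lacks access. This is a sketch, not part of the product; the pool name mypool is just an example:

```shell
# Guarded helper for scripts: write a command to /dev/nsscmd only when
# the device exists and is writable (that is, NSS is loaded and the
# caller is root), and fail cleanly otherwise.
send_nss_cmd() {
    _dev=/dev/nsscmd
    if [ -w "$_dev" ]; then
        # Note: no leading "nss " when echoing commands to /dev/nsscmd.
        echo "$1" > "$_dev"
    else
        echo "NSS command device $_dev is not available" >&2
        return 1
    fi
}

# Example: place a pool named mypool into maintenance state.
send_nss_cmd "/PoolMaintenance=mypool" || echo "NSS is not loaded; command not sent"
```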
B.5 nssmu

The NSS Management Utility (NSSMU) is a server-based tool that allows you to manage disks, software RAIDs, pools, volumes, and snapshots for the NSS file system. For information, see
Section 9.2, “NSS Management Utility (NSSMU) Quick Reference,” on page 92.
B.6 nssupdate
The NSS Update (nssupdate) utility is used to resize a pool to a larger size so that it can consume
contiguous free space that follows the existing pool on a device.
You do not need to run this utility if you increase the size of a pool by using standard NSS
management tools. When you work with the NSS Management Utility (NSSMU), the NLVM
command line utility, or the Storage plug-in for Novell iManager to manage NSS pools and volumes,
NSS automatically makes any necessary changes to the underlying structure of the pool.
You can use this tool to manually resize a pool if you are using the NSS file system on a device that is
managed by a volume manager other than NLVM, such as Linux Volume Manager 2 (LVM2). After
you have performed all steps to increase the size of the LVM2 device and partitions by using Linux
tools, use the NSS Update utility to let NSS know to expand the pool size to the desired new size
(specified in bytes).
B.6.1 Syntax
nssupdate -pool poolName -size sizeInBytes [-shared | -notshared]
Issue this command from a terminal console prompt as the root user.
B.6.2 Availability
Novell Open Enterprise Server 11
B.6.3 Options
-pool poolname
Specifies the name of the pool you want to resize.
-size sizeInBytes
Specifies the new maximum size of the pool in bytes.
The value for sizeInBytes cannot exceed the actual size of the device. You can enter any value
larger than the pool’s current size, up to the size of the device. The space you designate must
already be free for consumption by the pool. Make sure that you understand exactly which
space is free on the device so that you do not inadvertently overwrite any metadata stored at the
end of the device. The pool begins at the same sector location as it currently does, and the
extension is based on the space that follows its current end location on the device.
[-shared | -notshared]
Sets the share state of the pool. The -shared option sets the share state to Shareable in a Cluster so that the pool can be shared in a cluster environment that is using Novell Cluster Services (NCS) for Linux. The -notshared option sets the share state to Not Shareable.
These options are intended to be used when you are using NSS pools with Novell Cluster
Services clusters. This requires that you use NSS partitions to create the pool. Use NSSMU, the
NLVM command line utility, or the Storage plug-in to Novell iManager to set the device as
shared and to manage the pool.
IMPORTANT: The shared state is not viable for NSS on pools created on non-NSS partitions.
B.6.4 Example
Open a terminal console, then log in as the root user to run this command.
nssupdate -pool puserdata -size 2147483648
Resizes the pool named puserdata to 2 GB (where 1 GB is 1024^3 bytes, or 1,073,741,824 bytes).
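Because -size takes bytes, it can be convenient to let the shell compute the value (1 GB = 1024^3 bytes) instead of typing it by hand. A sketch; the nssupdate call itself is shown commented out because it must run as root on a live OES server, and puserdata is just the example pool name from above:

```shell
# Compute a pool size in bytes from a size in GB (1 GB = 1024^3 bytes),
# rather than typing the byte count by hand.
GB=2
SIZE=$((GB * 1024 * 1024 * 1024))
echo "$SIZE"               # prints 2147483648

# On a live OES server (as root), the computed value would then be
# passed to nssupdate:
#   nssupdate -pool puserdata -size $SIZE
```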
B.7 ravsui
Use the Rebuild and Verify Simple User Interface (RAVSUI) utility to rebuild or verify an NSS pool
that is in a maintenance state. You must log in as the root user to run this utility. You must place the
pool in maintenance mode before starting the rebuild or verify process.
B.7.1 Syntax
/opt/novell/nss/sbin/ravsui
ravsui [OPTION]... [ROPTION]... rebuild poolname
ravsui [OPTION]... [VOPTION]... verify poolname
Replace poolname with the name of the pool you want to rebuild or verify, such as POOL1. Poolnames
are case sensitive.
B.7.2 Arguments
The first mandatory argument specifies the action to be performed: rebuild or verify.

- The rebuild action checks the integrity of the data in the pool and rebuilds the pool. A rebuild process can take up to several hours, depending on the size of the pool. The rebuild argument can be combined with one or more OPTION and ROPTION options.

- The verify action checks the integrity of the data in the pool; it does not perform repairs. A verify process can take several minutes, depending on the size of the pool. The verify argument can be combined with one or more OPTION and VOPTION options.

The second mandatory argument, poolname, is the target of the action. Replace poolname with the name of an NSS pool to be rebuilt or verified. Poolnames are case sensitive.
B.7.3 Options

This section describes the OPTION, ROPTION, and VOPTION options available for the RAVSUI (ravsui) utility. Mandatory arguments for long options are also mandatory for short options.

- “OPTION” on page 417
- “ROPTION” on page 417
- “VOPTION” on page 418
OPTION

General options can be used for both the rebuild and verify actions.

-a, --attach
  Attaches to a rebuild or verify process that is already running.

-D, --log-kernel=MASK
  Controls the amount of log output from the kernel.
  Default: 0x70

-d, --log-application=LEVEL
  Controls the amount of log output from the application.
  Default: 0x60

-h, --help
  Displays this help information and exits.

-P, --path=PATH
  Specifies the Linux path where the log file that contains the results of the rebuild or verify action is written.
  Default: /var/opt/novell/log/nss/rav/

-v, --version
  Displays version information and exits.
ROPTION

ROPTIONs can be used only for the rebuild action.

-i, --iv-prune
  Prunes an internal volume.

-l, --loss-file-limit=LIMIT
  Specifies the maximum number of files per volume to quietly prune.
  Default: 100

-p, --purge-deleted-files
  Purges deleted files.

-r, --rezid=ZID
  Specifies the threshold that causes a reZID of a volume.
  IMPORTANT: For NSS, a rebuild automatically causes a reZID of a volume if the rebuild finds a ZID over 2 billion.
  This checks all blocks in the system. Rebuilding can take several minutes to several hours, depending on the number of objects in the pool. For all systems, a reZID adds a third pass to the rebuild, which increases the time to rebuild a volume by about 50%.
  For more information about reZID, see Section 16.3, “ReZIDing Volumes in an NSS Pool,” on page 189.
  Default: 0xefffffff

-u, --unknown-loss-prune
  Prunes when losses are unknown.
VOPTION

VOPTIONs can be used only for the verify action.

-q, --quick
  Skips cross-tree validations.

B.7.4 Files
/opt/novell/nss/sbin/ravsui
The Rebuild and Verify Simple User Interface (RAVSUI, ravsui(8)) utility file.
/var/opt/novell/log/nss/rav/
The default location of the log file for the RAVSUI utility’s rebuild or verify actions. The path is the directory where the file is stored and does not include the filename itself. You can specify a different path for the log file by using the -P (--path=PATH) option.
B.7.5 Note
Before you run the RAVSUI utility to perform a pool verify or pool rebuild, you must put the pool
into maintenance mode.
Log in as the root user, then open a terminal console.
At a terminal prompt, enter
nsscon
In nsscon, enter
nss /PoolMaintenance=poolname
Replace poolname with the name of the pool you plan to rebuild or verify, such as POOL1.
If you do not place the pool in maintenance mode before starting the utility, you receive NSS Error
21726:
NSS error: PoolVerify results
Status: 21726
Name: zERR_RAV_STATE_MAINTENANCE_REQUIRED
Source: nXML.cpp[1289]
B.7.6 Example
Log in as the root user, then open a terminal console.
At a terminal prompt, enter
nsscon
At the nsscon prompt, put the pool that you want to verify (such as POOL1) in maintenance mode by
entering
nss /PoolMaintenance=POOL1
At the terminal prompt, start the RAVSUI utility to verify the pool by entering
ravsui -q verify POOL1
This command checks the integrity of the data in POOL1 in the current tree. It skips cross-tree
validations.
B.8 ravview
The Rebuild and Verify View (RAVVIEW) utility displays specified rebuild or verify log files in
human-readable format. The log files are generated by the Rebuild and Verify Simple User Interface
(RAVSUI, ravsui(8)) utility.
You must log in as the root user to run this utility. If the specified pool is active when you run the
utility, the ZIDs of files that appear in the log are converted to their /pathname/filename format so
that they are more easily understood by the reader.
- Section B.8.1, “Syntax,” on page 419
- Section B.8.2, “Arguments,” on page 420
- Section B.8.3, “Options,” on page 420
- Section B.8.4, “Files,” on page 421
- Section B.8.5, “Example,” on page 421
B.8.1 Syntax
/opt/novell/nss/sbin/ravview
ravview [OPTION]... rtf filename
ravview [OPTION]... [NOPTION]... rtfn poolname
ravview [OPTION]... [VOPTION]... vbf filename
ravview [OPTION]... [NOPTION]... [VOPTION]... vbfn poolname

Poolnames and filenames are case sensitive.
B.8.2 Arguments

The first mandatory argument specifies the type of log file you want to view: rtf, rtfn, vbf, or vbfn. The second argument specifies the log you want to view.

IMPORTANT: In the raw log files, files that were acted upon are identified by their ZIDs, not by path and filename. If the pool that was rebuilt or verified is active when you view its log, the utility converts each ZID to display the path and filename instead.

rtf (second argument: filename)
  Specifies that you want to view rebuild results. You must specify the filename of the log you want to view. The utility displays any messages that occurred during the rebuild.

rtfn (second argument: poolname)
  Specifies that you want to view the newest rebuild results in a specified path for a specified poolname. Optionally, specify the path to the logs by using the -P (--path=PATH) option. The utility displays any messages that occurred during the rebuild.

vbf (second argument: filename)
  Specifies that you want to view verify results. You must specify the filename of the log you want to view. The utility converts the binary file into a human-readable format.

vbfn (second argument: poolname)
  Specifies that you want to view the newest verify results in a specified path for a specified poolname. Optionally, specify the path to the logs by using the -P option. The utility converts the binary file into a human-readable format.

Replace filename with the path and name of the log file for the rebuild or verify process that you want to view.

Replace poolname with the name of the NSS pool that was rebuilt or verified.
B.8.3 Options

This section describes the OPTION, VOPTION, and NOPTION options available for the RAVVIEW utility. Mandatory arguments for long options are also mandatory for short options.

OPTION

General options can be used for viewing rebuild or verify logs. Use them in combination with any of the rtf, rtfn, vbf, or vbfn arguments.

-h, --help
  Displays help information and exits.

-V, --verbose=LEVEL
  Controls the amount of output.

-v, --version
  Displays version information and exits.
VOPTION

VOPTIONs are available only for viewing verify results. Use them in combination with the vbf or vbfn arguments.

-a, --actions
  Displays action information.

-H, --histograms
  Displays histograms.

-o, --object-details
  Displays object details.

NOPTION

NOPTIONs are available for viewing the newest log for a specified pool. Use this option in combination with the rtfn and vbfn arguments.

-P, --path=PATH
  Specifies the path to the RAV log files.
  Default: /var/opt/novell/log/nss/rav/
B.8.4 Files

/opt/novell/nss/sbin/ravview
  The Rebuild and Verify View utility file.

/var/opt/novell/log/nss/rav/
  The default location of the log files for the Rebuild and Verify Simple User Interface (RAVSUI, ravsui(8)) utility’s rebuild or verify actions. The path is the directory where the files are stored and does not include the filename itself.
  You can specify a different path for the log files by using the -P (--path=PATH) option. This should be the same path that you used when you ran the RAVSUI utility.
B.8.5 Example

Log in as the root user, then open a terminal console.

To view the results of the last rebuild of the pool named POOL, enter one of the following commands at the terminal prompt:

ravview rtfn --path=/test/path/novell/nss/rav_logs/ POOL
ravview rtfn -P /test/path/novell/nss/rav_logs/ POOL

You can specify a path for the log file by using the --path=PATH or -P option. This should be the same path that you specified when you used the ravsui utility. If you do not specify a path, the default path searched is /var/opt/novell/log/nss/rav/.
To view specific rebuild results, enter the following command at the terminal prompt:
ravview rtf /var/opt/novell/log/nss/rav/POOL_rebuild_4ff6bbfd_log.xml
Note: Specify the full path to the location of your log file.
If you want to view older rebuild results for a pool named DATAPOOL, enter the following commands at the terminal prompt:

cd /var/opt/novell/log/nss/rav
ls -larth DATAPOOL_rebuild_*_log.xml

The commands list all the DATAPOOL rebuild results that are present on the server (in the current directory, which is the default location for rebuild logs in this case).
-rw-r--r-- 1 root root 1.8K May 24 14:58 DATAPOOL_rebuild_4fbea0bd_log.xml
-rw-r--r-- 1 root root 1.9K May 30 10:52 DATAPOOL_rebuild_4fc65010_log.xml
To view the May 30 rebuild results, enter the following command:
cd /var/opt/novell/log/nss/rav
ravview rtf DATAPOOL_rebuild_4fc65010_log.xml
To view the results of the last verify operation performed on a pool named "TEST", enter the
following command at the terminal prompt:
ravview vbfn TEST
To view specific verify results, enter the following command at the terminal prompt:
ravview vbf /var/opt/novell/log/nss/rav/TEST_verify_4ffd76a8_report.vbf
Note: Specify the full pathname of your log file.
If you want to view older verify results for a pool named TEST, enter the following command at the
terminal prompt:
cd /var/opt/novell/log/nss/rav
ls -larth TEST_verify_*.vbf
The command lists all the TEST verify results that are present on the server (in the current directory, which is the default location for verify logs in this case).
-rw-r--r-- 1 root root 38K Oct  7  2009 TEST_verify_4acd3641_report.vbf
-rw-r--r-- 1 root root 38K Jan 15  2010 TEST_verify_4b50f319_report.vbf
-rw-r--r-- 1 root root 38K Feb 17 11:52 TEST_verify_4f3ea059_report.vbf
To view the October 7, 2009 verify results, enter the following command:
cd /var/opt/novell/log/nss/rav
ravview vbf TEST_verify_4acd3641_report.vbf
Clustered Environment
In a clustered deployment using Novell Cluster Services (NCS), verify and rebuild results for a pool
can exist on any of the nodes in a cluster (since pools can be migrated between nodes), or even on a
node that is no longer a part of the cluster, but was hosting the pool at the time the verify/rebuild
operation was performed. It may be useful to create a common volume or directory, mounted through NFS or another access protocol on all nodes in the cluster, to store the rebuild and verify results. For example, you can have an auto-mounted path /my/shared/rav/logs/ on all the nodes in a cluster.
To verify a pool named MYPOOL on node 1, enter
ravsui -P /my/shared/rav/logs/ verify MYPOOL
If the pool is then migrated to node 2 in the cluster, you can view the logs on node 2 by entering
ravview -P /my/shared/rav/logs/ vbfn MYPOOL
Note: This works on node 2 only if the same log directory is mounted on that node.
To view a specific verify or rebuild file, enter:
ravview vbf /my/shared/rav/logs/MYPOOL_verify_4ffd74b8_report.vbf
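The newest-log lookup that the rtfn and vbfn arguments perform can also be reproduced with standard tools when scripting around ravview. A sketch using a temporary directory and hypothetical log names that follow the documented POOLNAME_rebuild_<id>_log.xml convention:

```shell
# Demonstrate the newest-log lookup that "ravview rtfn" performs, using
# a temporary directory and hypothetical log names in the documented
# POOLNAME_rebuild_<id>_log.xml convention.
LOGDIR=$(mktemp -d)        # stand-in for /var/opt/novell/log/nss/rav
POOL=DATAPOOL
touch -t 202405241458 "$LOGDIR/${POOL}_rebuild_4fbea0bd_log.xml"
touch -t 202405301052 "$LOGDIR/${POOL}_rebuild_4fc65010_log.xml"

# Newest first; the first entry is the log that "ravview rtfn" would pick.
newest=$(ls -t "$LOGDIR/${POOL}"_rebuild_*_log.xml | head -n 1)
basename "$newest"         # prints DATAPOOL_rebuild_4fc65010_log.xml
```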
B.9 refreshids
Use this command to force a reset of the NSS ID caches on your OES 11 SP2 server.
B.9.1 Syntax
Issue the command as the root user in a terminal console on the OES 11 SP2 server where the NSS
volume exists.
refreshids
refreshids --help
B.10 rights
The Trustee Rights Utility (rights) for Linux allows you to specify trustee rights for directories and
files in the NSS file system. This utility does not provide support for trustees on Linux file systems. It
is also not meant to be used to set trustees for NSS volumes on NetWare. The trustee information is
saved in the file and directory metadata in the NSS volume and works seamlessly with NetWare if
the volume is moved to a NetWare server.
- Section B.10.1, “Syntax,” on page 423
- Section B.10.2, “Options,” on page 424
- Section B.10.3, “Example,” on page 426
- Section B.10.4, “See Also,” on page 426
B.10.1 Syntax
rights [OPTIONS]
rights [TOPTIONS] trustee username
rights [DOPTIONS] delete username
rights [IOPTIONS] irf
rights [EROPTIONS] effective username
rights [SOPTIONS] show
B.10.2 Options

ACTIONS

The first argument indicates the action to be taken.

trustee
  Adds or modifies a trustee on a file or directory.

delete
  Removes a trustee from a file or directory.

irf
  Sets the inherited rights filter on a directory.

effective
  Displays a user’s effective rights.

show
  Displays the trustees and inherited rights filter.
OPTIONS

-v, --version
  Displays the program version information.

-h, --help
  Displays the help screen.
TOPTIONS

-r, --rights=MASK
  Specifies the rights to be given to this trustee. For more information, see “MASK” on page 426.
  If the No Rights (n) option is assigned, the trustee is removed.
  If rights are not specified, the default assignment is Read and File Scan rights.

-f, --file=filename
  Specifies the name of the file or directory to assign trustees to. Filename is the path for the file or directory. For example:
  -f /users/username/userfile.sxi
  --file=/designs/topsecret
  If a file or directory is not specified, the current directory is used.

-S, --softlink
  Does not follow symbolic links.
DOPTIONS

-f, --file=filename
  Specifies the name of the file or directory to delete trustees from. Filename is the path for the file or directory.
  If a file or directory is not specified, the current directory is used.

-S, --softlink
  Does not follow symbolic links.

IOPTIONS

-r, --rights=MASK
  Specifies the rights to be passed through the filter. For more information, see “MASK” on page 426.
  If rights are not specified, the default assignment is All Rights.

-f, --file=filename
  Specifies the name of the directory where the filter is to be applied. Filename is the path for the directory.
  If a directory is not specified, the current directory is used.

-S, --softlink
  Does not follow symbolic links.

EROPTIONS

-f, --file=filename
  Specifies the name of the file or directory where effective rights are to be calculated. Filename is the path for the file or directory.
  If a file or directory is not specified, the current directory is used.

-S, --softlink
  Does not follow symbolic links.

SOPTIONS

-f, --file=filename
  Specifies the file or directory for which to display the list of trustees.
  If a file or directory is not specified, the current directory is used.

-S, --softlink
  Does not follow symbolic links.
USERNAME

The username is the Fully Distinguished Name of a NetIQ eDirectory object, including the tree name. Use the username.context.treename format, such as

joe.engineer.acme_tree

If you use special characters in a username, you must escape those special characters on the command line.

For example, the $ (dollar sign) is a special character reserved to the shell and must be escaped. For the Bash shell, the command could be written in one of two ways on the command line:

rights -f /media/nss/DATA/stuff -r none \$j\$o\$e.engineer.acme_tree
rights -f /media/nss/DATA/stuff -r none '$j$o$e.engineer.acme_tree'

If you are using another shell, the special characters might need a different escape technique. Refer to the documentation for your shell for this information.
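The effect of escaping can be checked with echo before running the actual rights command; a quick sketch using the example username from above:

```shell
# Escaped with backslashes or single quotes, the $ characters reach the
# rights utility literally instead of being expanded by the shell.
echo \$j\$o\$e.engineer.acme_tree     # prints $j$o$e.engineer.acme_tree
echo '$j$o$e.engineer.acme_tree'      # prints $j$o$e.engineer.acme_tree

# Unescaped, the shell expands $j, $o, and $e as variables (empty here),
# so the username is mangled before rights ever sees it.
j= o= e=
out=$(echo "$j$o$e.engineer.acme_tree")
echo "$out"                           # prints .engineer.acme_tree
```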
MASK

The mask is a string of characters, with each character representing certain rights. The following list shows each right, the letter to use for it, and what the right is used for.

s (Supervisor)
  Has all rights to the file or directory. Also can grant or revoke the Access Control right.

r (Read)
  Grants the right to open and read files in the directory.

w (Write)
  Grants the right to open and write to files in the directory.

c (Create)
  Grants the right to create files and subdirectories. The user can also salvage (undelete) deleted files.

e (Erase)
  Grants the right to erase files and directories. The user can also purge deleted files.

m (Modify)
  Grants the right to modify the metadata of the file or directory, for example, to rename files and directories or to change file attributes.

f (File Scan)
  Grants the right to display and search on file and directory names in the file system structure.

a (Access Control)
  Grants the right to add and remove trustees, and to change trustee rights to files and directories.

none (No Rights)
  Revokes all rights.

all (All Rights)
  Grants all rights (srwcemfa).

B.10.3 Example
rights -f /designs/topsecret -r rwfc trustee joe.engineer.acme_tree
This command assigns Read, Write, File Scan, and Create rights to the /designs/topsecret
directory for user Joe in the engineer context of the acme_tree eDirectory tree.
B.10.4 See Also
For information about setting file system directory and file attributes, see “attrib” on page 405.
B.11 volumes (NCP Console Utility)
Use this utility at the ncpcon (NCP Console utility) prompt on Linux to list mounted volumes or to display information about a specified volume. For information about the NSS volumes command, see
Section A.36.1, “Volumes Command,” on page 400.
 Section B.11.1, “Syntax,” on page 427
 Section B.11.2, “Using volumes,” on page 427
 Section B.11.3, “Using volume name,” on page 427
B.11.1 Syntax

Command               Description
volumes               Displays general information about all mounted volumes.
ncpcon volumes
volume name           Displays details about the specified volume. Linux is case-sensitive, so
ncpcon volume name    make sure that you enter the volume name in all caps, such as
                      volume VOL1.
B.11.2 Using volumes
When you execute volumes at the ncpcon prompt, a list of the mounted volumes is displayed.
For example, a simple list of volumes is displayed:
Mounted Volumes
SYS
USERS
TEST
_ADMIN
VOL1
5 volumes mounted.
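If you capture this output to a file (for example, with ncpcon volumes > /tmp/volumes.txt), the list is easy to process with standard tools. A minimal sketch, using the sample output above as canned input:

```shell
# Canned copy of the sample "volumes" output shown above; on a live
# server you would capture it with: ncpcon volumes > /tmp/volumes.txt
sample='Mounted Volumes
SYS
USERS
TEST
_ADMIN
VOL1
5 volumes mounted.'

# Strip the header and the trailing summary line, leaving one volume
# name per line, then count the names.
names=$(printf '%s\n' "$sample" | sed '1d;$d')
count=$(printf '%s\n' "$names" | grep -c .)
echo "Found $count volumes"
```

The same filter works on saved output from a live server, since the header and summary lines always bracket the volume names.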
B.11.3 Using volume name
When you execute volume name, the screen displays detailed information about the specific volume.
The following is an example of the output on Linux in response to entering volume USERS at the
ncpcon prompt, or entering ncpcon volume USERS at the terminal console prompt:
Volume:             USERS
Status:             online mounted NSS "user quotas" "directory quotas" salvageable
Mount point:        /media/nss/USERS
Shadow Mount point: (null)
Capacity:           8.83 GB
ID:                 4
GUID:               9a894a30-70a3-01dd-80-00-32b3b21ae612
Pool Name:          POOL1
B.12 nssquota
Use this utility to set, get, or clear the user quota, directory quota, and volume quota on NSS volumes
and files.
 Section B.12.1, “Syntax,” on page 428
 Section B.12.2, “Options,” on page 428
 Section B.12.3, “Examples,” on page 429
B.12.1 Syntax

Command                            Description
nssquota <USERQUOTAOPTIONS>        Sets, gets, or clears the user space quota on NSS volumes.
nssquota <DIRECTORYQUOTAOPTIONS>   Sets, gets, or clears the directory quota on NSS sub-directories.
nssquota <VOLUMEQUOTAOPTIONS>      Sets, gets, or clears the volume quota on NSS volumes.
B.12.2 Options

Usage Options

Option          Description
-h, --help      Displays the help information.
-v, --version   Displays the program version information.
USERQUOTAOPTIONS

Option                           Description
-U, --userquota                  Sets the userquota options (set, get, or clear).
-s, --size=quota (in KB/MB/GB)   Storage space allowed for the specified user. The default unit is MB.
-V, --volumename=volumename      Name of the volume for which the quota is to be set.
-u, --username                   Specify the user name.
-g, --getquotas                  Gets the user quota.
-c, --clear                      Removes the quota restriction.
DIRECTORYQUOTAOPTIONS

Option                                          Description
-D, --directoryquota                            Sets the directory quota options (set, get, or clear).
-s, --size=quota (in multiples of 4 KB/MB/GB)   Size of the quota. The default unit is KB.
-d, --directoryname                             Sub-directory name.
-n, --namespace                                 Indicates the path lookup namespace (DOS, UNIX, LONG, MACINTOSH).
-g, --getquotas                                 Gets the directory quota.
-c, --clear                                     Removes the quota restriction.
VOLUMEQUOTAOPTIONS

Option                                           Description
-Q, --volumequota                                Sets, gets, or clears the volume quota options.
-V, --volumename                                 Name of the volume for which the quota is to be set.
-s, --size=quota (in multiples of KB/MB/GB/TB)   Size of the quota. The default unit is GB.
-g, --getquota                                   Gets the volume quota.
-c, --clear                                      Removes the quota restriction.

B.12.3 Examples
nssquota -U -V VOL -u wwrun.novell -s 30GB

This example sets a 30 GB user space quota for the user wwrun.novell on volume VOL.

nssquota -D -d /media/nss/VOL/test -s 4GB

This example sets a 4 GB directory quota on the /media/nss/VOL/test directory.

nssquota -Q -V VOL -s 100GB

This example sets a 100 GB volume quota on volume VOL.
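The three invocations follow one pattern: a mode flag, a target, and a size. A hedged sketch of dry-run helpers (they only print the command line instead of executing it, and the volume, user, and directory names are the example values used above) makes that pattern explicit:

```shell
# Dry-run helpers that assemble nssquota command lines without
# running them, so the syntax can be reviewed first.
user_quota_cmd() {
    # $1 = volume, $2 = user, $3 = size
    printf 'nssquota -U -V %s -u %s -s %s' "$1" "$2" "$3"
}
dir_quota_cmd() {
    # $1 = directory path, $2 = size
    printf 'nssquota -D -d %s -s %s' "$1" "$2"
}

cmd1=$(user_quota_cmd VOL wwrun.novell 30GB)
cmd2=$(dir_quota_cmd /media/nss/VOL/test 4GB)
echo "$cmd1"
echo "$cmd2"
```

Once the printed command matches the syntax above, run it directly (or pipe it to sh) on the server.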
B.13 nssraid

Use this utility to view the status and segment information of the NSS RAID 1 (mirror) devices, to restart or stop the remirror process on the NSS RAID 1 devices, and to delete a single disk mirror RAID 1 device.
 Section B.13.1, “Syntax,” on page 430
 Section B.13.2, “Options,” on page 430
 Section B.13.3, “Example,” on page 430
B.13.1 Syntax

nssraid status | remirror | abort | delete <RAID_name>

B.13.2 Options

Usage Options

Option                         Description
nssraid status                 Displays the status of all the NSS RAID 1 devices.
nssraid status <RAID_name>     Displays segment information of the NSS RAID 1 device.
nssraid remirror <RAID_name>   Restarts the remirror process on the NSS RAID 1 device.
nssraid abort <RAID_name>      Stops the remirror process on the NSS RAID 1 device.
nssraid delete <RAID_name>     Deletes a single disk mirror RAID 1 device.

B.13.3 Example
To stop remirroring the NSS RAID 1 device MYRAID1, enter
nssraid abort MYRAID1
B.14 ncsinit
Use this utility to initialize a device and to set the device to a shared state.
 Section B.14.1, “Syntax,” on page 431
 Section B.14.2, “Options,” on page 431
 Section B.14.3, “Example,” on page 431
B.14.1 Syntax
ncsinit [-f -z -u] <full_disk_path>
B.14.2 Options

Usage Options

Option   Description
-f       Forces initialization. Initializes the device even if the device is already initialized.
-z       Adds zeros to the end of the specified device, up to a maximum of 32 sectors.
-u       Undoes the initialization if the device is already initialized. You must run nlvm to rescan the device.

NOTE: The -u and -z options cannot be used together.
B.14.3 Example

To initialize and share the disk, enter

ncsinit /dev/sda

where sda is the device that is being initialized and shared.

To reinitialize a disk that is already initialized, enter

ncsinit -f /dev/sda

where sda is the device that is being reinitialized and shared.
B.15 nsschown

Use this utility to change and list the owners of NSS files and directories, to identify files and directories that have obsolete owners, and to list or change the files owned by an owner at the directory or volume level. It is available from Novell Open Enterprise Server 11 SP2 onwards.

NOTE: This utility does not support DFS junctions, DST, or POSIX NCP volumes.
B.15.1 Syntax
nsschown -l <path> -r <yes/no> {-g <oldGUID> |-u <olduserFDN> | -a | -i [-b
<bind_FDN>] [-p <password>]} [-n <newuserFDN>]
NOTE: nsschown utility does not support paths containing *.
B.15.2 Options
Usage Options
-l <path>
To list or replace the ownership of a specified directory.
-r <yes/no>
To list or replace the ownership of the specified directory and its files and sub-directories. The default is yes, which lists or replaces the entire directory tree. If set to no, only the specified directory or file is listed or replaced.
-g <oldGUID>
To list or replace all the files and directories with a specified owner GUID.
-u <Old_User_FDN>
To list or replace all the files and directories with a specified owner. The owner can be a fully
qualified FDN or root.
-a
To list or replace all the files and directories irrespective of the current owner.
-i
To list or replace all the files and directories having invalid owner IDs.
-b <Bind_FDN>
Provide a user FDN that has browse rights at the root tree level. This user is used to log in to the eDirectory tree for doing a tree-wide lookup of owner GUIDs. By default, [Public] has the browse tree right. A user FDN and password must be provided if [Public] does not have browse tree rights.
-p <password>
Provide the appropriate user FDN password for the eDirectory login. If it is not provided, you are prompted for a password.
-n <new_user_FDN>
To replace the ownership of all the files or directories found with the new user. It can be an FDN or root.
Examples
 To list the owners of all files and directories under the directory named “dir”:
nsschown -l CVOL:dir -a
 To replace the owners of all files and directories under “CVOL:dir” whose owner GUID is “1234567a-7834-0000-00-12-123456781234” with the user “.newuser.context.tree.”. This includes all files and sub-directories under the “dir” directory.
nsschown -l CVOL:dir -g 1234567a-7834-0000-00-12-123456781234 -n
.newuser.context.tree.
 To replace the ownership of “CVOL:dir” alone with the new user “.newuser.context.tree.”.
nsschown -l CVOL:dir -r no -a -n .newuser.context.tree.
432
OES 11 SP2: NSS File System Administration Guide for Linux
 To list all files and directories under “CVOL:dir” that have invalid owners, using “.loginuser.context.tree.” and “password123” as the eDirectory login credentials to validate the owners:
nsschown -l CVOL:dir -i -b .loginuser.context.tree. -p password123
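When the same ownership change must be applied to several directories, the command can be generated once per directory. A hedged dry-run sketch (the directory names CVOL:dir1 and CVOL:dir2 are hypothetical; the commands are printed, not executed):

```shell
# Print one nsschown command per directory instead of running it.
# Review the output, then execute the commands on the server.
new_owner='.newuser.context.tree.'
cmds=$(for dir in CVOL:dir1 CVOL:dir2; do
    printf 'nsschown -l %s -a -n %s\n' "$dir" "$new_owner"
done)
printf '%s\n' "$cmds"
```

Each printed line follows the "-l <path> -a -n <new_user_FDN>" pattern from the examples above.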
C Using Linux Commands to Manage NSS Devices
The Novell Storage Services (NSS) file system can exist on devices managed by any volume manager.
WARNING: The NSS configurations described in this section are untested and are not supported by
Novell Support. Also, some key features, such as clustering with Novell Cluster Services, are not
available.
Beginning with OES 11, the Enterprise Volume Management System (EVMS) was replaced by the new Novell Linux Volume Manager (NLVM). NLVM addresses the EVMS deficiencies that might have led you to use Linux commands to create pools and volumes on OES 2 SP3 and earlier. For example:
 NLVM allows you to easily create pools on devices that contain Linux partitions and volumes.
 NLVM provides a command line interface that supports scripting to create and manage pools
and volumes.
 NLVM creates pool objects in Device Mapper (DM) directly off the device, which improves NSS file system performance as compared to the way the storage objects were created in DM by EVMS.
On OES 11 and later, the NSS management code does not support the pools that you create with the
Linux mkfs command. You can use NSS commands to view and activate/deactivate the pool, but you
cannot use NSS management tools (NSSMU, NLVM commands, and the Storage plug-in to
iManager) to perform NSS management tasks such as expand, delete, and move. You must use mkfs
to create volumes on the pool, but once created, the volumes can be detected and managed with NSS
commands and management tools.
For these reasons, we strongly discourage you from using the Linux mkfs command to create NSS
pools and volumes on OES 11 and later.
These instructions are only for those who can accept the risks involved with using an unsupported
method. Everyone else should follow the instructions in Chapter 3, “Installing and Configuring
Novell Storage Services,” on page 33.
This section describes the following:
 Section C.1, “Creating and Mounting NSS Pools and Volumes by Using Linux Commands,” on
page 436
 Section C.2, “Configuring Default Mount Settings for NSS Pools and Volumes,” on page 439
 Section C.3, “Expanding NSS Pools,” on page 441
 Section C.4, “Deleting NSS Pools,” on page 441
 Section C.5, “Copying Data from a Linux-Managed Pool to an NSS-Managed Pool,” on page 441
C.1 Creating and Mounting NSS Pools and Volumes by Using Linux Commands
Use the following procedure to create and mount an NSS pool and volume.
 Section C.1.1, “Using the Linux mkfs Command to Create NSS Pools,” on page 436
 Section C.1.2, “Creating a Partition,” on page 437
 Section C.1.3, “Creating and Mounting an NSS Pool,” on page 437
 Section C.1.4, “Creating and Mounting an NSS Volume,” on page 438
C.1.1 Using the Linux mkfs Command to Create NSS Pools
The Linux mkfs command is used to build a file system on Linux. This section describes how to use
the Linux mkfs command to create an NSS file system pool.
WARNING: Be careful with this command. The mkfs command destroys any existing data on the
specified device or partition.
Syntax
mkfs [ -t fs-type ] [ fs-options ] filesys
Option or Parameter

-t fs-type
Specifies the type of file system to be built, such as nsspool or nssvol. For example: -t nsspool or -t nssvol.

fs-options
File-system-specific options to be passed to the real file system builder. When creating an NSS pool, use the fs-option -n poolname, replacing poolname with the actual name of the pool you want to create. When creating an NSS volume, use the fs-option -n volname, replacing volname with the actual name of the volume you want to create.

filesys
When using nsspool as the fs-type, replace filesys with the device name (devname, such as /dev/hda1 or /dev/sdb2). When using nssvol as the fs-type, replace filesys with the pool name (poolname, such as POOL2).
Examples

mkfs -t nsspool -n poolname devname

mkfs -t nssvol -n volname poolname
C.1.2 Creating a Partition

Use the following procedure to create a /dev/partition device, such as /dev/hda2 or /dev/sda5.
1 Log in to the server as the root user, or use the su command to become root.
2 Go to the YaST > System Partitioner, then select Create a Partition.
3 Select a device with free space available.
4 Create a partition. The name is automatically specified, such as hda2 or sda5.
5 If you do not want to use all of the available free space, specify the amount of space to use.
Make sure the partition size is sufficient for the NSS pool you want to create later; the partition
size determines the pool size.
6 Select Unformatted.
7 Do not specify the mount point; leave the Mount Point field blank.
8 Click OK to create the partition.
9 Continue with Section C.1.3, “Creating and Mounting an NSS Pool,” on page 437.
C.1.3 Creating and Mounting an NSS Pool
1 Log in to the server as the root user, or use the su command to become root.
2 At a system command prompt, enter
mkfs -t nsspool -n poolname devname
Replace poolname with the name you want to use, such as POOL2. Replace devname with the
device you created in Section C.1.2, “Creating a Partition,” on page 437.
IMPORTANT: Do not use the ampersand (&) or pound (#) character in pool and volume names; they create problems in the /etc/fstab file. For information about other naming conventions, see
“Naming NSS Storage Objects” on page 61.
For example, enter
mkfs -t nsspool -n POOL2 /dev/hda2
where POOL2 is the pool name and /dev/hda2 is the device name.
3 If the location where you want to mount the NSS pool does not already exist, create the mount
point. At the server command prompt, enter
mkdir /mnt/pooldir
The /mnt directory is the default location for mounting devices. If you are using a different
location, replace /mnt with that path. Replace pooldir with the name (path) you want to use,
such as POOL2. If you want to make it a hidden directory, begin the directory name with a period,
such as .POOL2.
For example, enter
mkdir /mnt/.POOL2
where /mnt/.POOL2 is the mount point for your pool.
4 Mount the NSS pool. At a system command prompt, enter
mount -t nsspool devname mountpoint -o name=poolname
For example, enter
mount -t nsspool /dev/hda2 /mnt/.POOL2 -o name=POOL2
5 Use NSSMU to create a Storage object in eDirectory for the newly created pool.
5a At a terminal prompt, enter
nssmu
5b From the NSSMU menu, select Pools.
5c Select the pool from the Pools list, then press F4 (NDS Update).
6 Continue with Section C.1.4, “Creating and Mounting an NSS Volume,” on page 438.
C.1.4 Creating and Mounting an NSS Volume
1 Log in to the server as the root user, or use the su command to become root.
2 Make sure the pool where you want to create the volume is mounted.
For information, see Section C.1.3, “Creating and Mounting an NSS Pool,” on page 437.
3 Create an NSS volume. At the system console, enter
mkfs -t nssvol -n volname poolname
Replace volname with the name you want to use, such as NSSV1. Replace poolname with the NSS
pool where the volume will reside, such as POOL2. This is the pool you created in Section C.1.3,
“Creating and Mounting an NSS Pool,” on page 437.
IMPORTANT: Do not use the ampersand (&) or pound (#) character in pool and volume names; they create problems in the /etc/fstab file. For information about other naming conventions, see
“Naming NSS Storage Objects” on page 61.
For example, enter
mkfs -t nssvol -n NSSV1 POOL2
4 If the location where you want to mount the NSS volume does not already exist, create the
directory path. At the server command prompt, enter
mkdir /media/nss/volname
The /media/nss path is the default location for NSS volumes. If you are using a different
location, replace /media/nss with that path. Replace volname with the name of the volume you
created in Step 3.
For example, enter
mkdir /media/nss/NSSV1
5 Mount the NSS volume. At a system command prompt, enter
mount -t nssvol VOL volmountpoint -o name=volname
For example, enter
mount -t nssvol VOL /media/nss/NSSV1 -o name=NSSV1
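The whole sequence from Sections C.1.3 and C.1.4 can be collected into one script. The sketch below is a dry run: run() only prints each command so you can review the sequence first (POOL2, NSSV1, and /dev/hda2 are the example names used above). To execute it for real, remove the run wrapper and run the commands as the root user.

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { printf '%s\n' "$*"; }

POOL=POOL2
VOL=NSSV1
DEV=/dev/hda2

run mkfs -t nsspool -n "$POOL" "$DEV"
run mkdir "/mnt/.$POOL"
run mount -t nsspool "$DEV" "/mnt/.$POOL" -o name="$POOL"
run mkfs -t nssvol -n "$VOL" "$POOL"
run mkdir "/media/nss/$VOL"
run mount -t nssvol VOL "/media/nss/$VOL" -o name="$VOL"
```

Remember that the mkfs step destroys any existing data on the device, so verify the printed device name before executing the real commands.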
C.2 Configuring Default Mount Settings for NSS Pools and Volumes
Whenever you create NSS pools and volumes from the command line, their mount information is not
added by default to the /etc/fstab configuration file. After creating the pool and volume, make
sure to edit the /etc/fstab configuration file to add entries for them.
 Section C.2.1, “Understanding Entries in the /etc/fstab Configuration File,” on page 439
 Section C.2.2, “Adding NSS Pool and Volume Mount Information to /etc/fstab,” on page 440
C.2.1 Understanding Entries in the /etc/fstab Configuration File
The /etc/fstab file is a configuration file that contains information about all the devices and
partitions on your Linux computer. Each line is an entry that describes where and how to mount one
device or one partition. The following table describes the field information needed for NSS pools and
volumes.
Table C-1 Options in the /etc/fstab File

Device name: The location of the device or partition you want to mount. Examples: /dev/hda3, /dev/sdb1

Mount point: The default location where the device or partition is to be mounted if the mount point is not otherwise specified in a mount command. Examples: /mnt/.pool2, /media/nss/NSSV1

File system type: The file system type of the device or partition. Examples: nsspool, nssvol

Mount options: Lists the comma-delimited mount options for the device or partition. Use a comma without spaces between options. Examples: noauto,rw,name=POOL2 or noauto,rw,name=NSSV1 or name=POOL2,noatime,nodiratime or name=NSSV1,noatime,nodiratime

auto or noauto: Use auto if you want the volume to mount automatically after a system reboot. Use noauto if you want the device to be mounted only when you explicitly issue the mount command. Do not use the auto option for encrypted NSS volumes; they require a password to be entered on the first mount after a system reboot.

rw: Use rw to mount the device as read-write.

name=<poolname | volname>: Specify the poolname or volname of the partition to be mounted.

noatime: Use noatime for volumes when you want a file’s access time (atime) to not be updated for reads.

nodiratime: Use nodiratime for volumes when you want a directory’s access time to not be updated for enumerations.

Dump option number: Designates if the file system should be dumped with the Linux Dump utility. A value of 0 indicates that Dump should ignore this file system. Set this value to 0 for NSS pools and volumes.

File system check number: Designates if the file system should be checked with the Linux fsck utility. A value of 0 indicates that fsck should ignore this file system. Set this value to 0 for NSS pools and volumes.
C.2.2 Adding NSS Pool and Volume Mount Information to /etc/fstab
For information about completing the fields for an entry in the /etc/fstab file, see Section C.2.1,
“Understanding Entries in the /etc/fstab Configuration File,” on page 439.
1 Log in as the root user, or use the su command to become root.
2 In a text editor, open the /etc/fstab configuration file.
IMPORTANT: When working in /etc/fstab, make sure not to leave any stray characters or
spaces in the file. This is a configuration file, and it is highly sensitive to such mistakes.
3 Locate the area in /etc/fstab where partitions are defined, then add a line defining the mount
information for the NSS pool.
devname mountpoint fstype mountoptions dump# fsck#

For example, suppose you created POOL2 on device /dev/hda2 and mounted it at /mnt/.POOL2. In the /etc/fstab file, the line to add would be

/dev/hda2 /mnt/.POOL2 nsspool noauto,rw,name=POOL2 0 0
If you want to mount an NSS pool automatically after a system reboots, make sure to use the auto option.
4 Locate the area in /etc/fstab below where you entered pool information, then add a line
defining the mount information for the NSS volume.
devname mountpoint fstype mountoptions dump# fsck#
Make sure to place the NSS volume entry after its pool entry to ensure that the pool is mounted
before the volumes in it.
For example, suppose you created volume NSSV1 and mounted it at /media/nss/NSSV1. In the /etc/fstab file, the line to add would be

NSSV1 /media/nss/NSSV1 nssvol noauto,rw,name=NSSV1 0 0
If you want to mount a non-encrypted NSS volume automatically after a system reboots, make sure to use the auto option for both the pool it is in and the volume.
5 Save the /etc/fstab file to accept your changes.
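Putting the two entries together, and assuming you want both to mount automatically at boot (and that the volume is not encrypted), the /etc/fstab fragment for the POOL2/NSSV1 example would look like this, with the pool line placed before the volume line:

```
/dev/hda2  /mnt/.POOL2       nsspool  auto,rw,name=POOL2  0 0
NSSV1      /media/nss/NSSV1  nssvol   auto,rw,name=NSSV1  0 0
```

Replace auto with noauto on both lines if you prefer to mount them manually.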
C.3 Expanding NSS Pools

After using Linux utilities to move the pool to a larger device and to increase the partition size to the size of the device, use the NSS Update utility to increase the pool size up to the size of the new device. For instructions, see Section B.6, “nssupdate,” on page 415.
C.4 Deleting NSS Pools
To delete a pool that you created with the mkfs command, unmount the pool and volume, then delete
the partition that you created in Section C.1.2, “Creating a Partition,” on page 437. You can use fdisk,
YaST, or parted to delete the partition. You can also use the nlvm delete partition
<partition_name> command.
C.5 Copying Data from a Linux-Managed Pool to an NSS-Managed Pool
There is no direct method for converting a Linux-managed pool so that it is recognized and managed by NSS management tools. You can create an NSS pool with NSS tools, and then copy the files from the old pool to the new pool, or use backup and restore to transfer them.
D Comparison of NSS on NetWare and NSS on Linux
This section compares features and capabilities of Novell Storage Services on NetWare 6.5 SP8 and
Novell Open Enterprise Server 11 SP2 servers.
Management interfaces

NetWare 6.5 SP8: Novell iManager; NSSMU for NetWare; utilities in the server console (NSSMU, RIGHTS, FLAG); RIGHTS in the server console; NSS commands in the NSS Console (NSSCON); Novell Remote Manager for NetWare; Novell NetStorage for NetWare.

OES 11 SP2: Novell iManager; NSSMU for Linux; NLVM (for more information, see OES 11 SP2: NLVM Reference); utilities in the terminal console (NSSMU, RIGHTS, NSSCON, ATTRIB, RAVSUI, RAVVIEW, nssquota, ncsinit, nbackup(1) (http://www.novell.com/documentation/oes/smsadmin/data/nbackup.1.html)); Novell Remote Manager for Linux (for Dynamic Storage Technology shadow volumes and for managing NCP Server connections to NSS volumes); Novell NetStorage for Linux.

File system trustees, trustee rights, and inherited rights filter to control access to directories and files

NetWare 6.5 SP8: Files and Folders plug-in to iManager; Novell Remote Manager for NetWare; Novell NetStorage for NetWare (via Web browser only, not Web-based Distributed Authoring and Versioning (WebDAV)); Novell Client; RIGHTS utility for NetWare.

OES 11 SP2: Files and Folders plug-in to iManager; Novell NetStorage for Linux (via Web browser only, not WebDAV); Novell Client; RIGHTS utility for Linux.
File system directory and file attributes to control functions available for directories and files

NetWare 6.5 SP8: Files and Folders plug-in to iManager; Novell NetStorage for NetWare; Novell Client; Novell Remote Manager for NetWare.

OES 11 SP2: Files and Folders plug-in to iManager; Novell NetStorage for Linux; Novell Client.

Directory quotas management (requires the Directory Quotas attribute for the volume)

NetWare 6.5 SP8: Files and Folders plug-in to iManager; Novell NetStorage for NetWare; Novell Remote Manager for NetWare.

OES 11 SP2: Files and Folders plug-in to iManager; Novell NetStorage for Linux.

User space quota management (requires the User Space Quotas attribute for the volume)

NetWare 6.5 SP8: Storage plug-in to Novell iManager.

OES 11 SP2: Storage plug-in to Novell iManager.

Default mount location for NSS pools

NetWare 6.5 SP8: Not applicable.

OES 11 SP2: /opt/novell/nss/mnt/.pools/poolname

Default mount location for NSS volumes

NetWare 6.5 SP8: Server root.

OES 11 SP2: /media/nss/volumename

File system type (as recognized and reported by the operating system)

NetWare 6.5 SP8: nss

OES 11 SP2: nssvol

File access protocols

NetWare 6.5 SP8: NCP; Native File Access Protocols (AFP, CIFS, and NFS).

OES 11 SP2: NCP; Novell AFP for Linux; Novell CIFS for Linux; CIFS/Samba using Novell Samba; Linux NFS (version 3). Linux NFS and Samba require users to be Linux-enabled with Linux User Management. The service must also be enabled.

Interface

NetWare 6.5 SP8: 64-bit

OES 11 SP2: 64-bit

Character format

NetWare 6.5 SP8: Unicode

OES 11 SP2: Unicode

Maximum device size recognized (physical or logical)

NetWare 6.5 SP8: 2 TB

OES 11 SP2: 2E64 512-byte sectors
Maximum software RAID device size (combined total for all member segments)

NetWare 6.5 SP8: 2 TB

OES 11 SP2: 8 TB. NOTE: The maximum segment size corresponds only to DOS partitioned devices. If all devices are using GPT, the size limits are removed. Since pools are currently limited to 8 TB, RAID 1 sizes for pool objects are also limited to 8 TB.

Minimum software RAID segment size

NetWare 6.5 SP8: 12 MB per segment

OES 11 SP2: 12 MB per segment

Maximum partition size

NetWare 6.5 SP8: 2 TB. Valid range: 10 MB to 2 TB.

OES 11 SP2: 8 TB. Valid range: 12 MB to 8 TB. NOTE: The maximum segment size corresponds only to GPT partitions. If all devices are partitioned using DOS, the size is limited to 2 TB.

Maximum number of partitions per pool

NetWare 6.5 SP8: No practical limit

OES 11 SP2: No practical limit

Maximum pool size (created by using at least 4 partitions of up to 2 TB each)

NetWare 6.5 SP8: 8 TB

OES 11 SP2: 8 TB

Minimum pool size

NetWare 6.5 SP8: 10 MB

OES 11 SP2: 12 MB

Maximum size of a volume

NetWare 6.5 SP8: Up to 8 TB, depending on the pool size and available space in the pool.

OES 11 SP2: Up to 8 TB, depending on the pool size and available space in the pool.

Maximum file size

NetWare 6.5 SP8: Up to 8 TB, depending on the volume size and available space in the volume.

OES 11 SP2: Up to 8 TB, depending on the volume size and available space in the volume.

Maximum number of files per volume

NetWare 6.5 SP8: Up to 8 trillion, regardless of how many name spaces are loaded. NOTE: NSS can support this. You are limited by the browser and application ability, of course.

OES 11 SP2: Up to 8 trillion, regardless of how many name spaces are loaded. NOTE: NSS can support this. You are limited by the browser and application ability, of course.

Maximum number of files open concurrently

NetWare 6.5 SP8: 1 million

OES 11 SP2: 1 million

Maximum number of volumes per server

NetWare 6.5 SP8: 255 plus the sys: volume. You can mount NSS volumes beyond 256, but they are not visible or accessible through the normal NetWare APIs.

OES 11 SP2: No practical limit on the number of NSS data volumes.

Time to mount a volume

NetWare 6.5 SP8: Requires only a few seconds, thanks to journaling.

OES 11 SP2: Requires only a few seconds, thanks to journaling.
Volume name space

NetWare 6.5 SP8: Accommodates all name spaces (DOS, Macintosh, Long, and UNIX). Long is the default name space. The Lookup Name Space attribute allows you to set the default name space used when mounting volumes. Directory names and filenames are case insensitive.

OES 11 SP2: Accommodates all name spaces (DOS, Macintosh, Long, and UNIX). Long is the default name space. In OES 2 and later, the Lookup Name Space attribute allows you to set the default name space used when mounting volumes. Directory names and filenames are case insensitive with the Long name space.

Minimum server memory required to activate a volume

NetWare 6.5 SP8: Requires only 4 MB available RAM to activate a single volume of any size and any number of files. Loads a file’s metadata into memory only as you access the file.

OES 11 SP2: Requires only 4 MB available RAM to activate a single volume of any size and any number of files. Loads a file’s metadata into memory only as you access the file.

File access time

NetWare 6.5 SP8: Same for each file, regardless of its location on the volume.

OES 11 SP2: Same for each file, regardless of its location on the volume.

Error correction and data recovery time on system failure

NetWare 6.5 SP8: Journaling file system logs changes. On system failure, replays the most recent transactions to confirm validity, then repairs errors or rolls back to the original condition, typically in 15 to 60 seconds, unless the volume is corrupted.

OES 11 SP2: Journaling file system logs changes. On system failure, replays the most recent transactions to confirm validity, then repairs errors or rolls back to the original condition, typically in 15 to 60 seconds, unless the volume is corrupted.

Repair of corrupted pools and volumes

NetWare 6.5 SP8: Ongoing journaling of the file system; if the pool metadata structure is corrupted, use the NSS verify and rebuild functions.

OES 11 SP2: Ongoing journaling of the file system; if the pool metadata structure is corrupted, use the RAVSUI utility to verify and rebuild the volume.

Time to repair corrupted volume

NetWare 6.5 SP8: From a few seconds to several hours, depending on the volume size.

OES 11 SP2: From a few seconds to several hours, depending on the volume size.

Multiple connection paths to storage media

NetWare 6.5 SP8: Yes, Media Manager multipath I/O.

OES 11 SP2: Use a native Linux multiple path I/O solution.

Software RAID support

NetWare 6.5 SP8: RAID 0 (striping), RAID 1 (mirroring), RAID 5 (striping with parity), RAID 0+1 (mirroring RAID 0 devices), RAID 5+1 (mirroring RAID 5 devices).

OES 11 SP2: RAID 0 (striping), RAID 1 (mirroring), RAID 5 (striping with parity), RAID 0+1 (mirroring RAID 0 devices), RAID 5+1 (mirroring RAID 5 devices).
Volume encryption

NetWare 6.5 SP8: Yes

OES 11 SP2: Yes. You must mount encrypted volumes only from NSSMU on the first mount after a system reboot so that you can enter the password. The NSSCON utility does not support entering a password from the command line.

Data shredding

NetWare 6.5 SP8: Yes, up to 7 times

OES 11 SP2: Yes, up to 7 times

File compression

NetWare 6.5 SP8: Yes

OES 11 SP2: Yes

Data migration

NetWare 6.5 SP8: Yes

OES 11 SP2: Yes

Directory quotas

NetWare 6.5 SP8: Yes

OES 11 SP2: Yes

User space quotas (user space restrictions)

NetWare 6.5 SP8: Yes

OES 11 SP2: Yes

Salvage or purge deleted files, directories, or volumes

NetWare 6.5 SP8: Yes

OES 11 SP2: Yes

Transaction Tracking System (TTS)

NetWare 6.5 SP8: Yes

OES 11 SP2: Not supported.

Read ahead blocks

NetWare 6.5 SP8: Yes

OES 11 SP2: Yes

File save time

NetWare 6.5 SP8: Provides the Flush Files Immediately attribute for NSS volumes to write files to disk on save instead of waiting for the next disk write cycle. This helps prevent possible data loss between disk write cycles.

OES 11 SP2: Provides the group write options and timers. For information, see Section 27.3, “Configuring or Tuning Group I/O,” on page 334.

File-level snapshot (make a temporary snapshot copy of an open file for backup)

NetWare 6.5 SP8: Yes

OES 11 SP2: No

Modified File List

NetWare 6.5 SP8: Yes

OES 11 SP2: Yes

Pool snapshot (retain a point-in-time version of a pool using block-level copy on write)

NetWare 6.5 SP8: Yes; allows backup of block-level changes only, without deactivating the volume. Uses a brief freeze-release process to capture information for the last remaining open files.

OES 11 SP2: Yes. The stored-on pool must be on a separate partition. Pool snapshots are not supported for clustered NSS pools on Linux. If you need content tracking and trustee support, use NCP volumes on Linux Reiser, XFS, or Ext3 file systems, then set the file system’s journaling mode to the Journaling option.
Feature Description
NSS for NetWare 6.5 SP8
NSS for OES 11 SP2
Backup systems support
  NSS for NetWare 6.5 SP8: See the NW 6.5 SP8: SBCON Administration Guide.
  NSS for OES 11 SP2: See the OES 11 SP2: Storage Management Services Administration Guide for Linux. For an overview of backup resources on Linux, see Chapter 26, “Managing Backup and Restore for Data and Trustee Information,” on page 329.

Distributed File Services for moving and splitting NSS volumes
  NSS for NetWare 6.5 SP8: Yes
  NSS for OES 11 SP2: Yes

Novell Archive and Version Services
  NSS for NetWare 6.5 SP8: Yes
  NSS for OES 11 SP2: Yes

Device maintenance support
  NSS for NetWare 6.5 SP8: Activate and deactivate devices by pool.
  NSS for OES 11 SP2: Activate and deactivate devices by pool.

CD and DVD device recognition
  NSS for NetWare 6.5 SP8: Automatic process with full support for UDF, ISO 9660, and Macintosh HFS formats. Use CDs and DVDs as read-only NSS volumes.
  NSS for OES 11 SP2: No; use Linux POSIX file system options instead.
CD and DVD image files
  NSS for NetWare 6.5 SP8: Activate as read-only NSS volumes.
  NSS for OES 11 SP2: No; use Linux POSIX file system options instead.

Ability to access DOS partitions on the NetWare server
  NSS for NetWare 6.5 SP8: Load dosfat.nss to treat the partition as a standard NSS volume.
  NSS for OES 11 SP2: No; use native Linux file system options instead.

Operating system version detection
  NSS for NetWare 6.5 SP8: Default process
  NSS for OES 11 SP2: Default process

Cache balancing for NSS cache buffers
  NSS for NetWare 6.5 SP8: Yes; for information, see Tuning NSS Performance on NetWare.
  NSS for OES 11 SP2: Yes; for information, see “Tuning NSS Performance” on page 333.

Tuning I/O write behavior
  NSS for NetWare 6.5 SP8: Set I/O tuning parameters for NSS on NetWare. For information, see Setting the File and Buffer Flush Timers.
  NSS for OES 11 SP2: Set group I/O write parameters for NSS on Linux. For information, see “Configuring or Tuning Group I/O” on page 334.
Dynamic Storage Technology (DST)
  NSS for NetWare 6.5 SP8: DST supports using NetWare iSCSI target devices to store NSS volumes in the shadow volume pair. The target devices are attached to the OES 11 SP2 server by using the Linux iSCSI initiator software. For information, see “iSCSI Block Storage Devices” in the OES 11 SP2: Dynamic Storage Technology Administration Guide.
  NSS for OES 11 SP2: DST supports NSS volumes on OES 11 SP2 servers as the primary or secondary volume in the shadow volume. For information, see the OES 11 SP2: Dynamic Storage Technology Administration Guide. For information about NSS volume attributes and features that are supported in DST shadow volumes, see “Using NSS Volumes in DST Shadow Volume Pairs” in the OES 11 SP2: Dynamic Storage Technology Administration Guide.
E  Comparison of NSS on Linux and NCP Volumes on Linux POSIX File Systems
This section compares features and capabilities of the Novell Storage Services file system on Novell
Open Enterprise Server 11 SP2 to those of NCP volumes on Linux POSIX file systems such as Ext3,
XFS, BtrFS, and Reiser. For information, see “Managing NCP Volumes” in the OES 11 SP2: NCP
Server for Linux Administration Guide.
For information to help you choose from among the numerous Linux file system offerings, see
“Overview of Linux POSIX File Systems” in the OES 11 SP2: Linux POSIX Volume Administration
Guide.
Management interfaces
  NSS on OES 11 SP2: Novell iManager Storage plug-in (for information, see Section 9.1, “Novell iManager and Storage-Related Plug-Ins,” on page 77); NSSMU for Linux; NLVM; NSS utilities in a terminal console; NSS commands in the NSS Console (NSSCON); Novell Remote Manager for Linux (browse only).
  NCP Volumes on Linux POSIX File Systems: YaST > Partitioner for managing devices; Novell Remote Manager for Linux (Managing Shares); NCP commands in the NCP Console (NCPCON); various Linux commands and utilities in a terminal console; NSSMU; NLVM commands.
File system trustees and trustee rights to control access to directories and files
  NSS on OES 11 SP2: Yes; works with or without concurrent running of NCP Server.
  NCP Volumes on Linux POSIX File Systems: Yes; requires NCP Server to enforce the rights and access on the extended attributes.
File access protocols
  NSS on OES 11 SP2: NCP; CIFS/Samba using Novell Samba; Linux NFS (version 3); Novell AFP for Linux; Novell CIFS for Linux. Linux NFS and Samba require users to be Linux-enabled with Linux User Management. The service must also be LUM-enabled.
  NCP Volumes on Linux POSIX File Systems: NCP; CIFS/Samba using Novell Samba. NCP volumes and Samba require users to be Linux-enabled with Linux User Management. The service must also be LUM-enabled.

File system directory and file attributes to control functions available for directories and files
  NSS on OES 11 SP2: Files and Folders plug-in to iManager; Novell NetStorage; Novell Client; Novell Remote Manager for Linux. See “Viewing Key NSS Directory and File Attributes as Linux POSIX Permissions” in the OES 11 SP2: File Systems Management Guide.
  NCP Volumes on Linux POSIX File Systems: Not applicable. Use POSIX file and directory attributes.
Directory quotas
  NSS on OES 11 SP2: Yes; requires the Directory Quotas attribute to be enabled.
  NCP Volumes on Linux POSIX File Systems: Yes

User space quotas (user space restrictions)
  NSS on OES 11 SP2: Yes, for OES Linux SP2 and later.
  NCP Volumes on Linux POSIX File Systems: Yes, if the Linux file system being used under the NCP share supports user quotas and the Linux file system resides on a local, iSCSI, or Fibre Channel drive. All users of the NCP volume must be LUM-enabled. Manage the user quotas by using the Linux file system tools.

Default mount location for NSS pools
  NSS on OES 11 SP2: /opt/novell/nss/mnt/.pools/poolname
  NCP Volumes on Linux POSIX File Systems: Not applicable.
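For NCP volumes on a Linux POSIX file system, the user quotas described above are managed with the standard Linux quota tools. The following is an illustrative sketch only; the device, mount point, user name, and limits are example values, not from this guide, and the file system must be mounted with quota support:

```
# Illustrative example: enable and set user quotas on an Ext3 file
# system that backs an NCP volume (device and paths are placeholders).

# 1. Remount the file system with user quota support
#    (or add usrquota to its entry in /etc/fstab).
mount -o remount,usrquota /dev/sdb1 /srv/ncpvol

# 2. Initialize the quota files and turn quotas on.
quotacheck -cum /srv/ncpvol
quotaon /srv/ncpvol

# 3. Set a 2 GB soft / 2.5 GB hard block limit for a LUM-enabled user
#    (block limits are in 1 KB units; inode limits left unrestricted).
setquota -u jsmith 2097152 2621440 0 0 /srv/ncpvol

# 4. Report current usage and limits for all users.
repquota /srv/ncpvol
```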
Volume name space
  NSS on OES 11 SP2: Long is the default name space, which is case insensitive. You can specify the UNIX name space when mounting the NSS volume to make its directory names and filenames case sensitive. Using the UNIX name space slows performance compared to using Long. For example:

    ncpcon mount /opt=ns=<long|unix|dos|mac> <volume_name>

  The name space options are case sensitive.
  NCP Volumes on Linux POSIX File Systems: UNIX; no support for case insensitive names.
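For example (VOL1 is a hypothetical volume name, not one from this guide), you might mount a volume with the case-sensitive UNIX name space, then later remount it with the default Long name space:

```
# Mount with the case-sensitive UNIX name space
# (the ns= value must be lowercase).
ncpcon mount /opt=ns=unix VOL1

# Dismount, then remount with the default case-insensitive Long name space.
ncpcon dismount VOL1
ncpcon mount /opt=ns=long VOL1
```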
Salvage for deleted volumes, directories, and files
  NSS on OES 11 SP2: Yes
  NCP Volumes on Linux POSIX File Systems: No

Volume encryption
  NSS on OES 11 SP2: Yes, for OES SP2 and later
  NCP Volumes on Linux POSIX File Systems: Yes, for Reiser

File compression
  NSS on OES 11 SP2: Yes
  NCP Volumes on Linux POSIX File Systems: No

Data shredding (secure deletion)
  NSS on OES 11 SP2: Yes, up to 7 times
  NCP Volumes on Linux POSIX File Systems: No

Online resizing of volumes and pools
  NSS on OES 11 SP2: Yes
  NCP Volumes on Linux POSIX File Systems: Yes, depending on the file system
Multiple I/O paths to storage media
  NSS on OES 11 SP2: No; NSS-specific multipath I/O tools are not available on Linux. Use the Linux Device Mapper driver support for multipath I/O on devices where you plan to create NSS file systems. For information, see Managing Multipath I/O for Devices (http://www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/data/multipathing.html) in the SLES 11: Storage Administration Guide (http://www.suse.com/documentation/sles11/stor_admin/?page=/documentation/sles11/stor_admin/data/bookinfo.html).
  NCP Volumes on Linux POSIX File Systems: Use the Linux Device Mapper driver support for multipath I/O on devices. (NCP is not required to make this work.)
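A minimal sketch of a Device Mapper multipath configuration for such a device follows. The WWID and alias are placeholders, not values from this guide; see the SLES documentation referenced above for the authoritative syntax:

```
# /etc/multipath.conf (illustrative fragment; WWID and alias are placeholders)
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid   3600508b4000156d700012000000b0000
        alias  nsspool0
    }
}
```

After editing the file, restart the multipathd service and verify the paths with `multipath -ll`; the pool or file system is then created on the resulting /dev/mapper device rather than on an individual path device.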
Software RAID support
  NSS on OES 11 SP2: RAID 0, 1, 5, 0+1, and 5+1.
  NCP Volumes on Linux POSIX File Systems: RAID 0, 1, 4, 5, and 6. RAID 0+1 can be created using the Linux mdadm(8) command as a complex RAID using the RAID 0+1 option, or as a nested RAID.

Pool snapshot
(retain point-in-time version of a pool using block-level copy on write)
  NSS on OES 11 SP2: Yes, using iManager, NSSMU, or the NLVM command line interface. Snapshots of cluster-enabled pools are not supported.
  NCP Volumes on Linux POSIX File Systems: Depends on the file system. NLVM supports device snapshots for the devices it manages. (NCP is not required to make this work.)
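As a sketch of the nested approach mentioned above (the device names are examples, not from this guide), two RAID 0 stripe sets can be mirrored to form a nested RAID 0+1 array:

```
# Illustrative only: nested RAID 0+1 with mdadm (device names are examples).

# Create two RAID 0 stripe sets from four disk partitions.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd1 /dev/sde1

# Mirror the two stripe sets to form the nested RAID 0+1 device.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1

# Verify the array status.
cat /proc/mdstat
```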
Hard links
  NSS on OES 11 SP2: Yes; enhanced hard links support is available in OES 2 and later. For information, see Chapter 24, “Managing Hard Links,” on page 307.
  NCP Volumes on Linux POSIX File Systems: Yes

Backup support
  NSS on OES 11 SP2: Yes, using Novell Storage Management Services for Linux. For information, see Chapter 26, “Managing Backup and Restore for Data and Trustee Information,” on page 329.
  NCP Volumes on Linux POSIX File Systems: Yes. You can use SMS. For more information, see POSIX File System Support.

Data migration from NSS volumes on NetWare
  NSS on OES 11 SP2: Yes
  NCP Volumes on Linux POSIX File Systems: Yes

Novell Archive and Version Services
  NSS on OES 11 SP2: Yes. For information, see the OES 11 SP2: Novell Archive and Version Services Administration Guide.
  NCP Volumes on Linux POSIX File Systems: No

Novell Distributed File Services
  NSS on OES 11 SP2: Yes, for OES 2 and later. NSS volumes on OES 11 SP2 can contain junctions or be a junction target. NSS volumes on OES 1 can be a junction target, but junctions are not supported in the volume. For information, see the OES 11 SP2: Novell Distributed File Services Administration Guide for Linux.
  NCP Volumes on Linux POSIX File Systems: Only as targets of junctions in OES 2 and later. DFS does not support junctions on NCP volumes.

Dynamic Storage Technology
  NSS on OES 11 SP2: Yes. For information, see the OES 11 SP2: Dynamic Storage Technology Administration Guide.
  NCP Volumes on Linux POSIX File Systems: Not available.
Novell Cluster Services for Linux
  NSS on OES 11 SP2: Yes. For information, see the OES 11 SP2: Novell Cluster Services for Linux Administration Guide and “Creating Cluster-Enabled Pools and Volumes”.
  NCP Volumes on Linux POSIX File Systems: Yes; cluster the Linux POSIX file system, then create the NCP volume on it. For information, see “Configuring and Managing Cluster Resources for Shared LVM Volume Groups”. You can NCP-enable the clustered Linux volume as you create it by using NSSMU or the nlvm create linux volume command. For information, see “Clustering LVM Volume Groups with Novell Cluster Services” in the OES 11 SP2: Linux POSIX Volume Administration Guide, and “Create Linux Volume” in the OES 11 SP2: NLVM Reference.
Auditing
  NSS on OES 11 SP2: Yes
  NCP Volumes on Linux POSIX File Systems: No

Novell Transaction Tracking System (TTS)
  NSS on OES 11 SP2: No
  NCP Volumes on Linux POSIX File Systems: Use the Journal mode for Linux POSIX file systems that support journaling.

Operating system version detection
  NSS on OES 11 SP2: Default process
  NCP Volumes on Linux POSIX File Systems: Default process

Device maintenance support
  NSS on OES 11 SP2: Activate and deactivate devices by pool.
  NCP Volumes on Linux POSIX File Systems: Activate and deactivate devices using Linux tools.

Cache balancing for NSS cache buffers
  NSS on OES 11 SP2: You can specify a minimum cache buffer size. For information, see “Tuning NSS Performance” on page 333.
  NCP Volumes on Linux POSIX File Systems: Integrated with the Linux file system cache.

CD and DVD device recognition
  NSS on OES 11 SP2: No; not managed by NSS. Use Linux services to mount CDs and DVDs as Linux volumes.
  NCP Volumes on Linux POSIX File Systems: Yes, default.

Ability to access DOS partitions as on a NetWare server
  NSS on OES 11 SP2: No; not managed by NSS. Use Linux services instead.
  NCP Volumes on Linux POSIX File Systems: Yes, using Linux services.

Default mount location for NSS volumes
  NSS on OES 11 SP2: /media/nss/volumename
  NCP Volumes on Linux POSIX File Systems: Not applicable.

Interface
  NSS on OES 11 SP2: 64-bit
  NCP Volumes on Linux POSIX File Systems: 64-bit

Character format
  NSS on OES 11 SP2: Unicode
  NCP Volumes on Linux POSIX File Systems: UTF-8
Maximum device size recognized (physical or logical)
  NSS on OES 11 SP2: 8 TB. NSS management tools recognize devices of any size, and support both the GPT and DOS partitioning schemes. For use in an NSS pool, the maximum device sizes supported are 8 TB (GPT) and 2 TB (DOS).
  NCP Volumes on Linux POSIX File Systems: For a 64-bit OS: 2 to 32 TB for Ext2 or Ext3, depending on the block size; 16 TB (minus 1 byte) for Reiser.

Maximum software RAID size (combined total of all member segments)
  NSS on OES 11 SP2: 2 TB. NOTE: This maximum applies only to DOS partitioned devices. If all devices are using GPT, the size limits are removed. Because pools are currently limited to 8 TB, RAID 1 sizes for pool objects are also limited to 8 TB.
  NCP Volumes on Linux POSIX File Systems: See Maximum device size recognized.

Minimum software RAID segment size
  NSS on OES 11 SP2: 12 MB per segment
  NCP Volumes on Linux POSIX File Systems: Depends on the file system.

Maximum partition size
  NSS on OES 11 SP2: 8 TB. Valid range: 12 MB to 8 TB. NOTE: The maximum applies only to GPT partitioned devices. If devices are partitioned using DOS, the size limit is 2 TB.
  NCP Volumes on Linux POSIX File Systems: Up to 16 TB, depending on the file system and block size as noted above.

Maximum number of partitions (logical or physical devices) per pool
  NSS on OES 11 SP2: No practical limit
  NCP Volumes on Linux POSIX File Systems: Not applicable.

Maximum pool size
  NSS on OES 11 SP2: 8 TB (using 4 or more partitions of up to 2 TB each; if a device of 8 TB or larger is partitioned with GPT, you can have an 8 TB pool with a single partition).
  NCP Volumes on Linux POSIX File Systems: Up to the partition size, depending on the file system.

Minimum pool size
  NSS on OES 11 SP2: 12 MB
  NCP Volumes on Linux POSIX File Systems: Not applicable.

Maximum size of a volume
  NSS on OES 11 SP2: Up to 8 TB, depending on the pool size and available space in the pool. Volume quotas can be overbooked. For information, see Section 18.2, “Guidelines for NSS Volumes,” on page 214.
  NCP Volumes on Linux POSIX File Systems: Up to the partition size, depending on the file system.
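The overbooking behavior mentioned for volume quotas can be sketched as follows. This is illustrative logic only, not NSS code, and the function name is invented: the sum of volume quotas in a pool may exceed the pool size, because space is consumed only as data is actually written.

```python
# Illustrative sketch (not NSS code): volume quotas in a pool may be
# "overbooked" -- the combined quotas can exceed the pool's size.

def can_add_volume(pool_size_gb, existing_quotas_gb, new_quota_gb):
    """Return (creation_allowed, overbooked).

    Creation always succeeds; the second value flags whether the
    combined quotas now exceed the physical pool size.
    """
    total = sum(existing_quotas_gb) + new_quota_gb
    overbooked = total > pool_size_gb
    return True, overbooked

# A 100 GB pool with 60 GB + 50 GB volumes already defined can still
# accept a 40 GB quota; the pool is simply overbooked.
ok, overbooked = can_add_volume(100, [60, 50], 40)
print(ok, overbooked)
```

Actual space enforcement happens at write time: once the pool's free space is exhausted, writes fail even if a volume has not reached its quota.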
Maximum file size
  NSS on OES 11 SP2: Up to 8 TB, depending on the volume size and available space in the volume.
  NCP Volumes on Linux POSIX File Systems: 2 GB to 2 TB for Ext2 or Ext3, depending on the block size. Up to 8 TB for Reiser.

Maximum number of files per volume
(In practice, how many files can be managed is limited only by the file browser’s and application’s ability to list and access the files.)
  NSS on OES 11 SP2: Up to 8 trillion (10E12), regardless of how many name spaces are loaded. Up to 4 billion (10E9) files in a single directory.
  NCP Volumes on Linux POSIX File Systems: Up to 8 trillion (10E12), regardless of how many name spaces are loaded.

Maximum number of files open concurrently
  NSS on OES 11 SP2: 1 million (10E6)
  NCP Volumes on Linux POSIX File Systems: Millions (10E6), depending on the file system

Maximum number of volumes per server
  NSS on OES 11 SP2: Unlimited NSS data volumes, but only 255 can be mounted at a time
  NCP Volumes on Linux POSIX File Systems: Unlimited

Time to mount a volume
  NSS on OES 11 SP2: Requires only a few seconds. NSS uses a journaling file system and does not need to scan the entire file system to create a directory entry table (DET) or to load a File Allocation Table (FAT).
  NCP Volumes on Linux POSIX File Systems: Depends on the file system; from a few seconds to a few minutes.

Time to repair a corrupted volume
  NSS on OES 11 SP2: Up to several hours, depending on the volume size.
  NCP Volumes on Linux POSIX File Systems: Up to several hours, depending on the volume size.
F  NSS Nomenclature
This section describes the nomenclature used for key Novell Storage Services media objects in Novell
Open Enterprise Server 11 SP2. This information can help you better understand the nature of error
messages you might receive when using NSS. The table identifies the media object, defines it, and
indicates the version of NetWare or OES where the media object first appeared.
NOTE: All ZLSS (NSS Journaled Storage System) file blocks are 4 KB in size.
Area Seed (first used: OES 2 Linux; not available on NetWare)
  To improve performance for OES 11 SP2, metadata blocks use area seed logic to make sure that related metadata blocks are physically stored near each other. For information about configuring the area size, see Section 27.3.1, “Viewing the Metadata Area Size,” on page 335.

Beast B-Tree (first used: NetWare 5.0)
  The Balanced Tree (B-Tree) that tracks all of a file’s metadata. This includes when the file was created, who created the file, the size of the file, and the location of the file’s data.

Checkpoint (first used: NetWare 5.0)
  Four blocks (one in each Superblock) that track where to start playing the journal if the server crashes. The checkpoint contains the metadata of the Journal.

Directory B-Tree (first used: NetWare 5.0 Support Packs)
  Used to implement an NSS volume’s Directory Quota feature.

Entry ID (EID) (first used: NetWare 6.0)
  An eDirectory entry within the FLAIM database (the underlying database for Directory Services 8.35 and later, including all eDirectory versions). EIDs are 32 bits and are unique for each server. EIDs are used within eDirectory to reference objects on a specific server. They are unique for each object for each server, meaning that the object CN=Admin.O=EMA_CORP potentially has a different EID on server FS1 than on server FS2. This can be demonstrated with dsbrowse.nlm, a tool you can use to examine the local eDirectory database on a NetWare server. If you examine an object on one server and compare it with the same object on another server, the EIDs of the two instances of the object are probably different.

Epoch File Log (EFL) B-Tree (first used: NetWare 6.5)
  Tracks files that change during an administrator-specified interval of time called an epoch. This feature is used by Novell Archive and Version Services.

Globally Unique ID (GUID) (first used: NetWare 5.0)
  A globally unique identifier within eDirectory. The scope of this uniqueness is within one tree, although no actual checking is done to ensure this. GUIDs are 128 bits and are unique for each object. GUIDs allow an object to be referenced no matter which server you are accessing.

Journal (first used: NetWare 5.0)
  The file used to quickly make the file system consistent after a server crash. The journal is sometimes referred to as a zlog.

Logged Pool Data Block (first used: NetWare 6.0)
  The block that tracks information about the pool, including the number of used blocks and salvageable blocks. Holds some of the items found in the zPoolInfo_s portion of the zInfo_s structure.

Logged Volume Data Block (first used: NetWare 5.0)
  The block that tracks the number of files, used blocks, and compressed files of a volume. Holds some of the items found in the zVolumeInfo_s portion of the zInfo_s structure.

Modified File List (MFL) B-Tree (first used: NetWare 6.0)
  Tracks files and folders that have the Archive attribute set by the user. The Archive flag indicates that the file or folder needs to be backed up. NSS uses this list to quickly find files that need to be backed up during scheduled backups. This Archive file-and-folder attribute is unrelated to Novell Archive and Version Services.

Multiple Server Access Protection (MSAP) (first used: NetWare 6.0 Support Packs)
  The block used to reduce accidental use of a pool by more than one server at a time. A single copy of this block is stored in the second Superblock.

Name Tree B-Tree (first used: NetWare 5.0)
  The B-Tree that tracks the directory structure of a volume.

Pool Data Block (first used: NetWare 6.0)
  The block that tracks persistent pool configurable items. For example, a pool’s features are stored here. Holds some of the items found in the zPoolInfo_s portion of the zInfo_s structure.

Purge Log (first used: NetWare 5.0)
  Tracks transactions over an extended period so they can be completed after a crash. For example, the log records file deletes and truncates that need to be completed after a crash.

Purge Tree B-Tree (first used: NetWare 5.0)
  Used to store information about all salvageable files. This tree is used when the file system needs to autopurge files to create free blocks.

Snapshot (first used: NetWare 6.5)
  Used by the Media Manager to track pool snapshots. The object tracks which snapshots exist on a pool and where all blocks of a snapshot are stored. The root of this object resides in the first Superblock. All other blocks are allocated from a file on the internal volume.

Superblock (first used: NetWare 5.0)
  There are four Superblocks of 16 blocks each. The four Superblocks are replicas that reside in four fixed locations within the pool. The Superblock is used by the Checkpoints, Superblock Header, Snapshot, and Multiple Server Access Protection.

Superblock Header (first used: NetWare 5.0)
  Four blocks (copies of each other, one in each Superblock) used to locate all other ZLSS media objects. These are the first blocks that the file system reads when a volume is loaded. Starting with NetWare 6.0, these blocks are read when a pool is loaded.

User B-Tree (first used: NetWare 5.0 Support Packs)
  Used to implement an NSS volume’s User Quota feature. Starting with NetWare 6.0, the B-Tree also stores NetIQ eDirectory information related to User Quota.

Volume Data Block (first used: NetWare 5.0)
  The block that tracks configurable items in NSS volumes, such as the volume’s attributes and the high and low watermarks for salvage. It holds some of the items found in the zVolumeInfo_s portion of the zInfo_s structure.

ZID (first used: NetWare 6.0)
  A numeric ID within the NSS file system, used to reference an object within the object store (also known as the “beast tree”). ZIDs are 64 bits and are unique for each volume. The number of ZIDs is limited to 4 billion (4xE9) because of 32-bit limitations from the client. ZIDs are not reused, so if your processes create and delete lots of files in a volume, the numbering of ZIDs might approach this limit and prevent new files from being created. If this occurs, you should rebuild the pool to renumber the files, referred to as “reZIDing”. For more information, see Section 16.3, “ReZIDing Volumes in an NSS Pool,” on page 189.
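The ZID exhaustion behavior described above can be sketched as follows. This is illustrative logic only, not NSS code; the class and method names are invented for the example:

```python
# Illustrative sketch (not NSS code): ZIDs are 64-bit and never reused,
# but a 32-bit client limitation caps usable ZIDs at about 4 billion.
# When the next ZID would exceed the cap, new files cannot be created
# until the pool is rebuilt to renumber files ("reZIDing").

CLIENT_ZID_LIMIT = 2**32  # ~4 billion, imposed by 32-bit clients

class ZidAllocator:
    def __init__(self, next_zid=1):
        # Monotonically increasing; ZIDs of deleted files are not reused.
        self.next_zid = next_zid

    def allocate(self):
        if self.next_zid >= CLIENT_ZID_LIMIT:
            raise RuntimeError("ZID limit reached; rebuild the pool to reZID")
        zid = self.next_zid
        self.next_zid += 1
        return zid

    def rezid(self):
        """Rebuilding the pool renumbers all files, freeing the ZID range."""
        self.next_zid = 1
```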
G  Documentation Updates
This section contains information about documentation content changes made to the Novell NSS
Administration Guide since the initial release of Novell Open Enterprise Server 11.
This document was updated on the following dates:
 Section G.1, “January 2014 (OES 11 SP2),” on page 461
 Section G.2, “July 2012 (OES 11 SP1),” on page 461
G.1  January 2014 (OES 11 SP2)
Updates were made to the following sections. The changes are explained below.
 Section G.1.1, “What’s New,” on page 461
G.1.1  What’s New

Whole Guide: Updated the guide for OES 11 SP2 and included the following new features:
 “Viewing or Modifying the Effective Rights of a Trustee” on page 254
 “Assigning or Modifying Rights to Files and Folders” on page 254
 Section B.15, “nsschown,” on page 431
G.2  July 2012 (OES 11 SP1)
Updates were made to the following sections. The changes are explained below.
 Section G.2.1, “What’s New,” on page 462
 Section G.2.2, “nssquota,” on page 462
 Section G.2.3, “metamig,” on page 462
 Section G.2.4, “ravview,” on page 462
 Section G.2.5, “Troubleshooting,” on page 462
G.2.1  What’s New

Chapter 2, “What’s New or Changed in NSS,” on page 29: This section is new.

G.2.2  nssquota

Section B.12, “nssquota,” on page 428: This section is updated for the clear option and Volume quota.

G.2.3  metamig

Section B.3, “metamig,” on page 410: This section is updated for the -w, --inputfilefromnetware option and for the -r, --relativepathprefix option.

G.2.4  ravview

Section B.8.5, “Example,” on page 421: The example is updated.

G.2.5  Troubleshooting

Section 29.22, “Debugging of nlvm Results in the Display of Warning Messages in the nlvm_debug.log File,” on page 354: Added.

Section 29.23, “NLVM Pool Move Fails and Deactivates the Pool,” on page 354: Added.