XCubeSAN SANOS 4.0
User’s Manual
Applicable Models:
XS5224D, XS5216D, XS5212D, XS5212S, XS5226D, XS5226S, XS3224D
XS3224S, XS3216D, XS3216S, XS3212D, XS3212S, XS3226D, XS3226S
QSAN Technology, Inc. www.QSAN.com
Copyright
© Copyright 2017 QSAN Technology, Inc. All rights reserved. No part of this document may be reproduced or transmitted without written permission from QSAN Technology, Inc.
April 2017
This edition applies to QSAN XCubeSAN SANOS (SAN Operating System) 4.0. QSAN believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
Trademarks
QSAN, the QSAN logo, XCubeSAN, and QSAN.com are trademarks or registered trademarks of QSAN
Technology, Inc.
Microsoft, Windows, Windows Server, and Hyper-V are trademarks or registered trademarks of
Microsoft Corporation in the United States and/or other countries.
Linux is a trademark of Linus Torvalds in the United States and/or other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Mac and OS X are trademarks of Apple Inc., registered in the U.S. and other countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
VMware, ESXi, and vSphere are registered trademarks or trademarks of VMware, Inc. in the United
States and/or other countries.
Citrix and Xen are registered trademarks or trademarks of Citrix Systems, Inc. in the United States and/or other countries.
Other trademarks and trade names used in this document to refer to either the entities claiming the marks and names or their products are the property of their respective owners.
Notices
This XCubeSAN SANOS 4.0 user’s manual is applicable to the following XCubeSAN models:
(Each model is listed with its controller type and its form factor, bay count, and rack unit.)

XCubeSAN Storage System 4U 19” Rack Mount Models
XS5224D: Dual Controller, LFF 24-disk 4U Chassis
XS3224D: Dual Controller, LFF 24-disk 4U Chassis
XS3224S: Single Controller, LFF 24-disk 4U Chassis

XCubeSAN Storage System 3U 19” Rack Mount Models
XS5216D: Dual Controller, LFF 16-disk 3U Chassis
XS3216D: Dual Controller, LFF 16-disk 3U Chassis
XS3216S: Single Controller, LFF 16-disk 3U Chassis

XCubeSAN Storage System 2U 19” Rack Mount Models
XS5212D: Dual Controller, LFF 12-disk 2U Chassis
XS5212S: Single Controller, LFF 12-disk 2U Chassis
XS3212D: Dual Controller, LFF 12-disk 2U Chassis
XS3212S: Single Controller, LFF 12-disk 2U Chassis
XS5226D: Dual Controller, SFF 26-disk 2U Chassis
XS5226S: Single Controller, SFF 26-disk 2U Chassis
XS3226D: Dual Controller, SFF 26-disk 2U Chassis
XS3226S: Single Controller, SFF 26-disk 2U Chassis
Information contained in this manual has been reviewed for accuracy, but it could include typographical errors or technical inaccuracies. Changes are made to the document periodically; these changes will be incorporated in new editions of the publication. QSAN may make improvements or changes in the products. All features, functionality, and product specifications are subject to change without prior notice or obligation. All statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.
Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.
Table of Contents
Management IP Configuration Planning .......................................................... 16
Operations on Thick Provisioning Pools ........................................................ 133
Operation Limitations during Migration ......................................................... 139
Map a LUN of Fibre Channel Connectivity ...................................................... 154
Operations on Thin Provisioning Pools .......................................................... 170
Add a Disk Group in a Thin Provisioning Pool ................................................ 173
Create a Volume in a Thin Provisioning Pool ................................................. 175
List Volumes and Operations on Volumes ..................................................... 179
System Memory and SSD Cache Capacity ..................................................... 183
Add a Tier (Disk Group) in an Auto Tiering Pool ............................................ 227
Create a Volume in an Auto Tiering Pool ....................................................... 228
List Volumes and Operations on Volumes ..................................................... 232
Transfer from Thick Provisioning Pool to Auto Tiering ................................. 233
Transfer from Thin Provisioning Pool to Auto Tiering ................................... 235
Configure Schedule Local Clone Tasks .......................................................... 256
Configure Schedule Remote Replication Tasks ............................................. 273
Configure MPIO in Remote Replication Task ................................................. 275
Configure MC/S in Remote Replication Task Path ........................................ 281
Local Clone Transfers to Remote Replication ............................................... 284

Tables
Table 10-1 The Relationship between System Memory and SSD Cache Capacity .................... 183
Preface
About This Manual
This manual provides technical guidance for designing and implementing a QSAN XCubeSAN series SAN system. It is intended for system administrators, SAN designers, storage consultants, or anyone who has purchased these products and is familiar with servers and computer networks, network administration, storage system installation and configuration, storage area network management, and the relevant protocols.
Related Documents
The following related documents can be downloaded from the QSAN website.
All XCubeSAN Documents
XCubeSAN QIG (Quick Installation Guide)
XCubeSAN Hardware Owner’s Manual
XCubeSAN Configuration Worksheet
XCubeSAN SANOS 4.0 User’s Manual
Compatibility Matrix
White Papers
Application Notes
Technical Support
Do you have any questions or need help troubleshooting a problem? Please contact QSAN Support; we will reply to you as soon as possible.
Via the Web: https://qsan.com/support
Via Telephone: +886-2-7720-2118 extension 136
(Service hours: 09:30 - 18:00, Monday - Friday, UTC+8)
Via Skype Chat, Skype ID: qsan.support
(Service hours: 09:30 - 02:00, Monday - Friday, UTC+8, Summer time: 09:30 - 01:00)
Via Email: [email protected]
Information, Tip and Caution
This manual uses the following symbols to draw attention to important safety and operational information.
INFORMATION:
INFORMATION provides useful knowledge, definition, or terminology for reference.
TIP:
TIP provides helpful suggestions for performing tasks more effectively.
CAUTION:
CAUTION indicates that failure to take a specified action could result in damage to the system.
Conventions
The following table describes the typographic conventions used in this manual.
Bold: Indicates text on a window, other than the window title, including menus, menu options, buttons, fields, and labels. Example: Click the OK button.
<Italic>: Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: copy <source-file> <target-file>.
[ ] square brackets: Indicates optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.
{ } braces: Indicates required or expected values. Example: { a | b } indicates that you must choose either a or b.
| vertical bar: Indicates that you have a choice between two or more options or arguments.
/ slash: Indicates all options or arguments.
underline: Indicates the default value. Example: [ a | b ]
1. SANOS Overview
SANOS (SAN Operating System) is QSAN’s proprietary SAN operating system; it allows you to manage, monitor, and analyze the configuration and performance of your XCubeSAN series SAN storage system. This chapter provides an overview of the SANOS functionality and includes a brief explanation of storage terminology to help you become familiar with the storage technologies used by the XCubeSAN system.
1.1. Introduction to SANOS
SANOS is equipped with a refreshingly simple web-based GUI and is easily deployable into any infrastructure. Based on the Linux kernel, the 64-bit, in-house developed SANOS delivers comprehensive storage functionality and is suitable for primary or secondary storage for SMB to entry-level enterprise businesses. The following sections introduce the SANOS system and storage pool architecture along with its enterprise-grade storage features.
1.1.1. SANOS System Architecture
SANOS supports Dual Active (Active/Active) controller system architecture with high availability. Figure 1-1 shows the SANOS system architecture with dual controller.
Figure 1-1 SANOS System Architecture
Figure 1-2 shows the SANOS system architecture with a single controller configuration. The data path runs back and forth between the host interfaces and the disk drives via the backplane.
The LVM (Logical Volume Management) layer is the core that handles this data flow between them.
It provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. In particular, a volume manager can concatenate, stripe together, or otherwise combine partitions into larger virtual ones that administrators can resize or move, potentially without interrupting system use. It may also rely on the system resource manager to arrange processor time slots and scheduled tasks.
Figure 1-2 SANOS System Architecture with Single Controller Configuration
Based on LVM technology, SANOS builds the features of auto tiering, thin provisioning, and SSD cache, as well as the data backup functions of snapshot, local clone, and remote replication. On top of LVM, the in-house developed QSOE (QSAN Storage Optimization Engine) increases IOPS performance and overall system throughput. SANOS also supports several critical data services such as MPIO (Multi-Path I/O), MC/S (Multiple Connections per Session), Microsoft VSS (Volume Shadow Copy Service), Microsoft ODX (Offloaded Data Transfer), and VMware VAAI (VMware vSphere Storage APIs for Array Integration). In the periphery, a management service daemon provides the user interface operations, and the C2F (Cache to Flash) mechanism, which includes a BBM (Battery Backup Module) / SCM (SuperCap Module) and an M.2 flash module, protects cache data from power loss. These key enterprise storage features all come together in a single box.
Under dual controller operation, the HAC (High Availability Control) module uses a heartbeat mechanism to detect whether the other controller is alive. Cache mirroring synchronizes the memory data of both controllers, including the status and storage configuration. When one controller fails, the other controller can seamlessly take over all of the failed controller's tasks thanks to this cache synchronization. In addition, the dual controller system experiences zero downtime during firmware upgrades.
1.1.2. SANOS Storage Pool Architecture
QSAN storage pool supports a variety of 3.5”/2.5” SAS/NL-SAS HDD and 2.5” SAS/SATA
SSD flash drives. Figure 1-3 shows the storage pool architecture. Several disk drives are combined together to form a “disk group” with RAID protection. Then several disk groups can be combined to form a storage pool. A volume (virtual disk) is then created out of the storage pool and served to application servers over either iSCSI or Fibre Channel connections.
Figure 1-3 SANOS Storage Pool Architecture
If the local disk drives are not enough, QSAN expansion enclosures including disk drives can be added to increase storage capacity. Hot spares stand by in case any disk drives in the pools fail. Based on snapshot technology, data backup features such as local clone and remote replication are also provided.
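The disk group, pool, and volume relationship described above can be summarized in a minimal sketch. This is an illustrative Python model only; the class and attribute names are our own and do not correspond to any SANOS API. The example names (PL1, V1-PL1) follow the configuration example later in this manual.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DiskGroup:
    raid_level: str                  # e.g. "RAID 5"
    disk_capacities_gb: List[int]    # capacities of the member disk drives

@dataclass
class Pool:
    name: str
    disk_groups: List[DiskGroup] = field(default_factory=list)

@dataclass
class Volume:
    name: str
    pool: Pool
    capacity_gb: int
    protocol: str                    # served over "iSCSI" or "Fibre Channel"

# Several disks form a disk group with RAID protection; disk groups are combined
# into a pool; a volume is created out of the pool and served to application servers.
dg = DiskGroup(raid_level="RAID 5", disk_capacities_gb=[600] * 4)
pool = Pool(name="PL1", disk_groups=[dg])
vol = Volume(name="V1-PL1", pool=pool, capacity_gb=1024, protocol="iSCSI")
```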
1.1.3. SANOS 4.0 Functionality List
SANOS 4.0 provides the following functionality for administrator management.
Figure 1-4 SANOS 4.0 Desktop Panel
System Management
Change system name, date and time.
Change admin’s password, user’s password, and configure login options.
Configure management IP address, DNS, and service ports.
Support boot management including auto shutdown, wake-on-LAN, and wake-on-SAS.
Support network UPS via SNMP.
Monitor cache-to-flash memory protection status. (BBM (Battery Backup Module), SCM (Super Capacitor Module), and flash module are optional add-ons.)
Configure alert notifications through email, syslog server, or SNMP traps.
Obtain system information and download service package.
Update firmware of head unit or enclosure unit(s).
Change operation mode of single or dual controller.
Blink UID (Unique Identifier) LEDs for locating the storage arrays.
System reset to default, configuration backup, and volume restoration for maintenance usage.
System reboot or shutdown.
Host Connectivity
Obtain host connectivity information.
Configure iSCSI connectivity with IP address, link aggregation, VLAN (Virtual LAN) ID and jumbo frame.
Setup entity name and iSNS (Internet Storage Name Service).
Configure iSCSI target with CHAP (Challenge-Handshake Authentication Protocol) authentication.
List and disconnect iSCSI sessions.
Configure fibre channel connectivity with link speed and topology.
Storage Management
Support RAID pool with RAID level 0, 1, 3, 5, 6, 0+1, 10, 30, 50, 60, and N-way mirror.
Support thick provisioning pools and online RAID pool migration.
Support thin provisioning pool with space reclamation.
Support 3-tier auto tiering with scheduled relocation. (auto tiering function is optional)
Support SSD read cache or read-write cache to improve performance. (SSD cache function is optional)
Support online storage pool capacity expansion and volume capacity extension.
Configure disk properties with disk write cache, disk read-ahead, command queuing, and disk standby.
Configure volume properties with background I/O priority, volume write-back cache, and video editing mode for enhanced performance.
Support access control for LUN mapping.
Support global, local, and dedicated hot spares for pool.
Support fast RAID rebuild.
Support disk drive health check and S.M.A.R.T attributes.
Support hard drive firmware batch update.
Support pool parity check and media scan for disk scrubbing.
Support pool activation and deactivation for disk roaming.
Data Backup
Support writable snapshots with manual or scheduled tasks.
Support volume cloning for local replication.
Support remote replication with traffic shaping for dynamic bandwidth control.
Virtualization Integration
Seamlessly integrates with popular hypervisors including VMware vSphere, Microsoft Hyper-V, and Citrix XenServer.
Monitoring
View event logs with different event levels and download event logs.
Monitor enclosure status of head and enclosure units.
Monitor storage array performance.
1.2. Terminology
In this section, we introduce the terms that are used for the storage system throughout this manual.
RAID
RAID is the abbreviation of Redundant Array of Independent Disks. There are different RAID levels with different degrees of data protection, data availability, and performance to the host environment.
Pools
A storage pool is a collection of disk drives. One pool consists of a set of volumes and owns one RAID level attribute.
Volumes
Each pool can be divided into several volumes. The volumes from one pool have the same RAID level, but may have different capacities.
Fast RAID Rebuild
When executing a rebuild, the Fast RAID Rebuild feature skips any portion of the volume where no write changes have occurred; it focuses only on the parts that have changed.
LUN
A LUN (Logical Unit Number) is a unique identifier for designating an individual or collection of physical or virtual storage devices that execute I/O commands with a host computer, as defined by the SCSI (Small Computer System Interface) standard.
iSCSI
iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer System
Interface) commands and data in TCP/IP packets for linking storage devices with servers over common IP infrastructures.
Fibre Channel
Fibre channel is an extremely fast system interface. It was initially developed for use primarily in the supercomputing field, but has become the standard connection type for storage area networks (SAN) in enterprise storage.
SAS
Serial-attached SCSI offers advantages over older parallel technologies. The cables are thinner, and the connectors are less bulky. Serial data transfer allows the use of longer cables than parallel data connections.
Thick Provisioning
With thick provisioning, the physical disk drive space is allocated upon creation and is equal to the user capacity seen by the host server. It is also called fat provisioning.
Thin Provisioning
Thin provisioning is allocated on-demand and can be less than the user capacity seen by the host server. It involves using virtualization technology to give the appearance of having more physical resources than are actually available.
Auto Tiering
Auto Tiering is the automated progression or demotion of data across different tiers (types) of storage devices and media. The movement of data takes place in an automated way with the help of software and is assigned to the related media according to performance and capacity requirements. It also includes the ability to define rules and policies that dictate if and when data can be moved between the tiers, and in many cases provides the ability to pin data to tiers permanently or for specific periods of time.
SSD Cache
Smart Response Technology (also known as SSD cache) allows an SSD to function as cache for an HDD volume. It is a secondary cache that improves performance by keeping frequently accessed data on SSDs, where it is read and written far more quickly than on the HDD volume.
Snapshot
A volume snapshot is the state of a volume at a particular point in time.
Local Clone
The local clone function creates another physical copy of the data on the original volume.
Remote Replication
The remote replication function protects against primary site failure by replicating data to remote sites.
2. Prepare for Installation
Preparation for installation is an important task; it will help you plan your system configuration and install your system smoothly. This chapter describes the configuration planning steps and provides a configuration worksheet example for reference.
2.1. Prepare for Configuration Planning
Before installing your SAN storage system, it is highly recommended that you prepare for the installation using the worksheet. Sticking to the worksheet will help you set up and initialize the storage system. Please refer to Table 2-1 for the content of the worksheet. You can download this worksheet here: XCubeSAN Configuration Worksheet.
Table 2-1 Configuration Worksheet
(Each item below has a blank Value field in the worksheet for you to fill in.)

1. Initial Configuration
System Name: The maximum length of the system name is 32 characters. Valid characters are [ A~Z | a~z | 0~9 | -_ ].
Admin Password: The maximum length of the password is 12 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#$%^&*_-+=`|\(){}[]:;”’<>,.?/ ].
NTP Server: FQDN (Fully Qualified Domain Name) or IP address of the NTP (Network Time Protocol) server.
Time Zone: Depending on your location.

2. Management Port Setting
Management Port IP Address on Controller 1: IP address, subnet mask, and gateway of the management port on controller 1.
DNS Server Address: IP address of the DNS (Domain Name System) server.
Management Port IP Address on Controller 2 (optional): IP address, subnet mask, and gateway of the management port on controller 2.

3. Notification Setting
Email-from Address: Email-from address used to send event notifications.
Email-to Addresses: Email-to addresses that receive event notifications (Email-to Address 1, 2, 3).
SMTP Server: Network name or IP address of the SMTP (Simple Mail Transfer Protocol) server.
Syslog Server (optional): FQDN or IP address of the syslog server.
SNMP Trap Addresses (optional): FQDNs or IP addresses of SNMP (Simple Network Management Protocol) traps (SNMP Trap Address 1, 2, 3).

4. iSCSI Port Configuration
Onboard iSCSI Port IP Addresses: IP address, subnet mask, and gateway of the onboard 2 x 10GBASE-T iSCSI (RJ45) ports (Onboard LAN1 and LAN2 on controller 1 and controller 2).
Slot 1 iSCSI Port IP Addresses (optional): IP address, subnet mask, and gateway of the iSCSI ports on a 4-port 10GbE iSCSI Host Card (SFP+) or 4-port 1GBASE-T iSCSI Host Card (RJ45) (Slot 1 LAN1 ~ LAN4 on controller 1 and controller 2).
Slot 2 iSCSI Port IP Addresses (optional): IP address, subnet mask, and gateway of the iSCSI ports on a 4-port 10GbE iSCSI Host Card (SFP+) or 4-port 1GBASE-T iSCSI Host Card (RJ45) (Slot 2 LAN1 ~ LAN4 on controller 1 and controller 2).
Entity Name: The entity name is for a device or gateway that is accessible from the network. The maximum length of the entity name is 200 characters. Valid characters are [ a~z | 0~9 | -.: ].
iSNS IP Address (optional): IP address of the iSNS (Internet Storage Name Server) server.
CHAP Username (optional): CHAP (Challenge-Handshake Authentication Protocol) username. The maximum length of the username is 223 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#%^&*_-+=|(){}[]:;<>.?/ ].
CHAP Password (optional): CHAP (Challenge-Handshake Authentication Protocol) password. The length of the password is between 12 and 16 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#$%^&*_-+=`|\(){}[]:;”’<>,.?/ ].

5. Fibre Channel Port Configuration
Slot 1 Fibre Channel (optional): Link speed and topology of the fibre channel ports on a 4-port 16Gb Fibre Channel Host Card (SFP+) (Slot 1 FC1 ~ FC4 on controller 1 and controller 2). Topology support: FC-AL, point-to-point, Fabric (16Gb Fibre Channel only supports point-to-point topology).

6. Pool Configuration
Pool Type: Thick Provisioning, Thin Provisioning, or Auto Tiering (Thin Provisioning Enabled).
Pool Name: The maximum length of the pool name is 16 characters. Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
Disks: Disk type, disk quantity, and the capacity (SSD, SAS, NL-SAS).
RAID Level: RAID level 0, 1, 3, 5, 6, 0+1, 10, 30, 50, 60, or N-way mirror.
Raw Capacity: Sum of disk capacity.
Estimate Capacity: Estimated capacity according to the RAID level.

7. Volume Configuration
Volume Name: The maximum length of the volume name is 32 characters. Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
Capacity: Required capacity of the volume.
Volume Type: RAID Volume or Backup Volume.

8. LUN Mapping Configuration
Protocol: iSCSI or FCP.
Volume Name: Select one of the created volumes.
Allowed Hosts: iSCSI IQN or Fibre Channel WWNN for access control. Wildcard (*) for access by all hosts.
Target: iSCSI Target or Fibre Channel Target.
LUN: Support LUN (Logical Unit Number) from 0 to 255.
Permission: Read-only or Read-write.

9. SSD Cache Configuration
SSD Cache Pool Name: The maximum length of the pool name is 16 characters. Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
Cache Type: Read Cache (NRAID+) or Read-write Cache (RAID 1 or NRAID 1+).
I/O Type: Database, File System, or Web Service.
SSDs: SSD quantity and the capacity.
Raw Capacity: Sum of disk capacity.

10. Snapshot Configuration
Volume Name: Select one of the created volumes.
Snapshot Space: Reserved snapshot space for the volume.
Snapshot Name: The maximum length of the snapshot name is 32 characters. Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
Schedule Snapshots (optional): Define the cycle of snapshots.

11. Local Clone Configuration
Source Volume Name: Select one of the created volumes as the source.
Source Volume Capacity: Check the capacity of the source volume.
Target Volume Name: Select one of the created volumes as the target.
Target Volume Capacity: Check the capacity of the target volume.
Schedule Local Clones (optional): Define the cycle of local clones.

12. Remote Replication Configuration
Source Volume Name: Select one of the created volumes as the source.
Source Volume Capacity: Check the capacity of the source volume.
Source iSCSI Port: iSCSI port of the source unit. It can be auto or a dedicated iSCSI port.
Target iSCSI Port IP Addresses: iSCSI port IP addresses of the target unit (controller 1 and, optionally, controller 2).
Target CHAP Username (optional): CHAP (Challenge-Handshake Authentication Protocol) username. The maximum length of the username is 223 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#%^&*_-+=|(){}[]:;<>.?/ ].
Target CHAP Password (optional): CHAP (Challenge-Handshake Authentication Protocol) password. The length of the password is between 12 and 16 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#$%^&*_-+=`|\(){}[]:;”’<>,.?/ ].
Target Volume Name: Select one of the created volumes as the target.
Target Volume Capacity: Check the capacity of the target volume.
Schedule Remote Replications (optional): Define the cycle of remote replications.
Traffic Shaping for Peak Hour (optional): Limit the transfer rate at peak hours.
Traffic Shaping for Off-peak Hour (optional): Limit the transfer rate at off-peak hours.
Off-peak Hour (optional): Define the off-peak hours.
Now please follow the steps and fill the worksheet out for initial system configuration.
2.1.1. Configuration Planning for Initial Setup
In Table 2-1, the items on part 1, initial configuration, are listed for the initial setup. The information in the following checklist is helpful before the initial configuration is performed.
System name allows a maximum of 32 characters. Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
Administrator password allows a maximum of 12 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#$%^&*_-+=`|\(){}[]:;”’<>,.?/ ]. (A small validation sketch follows this checklist.)
The date and time can be entered manually, but to keep the clock synchronized, we recommend using an NTP (Network Time Protocol) service.
Enter the time zone based on your geographical location.
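As a quick aid when filling in part 1 of the worksheet, the length and character constraints above can be expressed as a small validation sketch. This is not a SANOS tool, only a minimal Python illustration of the limits listed in this checklist (32-character system name, 12-character admin password).

```python
import re
import string

# System name: up to 32 characters from [ A~Z | a~z | 0~9 | -_<> ].
SYSTEM_NAME_RE = re.compile(r'^[A-Za-z0-9\-_<>]{1,32}$')

# Admin password: up to 12 characters; letters, digits, and the listed symbols.
PASSWORD_CHARS = set(string.ascii_letters + string.digits +
                     "~!@#$%^&*_-+=`|\\(){}[]:;\"'<>,.?/")

def valid_system_name(name: str) -> bool:
    return bool(SYSTEM_NAME_RE.match(name))

def valid_admin_password(password: str) -> bool:
    return 1 <= len(password) <= 12 and set(password) <= PASSWORD_CHARS

print(valid_system_name("XCubeSAN"))     # True
print(valid_admin_password("1234"))      # True (the factory default)
```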
2.1.2. Management IP Configuration Planning
With your network administrator, determine the IP addresses and network parameters you plan to use with the storage system, and record the information in part 2, management port setting, in Table 2-1. You can manage the storage system through a dedicated management port, at least on controller 1. This port must be on the same subnet as the host you use to initialize the system. After initialization, any host on the same network with a supported browser can manage the system through the management port.
It is recommended to use a static IP address for the management port. Please prepare a list of the static IP addresses, the subnet mask, and the default gateway.
A DNS (Domain Name Service) server address provides a means to translate an FQDN (Fully Qualified Domain Name) to an IP address. Some notification services need the DNS setting.
For dual controller configurations, the management port IP address on controller 2 is optional. If the dual management ports setting is enabled, the management ports of controller 1 and controller 2 each have their own IP address, and both are active. Otherwise, only the management port of the master controller is active, and the other one is standby. The management port fails over to the slave controller when the master controller goes offline, whether planned or unplanned.
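A minimal sketch of the subnet requirement above, using Python's standard ipaddress module. The management port address follows the worksheet example in Table 2-3; the administrator computer address (192.168.1.10) is a hypothetical value used only for illustration.

```python
import ipaddress

# The management port and the computer used to initialize the system must be
# on the same subnet. Substitute the values planned with your network administrator.
management_port = ipaddress.ip_interface("192.168.1.234/255.255.255.0")
admin_computer  = ipaddress.ip_interface("192.168.1.10/255.255.255.0")   # hypothetical

if admin_computer.network == management_port.network:
    print("OK: both addresses are on", management_port.network)
else:
    print("Check the plan: the initializing host cannot reach the management port")
```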
2.1.3. Notification Configuration Planning
In part 3, notification setting, in Table 2-1, prepare to configure the notifications. They are helpful for monitoring the system health.
SMTP server and email addresses to direct notification alerts to the IT administrator and any alternate or backup personnel.
Syslog server address to log monitored events.
SNMP (Simple Network Management Protocol) traps to send system event logs to an SNMP trap agent.
2.1.4. iSCSI Configuration Planning
Although a SAN is a private network environment, the IP addresses of the iSCSI data ports should be planned, too. Optionally, CHAP security information, including the CHAP username and password, should be prepared.
For an iSCSI dual-controller system, we recommend installing an iSNS server on the same storage area network. For better use of failover, the host must log on to the target twice (both controller 1 and controller 2), and then MPIO should be set up automatically. For more advanced deployments, all the connections among hosts (with clustering), switches, and the dual controllers are recommended to be redundant, as shown below.
Figure 2-1 Dual-controller Topology
If you are using the onboard iSCSI LAN ports, please fill in the onboard iSCSI port IP addresses in part 4, iSCSI port configuration. Optionally, if you have an iSCSI host card in slot 1 or slot 2, please fill in its iSCSI port IP addresses.
INFORMATION: iSCSI Host Cards at Slot 1 (PCIe Gen3 x 8)
HQ-10G4S2, 4 x 10GbE iSCSI (SFP+) ports
HQ-10G2T, 2 x 10GbE iSCSI (RJ45) ports
HQ-01G4T, 4 x 1GbE iSCSI (RJ45) ports
iSCSI Host Cards at Slot 2 (PCIe Gen2 x 4)
HQ-10G4S2, 4 x 10GbE iSCSI (SFP+) ports (Slot 2 provides 20Gb bandwidth)
HQ-10G2T, 2 x 10GbE iSCSI (RJ45) ports
HQ-01G4T, 4 x 1GbE iSCSI (RJ45) ports
The entity name is for a device or gateway that is accessible from the network.
An iSNS (Internet Storage Name Service) server uses the iSNS protocol to maintain information about active iSCSI devices on the network, including their IP addresses, iSCSI node names, and iSCSI targets. The iSNS protocol enables automated discovery and management of iSCSI devices on an IP storage network. An iSCSI initiator can query the iSNS server to discover iSCSI target devices.
CHAP (Challenge-Handshake Authentication Protocol) authenticates a user or network host to an authenticating entity. CHAP requires that both the client and server know the plaintext of the password/secret, although it is never sent over the network. For the CHAP username, the maximum length is 223 characters. For the CHAP password, the length range is from 12 to 16 characters.
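The CHAP length constraints above can be checked with a couple of helper functions. This is a minimal, hedged sketch in Python, not part of SANOS; the example values (chap1, 123456789012) come from the configuration example in Table 2-3.

```python
def valid_chap_username(username: str) -> bool:
    # CHAP username: up to 223 characters.
    return 1 <= len(username) <= 223

def valid_chap_secret(secret: str) -> bool:
    # CHAP password/secret: 12 to 16 characters.
    return 12 <= len(secret) <= 16

print(valid_chap_username("chap1"))        # True
print(valid_chap_secret("123456789012"))   # True (12 characters, as in the example)
```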
2.1.5. Fibre Channel Configuration Planning
Optionally, if you have a fibre channel host card, please fill in slot 1 fibre channel in part 5, fibre channel port configuration, in Table 2-1. You have to be familiar with the fibre channel protocol. Please consider the fibre channel topology, either direct-attached to an HBA or connected through a fibre channel switch.
INFORMATION:
Fibre Channel Host Card at Slot 1 (PCIe Gen3 x 8)
HQ-16F4S2, 4 x 16Gb FC (SFP+) ports
Set the link speed to the same value as the HBA or fibre channel switch, or set it to auto if you are not sure of the link speed.
Set the topology to the same value as the HBA or fibre channel switch. Topology support: FC-AL, point-to-point, Fabric (16Gb fibre channel only supports point-to-point topology).
2.1.6. Storage Configuration Planning
This is the most important part of using the SAN storage system effectively. Depending on your application, estimated volume capacity, and disk failure risk, you should have a well-thought-out storage plan. Complete all storage configuration planning jobs in part 6, pool configuration, part 7, volume configuration, part 8, LUN mapping configuration, and part 9, SSD cache configuration.
Planning the Disks
To plan the disk drives, determine:
The cost/performance ratio of the disk drive type.
Estimate the total required capacity of the system.
Whether the disk drives are SSDs.
Quantity of spare disks. Recommended hot-spare-to-active-drive ratios are 1 in 30 for SAS drives and 1 in 15 for NL-SAS drives (a small sketch follows this list).
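A minimal sketch of the spare-disk recommendation above. This is illustrative Python only; it rounds up so that any partially filled group of active drives still gets a spare.

```python
import math

def recommended_hot_spares(active_sas: int, active_nl_sas: int) -> int:
    # Recommended ratios from the planning checklist:
    # roughly 1 spare per 30 active SAS drives and 1 per 15 active NL-SAS drives.
    return math.ceil(active_sas / 30) + math.ceil(active_nl_sas / 15)

print(recommended_hot_spares(active_sas=60, active_nl_sas=30))  # 2 + 2 = 4
```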
Planning the Pools
The necessary activities are as follows; please fill in part 6, pool configuration, in Table 2-1.
The pool name; the maximum length of the name is 16 characters.
According to the pool and disk drive characteristics, such as total capacity, budget, performance, and reliability, determine a pool type (thick provisioning, thin provisioning, or auto tiering) and a RAID level to use. For example, create one pool grouped by RAID 10 for performance and another storage pool with RAID 5 to achieve the desired results (see the capacity sketch after this list).
Plan to extend the capacity of the storage pool in the future.
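When weighing RAID levels, a rough usable-capacity estimate helps. The sketch below uses simplified textbook formulas, not the exact SANOS calculation; it reproduces the example in Table 2-3, where each 4-disk RAID 5 tier keeps three disks' worth of usable space.

```python
def estimated_capacity_gb(raid_level: str, disk_count: int, disk_capacity_gb: int) -> float:
    """Rough usable-capacity estimate for a single disk group.

    Simplified formulas only; actual SANOS estimates may differ slightly
    because of metadata and rounding.
    """
    usable_disks = {
        "RAID 0": disk_count,
        "RAID 1": disk_count / 2,
        "RAID 5": disk_count - 1,
        "RAID 6": disk_count - 2,
        "RAID 10": disk_count / 2,
    }[raid_level]
    return usable_disks * disk_capacity_gb

# The Table 2-3 example: each 4-disk RAID 5 tier keeps 3 disks of usable space.
print(estimated_capacity_gb("RAID 5", 4, 4000))   # 12000 GB from 4 x 4TB NL-SAS
print(estimated_capacity_gb("RAID 5", 4, 600))    # 1800 GB from 4 x 600GB SAS
```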
Planning the Volumes
A volume is a member of the pool. Determine the following items and fill in part 7, volume configuration, in Table 2-1.
The volume name; the maximum length of the name is 32 characters.
The volume capacity.
Plan the volume capacity for local clone or remote replication. Set it as a RAID volume for normal RAID volume usage, or set it as a backup volume for the backup target of local clone or remote replication.
Planning the LUNs
A LUN is a unique identifier to which a volume is mapped in order to differentiate among separate devices. At the same time, there are access controls by allowed hosts, target, LUN (logical unit number), and read/write permission. Please fill in part 8, LUN mapping configuration, in Table 2-1.
Prepare an allowed host list for access control. The entries are IQNs (iSCSI Qualified Names) or Fibre Channel WWNNs (World Wide Node Names). Set it to the wildcard (*) for access by all hosts.
Select an iSCSI target for the iSCSI network portal, or an FC target.
LUNs from 0 to 255 are supported.
Usually, set the volume with read-write permission. For read-only purposes, set it as read-only (a small mapping sketch follows this list).
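The planning items above can be captured in a simple record. The following Python dataclass is illustrative only and is not a SANOS data structure; it just encodes the worksheet fields and the 0-255 LUN range, using the example mapping from Table 2-3.

```python
from dataclasses import dataclass

@dataclass
class LunMapping:
    protocol: str        # "iSCSI" or "FCP"
    volume_name: str
    allowed_hosts: str   # IQN/WWNN list, or "*" for all hosts
    target: str
    lun: int             # 0 to 255
    permission: str      # "Read-write" or "Read-only"

    def __post_init__(self):
        if not 0 <= self.lun <= 255:
            raise ValueError("LUN must be between 0 and 255")

# The worksheet example: volume V1-PL1 mapped to LUN 0, open to all hosts.
mapping = LunMapping("iSCSI", "V1-PL1", "*", "0", 0, "Read-write")
```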
Planning the SSD Cache
Optionally, preparing some SSDs will enhance the system performance. Please fill in part 9, SSD cache configuration, in Table 2-1.
For SSD cache planning, the capacity of a read cache is the sum of the SSDs, but the capacity of a read-write cache is half of all the SSDs, because it needs RAID protection for the write buffer (see the sketch below).
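A minimal sketch of the cache-capacity rule above, assuming the read-write case is simply half of the raw SSD capacity. The 2 x 400GB read-cache example matches the worksheet example in Table 2-3.

```python
def ssd_cache_capacity_gb(ssd_capacities_gb, cache_type: str) -> float:
    total = sum(ssd_capacities_gb)
    if cache_type == "read":
        return total            # read cache: the sum of the SSDs
    # Read-write cache: roughly half, because the write buffer is RAID protected.
    return total / 2

# The worksheet example uses 2 x 400GB SSDs as a read cache: 800GB of cache.
print(ssd_cache_capacity_gb([400, 400], "read"))        # 800
print(ssd_cache_capacity_gb([400, 400], "read-write"))  # 400.0
```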
2.1.7. Backup Configuration Planning
A backup plan is also important for disaster recovery. Complete the backup configuration planning jobs in part 10, snapshot configuration, part 11, local clone configuration, and part 12, remote replication configuration.
Planning the Snapshots
To plan the snapshots, you have to reserve some snapshot spaces from the pool. Normally, we suggest reserving 20% of the volume capacity as a minimum.
Select a volume and reserve its snapshot space.
Plan the schedule of snapshots and determine the cycle of taking snapshots. It can be hourly, daily, or weekly (a small sketch follows this list).
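The 20% reservation suggested above is easy to compute. The sketch below is only an illustration of that rule of thumb; it reproduces the 1.6TB reserved for the 8TB volume V1-PL1 in Table 2-3.

```python
def minimum_snapshot_space_tb(volume_capacity_tb: float, ratio: float = 0.2) -> float:
    # Reserve at least 20% of the volume capacity for snapshots (suggested minimum).
    return volume_capacity_tb * ratio

print(minimum_snapshot_space_tb(8.0))   # 1.6, matching the 1.6TB reserved for V1-PL1
```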
Planning the Local Clones
To plan the local clones, you have to select a source volume and prepare a target volume to which the source is cloned.
Select a source volume and check its capacity.
Prepare a target volume and set it as a backup volume. The capacity of the target volume should be larger than the source one.
Plan the schedule of local clones and determine the cycle of executing clone tasks. It can be hourly, daily, or weekly.
Planning the Remote Replications
To plan remote replications, in addition to selecting a source volume in the source unit and preparing a target volume in the target unit that the source replicates to, you have to prepare the information of the iSCSI port IP addresses used to transfer the data. In addition, according to the network bandwidth, determine the transfer rates at peak and off-peak hours.
Select a source volume and check its capacity in the source unit.
Select a source iSCSI port to send the replication data.
Prepare the information of the iSCSI port IP addresses in the target unit to receive the replication data. Also prepare the CHAP username and password if CHAP authentication is enabled on the target unit.
Prepare a target volume and set it as a backup volume in the target unit. The capacity of the target volume should be larger than the source one.
Plan the schedule of remote replications and determine the cycle of executing replication tasks. It can be hourly, daily, or weekly.
Define the off-peak hours and plan the transfer rates at peak and off-peak hours (a small sketch follows this list).
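A minimal sketch of choosing a transfer-rate limit from an off-peak definition. The window and rates (Mon-Fri 22:00-06:59 plus weekends, 100MB peak / 500MB off-peak) are taken from the configuration example in Table 2-3; this is an illustration, not a SANOS scheduler.

```python
from datetime import datetime

def replication_rate_limit_mb(now: datetime) -> int:
    """Pick a transfer-rate limit following the Table 2-3 example:
    peak hours are shaped to 100MB, off-peak hours to 500MB.
    Off-peak: Mon-Fri 22:00-06:59 and all day Saturday/Sunday."""
    weekend = now.weekday() >= 5                 # 5 = Saturday, 6 = Sunday
    night = now.hour >= 22 or now.hour < 7
    return 500 if (weekend or night) else 100

print(replication_rate_limit_mb(datetime(2017, 4, 3, 23, 30)))  # Monday night -> 500
print(replication_rate_limit_mb(datetime(2017, 4, 3, 14, 0)))   # Monday afternoon -> 100
```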
2.2. System Parameters
In this section, we list all of the system parameters, which will help you understand and plan the system configuration. The following table shows the maximum quantity of each item you can use.
Table 2-2 System Parameters

iSCSI Parameters
Maximum iSCSI Target Quantity per System: 256
Maximum Initiator Address Quantity per System: 256
Maximum Host Quantity for Single Controller System: 512
Maximum Host Quantity for Dual Controller System: 1,024
Maximum iSCSI Session Quantity for Single Controller: 1,024
Maximum iSCSI Session Quantity for Dual Controller: 2,048
Maximum iSCSI Connection Quantity for Single Controller: 4,096
Maximum iSCSI Connection Quantity for Dual Controller: 8,192
Maximum CHAP Accounts per System: 64

Fibre Channel Parameters
Maximum Host Quantity for Single Controller: 256
Maximum Host Quantity for Dual Controller: 512

Enclosure and Disk Parameters
Maximum Enclosure Quantity in a System: 10
Maximum Disk Quantity in a System: 446

Thick Provisioning Pool Parameters
Maximum Disk Drive Quantity in a Thick Provisioning Pool (Include Dedicated Spares): 64
Maximum Pool Quantity per System: 64
Maximum Dedicated Spare Quantity in a Pool: 8

Thin Provisioning Pool Parameters
Maximum Disk Group Quantity in a Thin Provisioning Pool: 32
Maximum Disk Drive Quantity in a Disk Group: 8
Maximum Disk Drive Quantity in a Thin Provisioning Pool: 256 (= 32 x 8)
Maximum Capacity of a Disk Group: 64TB
Maximum Thin Provisioning Pool Capacity per Pool: 256TB
Maximum Thin Provisioning Pool Capacity per System: 1,024TB
Provisioning Granularity: 1GB

Auto Tiering Pool Parameters
Maximum Tiers: 3
Maximum Disk Group Quantity in an Auto Tiering Pool: 32
Maximum Disk Drive Quantity in a Disk Group: 8
Maximum Disk Drive Quantity in an Auto Tiering Pool: 256 (= 32 x 8)
Maximum Capacity of a Disk Group: 64TB
Maximum Auto Tiering Pool Capacity per Pool: 256TB
Maximum Auto Tiering Pool Capacity per System: 1,024TB

Volume Parameters
Maximum Volume Quantity in a Pool: 96
Maximum Volume Quantity per System (include Snapshot Volumes): 4,096
Maximum Host Number per Volume: 16
Maximum Volume Capacity in Thick Provisioning Pool: 640TB
Maximum Volume Capacity in Thin Provisioning/Auto Tiering Pool: 256TB

LUN Parameters
Maximum Quantity of LUN: 4,096

SSD Cache Parameters
Maximum SSD Cache Pool Quantity per System (either dual controller or single controller): 4
Maximum SSD Quantity in an SSD Cache Pool: 8
Maximum Capacity of an SSD Cache Pool: 32TB
Maximum Quantity of Volume Shared in an SSD Cache Pool: 32
Maximum Dedicated Spare SSD Quantity in an SSD Cache Pool: 4

Snapshot Parameters
Maximum Snapshot Quantity per Volume: 64
Maximum Volume Quantity for Snapshot: 64
Maximum Snapshot Quantity per System: 4,096 (= 64 x 64)
Maximum Snapshot Space Capacity of a Thin Provisioning Volume: 128TB

Local Clone Parameters
Maximum Clone Task Quantity per Volume (Maximum Clone Pairs per Source Volume): 1
Maximum Clone Task Quantity per System: 64

Remote Replication Parameters
Maximum Replication Task Quantity per Volume (Maximum Replication Pairs per Source Volume): 1
Maximum Replication Task Quantity per System: 32
Maximum Traffic Shaping Quantity per System: 8
Maximum iSCSI Multi-path Quantity in Replication Task: 2
Maximum iSCSI Multiple Connection Quantity per Replication Task Path: 4
2.3. Configuration Example
The following is an example of a finished configuration worksheet, just for reference.
Table 2-3 Configuration Worksheet Example

1. Initial Configuration
System Name: XCubeSAN
Admin Password: 1234
NTP Server: pool.ntp.org
Time Zone: (GMT+08:00) Taipei

2. Management Port Setting
Management Port IP Address on Controller 1: IP: 192.168.1.234, SM: 255.255.255.0, GW: 192.168.1.254
DNS Server Address: 8.8.8.8
Management Port IP Address on Controller 2 (optional): IP: 192.168.1.235, SM: 255.255.255.0, GW: 192.168.1.254

3. Notification Setting
Email-from Address: [email protected]
Email-to Address 1: [email protected]
Email-to Address 2: [email protected]
Email-to Address 3: [email protected]
SMTP Server: smtp.company.com
Syslog Server (optional): syslog.company.com
SNMP Trap Address 1 (optional): snmp1.company.com
SNMP Trap Address 2 (optional): snmp2.company.com
SNMP Trap Address 3 (optional): snmp3.company.com

4. iSCSI Port Configuration
Onboard iSCSI Port IP Addresses (onboard 2 x 10GBASE-T iSCSI (RJ45) ports):
Controller 1, Onboard LAN1: IP 10.10.1.1, Subnet Mask 255.255.255.0, Gateway 10.10.1.254
Controller 1, Onboard LAN2: IP 10.10.2.1, Subnet Mask 255.255.255.0, Gateway 10.10.2.254
Controller 2, Onboard LAN1: IP 10.10.3.1, Subnet Mask 255.255.255.0, Gateway 10.10.3.254
Controller 2, Onboard LAN2: IP 10.10.4.1, Subnet Mask 255.255.255.0, Gateway 10.10.4.254
Slot 1 iSCSI Port IP Addresses (optional):
Controller 1, Slot 1 LAN1: IP 10.10.11.1, Subnet Mask 255.255.255.0, Gateway 10.10.11.254
Controller 2, Slot 1 LAN1: IP 10.10.21.1, Subnet Mask 255.255.255.0, Gateway 10.10.21.254
Slot 2 iSCSI Port IP Addresses (optional):
Controller 1, Slot 2 LAN1: IP 10.10.31.1, Subnet Mask 255.255.255.0, Gateway 10.10.31.254
Controller 2, Slot 2 LAN1: IP 10.10.41.1, Subnet Mask 255.255.255.0, Gateway 10.10.41.254
Entity Name: Iqn.2004-08.com.qsan
iSNS IP Address (optional): 10.1.1.1
CHAP Username (optional): chap1
CHAP Password (optional): 123456789012

5. Fibre Channel Port Configuration
Slot 1 Fibre Channel (optional), Controller 1, Slot 1 FC1: Link Speed Auto, Topology Point-to-Point
Slot 1 Fibre Channel (optional), Controller 2, Slot 1 FC1: Link Speed Auto, Topology Point-to-Point

6. Pool Configuration
Pool Type: Auto Tiering
Pool Name: PL1
Disks: SSD: 4x 100GB, SAS: 4x 600GB, NL-SAS: 4x 4TB
RAID Level: RAID 5
Raw Capacity: 18.8TB (= 100GB x 4 + 600GB x 4 + 4TB x 4)
Estimate Capacity: 14.1TB (= 100GB x 3 + 600GB x 3 + 4TB x 3)

7. Volume Configuration
Volume Name: V1-PL1
Capacity: 8TB
Volume Type: RAID Volume

8. LUN Mapping Configuration
Protocol: iSCSI
Volume Name: V1-PL1
Allowed Hosts: *
Target: 0
LUN: LUN 0
Permission: Read-write

9. SSD Cache Configuration
SSD Cache Pool Name: SCPL1
Cache Type: Read Cache
I/O Type: Database
SSDs: SSD: 2x 400GB
Raw Capacity: 800GB

10. Snapshot Configuration
Volume Name: V1-PL1
Snapshot Space: 1.6TB
Snapshot Name: Snap-V1-PL1
Schedule Snapshots (optional): Daily 00:00

11. Local Clone Configuration
Source Volume Name: V1-PL1
Source Volume Capacity: 8TB
Target Volume Name: T1-PL1
Target Volume Capacity: 8TB
Schedule Local Clones (optional): Daily 01:00

12. Remote Replication Configuration
Source Volume Name: V1-PL1
Source Volume Capacity: 8TB
Source iSCSI Port: Auto
Target iSCSI Port IP Addresses: Controller 1: 10.10.100.1, Controller 2 (optional): 10.10.101.1
Target CHAP Username (optional): chap2
Target CHAP Password (optional): 123456789012
Target Volume Name: RT1-PL1
Target Volume Capacity: 8TB
Schedule Remote Replications (optional): Daily 02:00
Traffic Shaping for Peak Hour (optional): 100MB
Traffic Shaping for Off-peak Hour (optional): 500MB
Off-peak Hour (optional): Mon. ~ Fri. PM10:00 ~ AM06:59; Sat. ~ Sun. AM00:00 ~ PM23:59
3. Getting Started
After completing the configuration planning, it’s time to power on the system, find your system and log into SANOS (SAN operating system). This chapter explains how to discover the SAN storage system and how to sign into SANOS.
3.1. Power on the Storage System
Before you power on the system, we assume that you have followed the hardware installation documents below to finish the hardware installation.
XCubeSAN QIG (Quick Installation Guide)
XCubeSAN Hardware Owner’s Manual
TIP:
Please double check that all the cables (including power cords, Ethernet, fibre channel, and SAS cables) are connected properly, especially that the network cable is connected to the management port. If everything is ready, you can now power on the system.
3.2. Discover the SAN Storage System
The default setting for the management IP address is DHCP. For users who are installing for the first time, we provide the QFinder Java utility to search for QSAN products on the network and provide quick access to the login page of the SANOS web interface.
3.2.1. QFinder Utility
The QFinder utility searches for QSAN products on the LAN. You can discover the management IP addresses of the storage systems via this utility. Please download the QFinder utility from the following website. https://qsan.com/QFinder
QFinder is a Java-based program and a highly portable utility. To execute this program, the JRE (Java Runtime Environment) is required. You can visit the following website to download and install the JRE. http://www.java.com/en/download/
After the JRE is installed, run the QFinder.jar program. The SAN storage system in your network will be detected and listed in the table.
Figure 3-1 QFinder Utility
Using the example in Figure 3-2, the default setting of the management port IP address is obtained from the DHCP server, e.g., 192.168.30.234. The default system name is the model name plus the last 6 digits of the serial number, e.g., XS5216-D40000. Double-clicking the selected entry will automatically bring up the browser and display the login page.
TIP:
The QFinder utility works in the following network environments:
The management port of the SAN storage system and the management computer are on the same subnet domain of the LAN.
The LAN works with or without a DHCP server.
If the LAN doesn't have a DHCP server, it can still work with zero-configuration networking. The management port will be assigned a fixed IP address: 169.254.1.234/16. So you can configure the IP address of your management computer to the same subnet domain as the storage system, e.g., 169.254.1.1/16. Then open a browser and enter http://169.254.1.234 to go to the login page (a short subnet-check sketch follows this tip). For more information about zero configuration, please refer to: https://en.wikipedia.org/wiki/Zero-configuration_networking
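A minimal check of the zero-configuration fallback described above, using Python's standard ipaddress module. The addresses are the ones quoted in the tip (169.254.1.234/16 for the management port and 169.254.1.1/16 as a suggested management computer address).

```python
import ipaddress

# Without a DHCP server the management port falls back to 169.254.1.234/16.
link_local_port = ipaddress.ip_interface("169.254.1.234/16")

# Configure the management computer in the same /16, e.g. 169.254.1.1/16,
# then browse to http://169.254.1.234 as described above.
admin_computer = ipaddress.ip_interface("169.254.1.1/16")

print(admin_computer.network == link_local_port.network)  # True: same subnet
```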
3.3. Initial Setup
The Initial configuration wizard will guide the first-time user to initialize and set up the system quickly. After discovering the storage system, please follow the steps to complete the initial configuration.
Figure 3-2 Login Page of web UI
1. To access the SANOS web interface, you have to enter a username and password. The initial defaults for administrator login are:
Username: admin
Password: 1234
TIP:
For existing users who are experienced, the Initial configuration wizard will not be shown again the next time you log in, unless the system is Reset to Defaults. You may skip this section and start the operations of the web UI in the next chapter.
You can execute the Reset to Defaults function in SYSTEM SETTINGS -> Maintenance.
Figure 3-3 Initial Configuration Step 1
2. Enter a System Name. For security reasons, it is highly recommended to change the system name. The maximum length of the system name is 32 characters. Valid characters are [ A~Z | a~z | 0~9 | -_ ].
3. Change the Admin Password. The maximum length of the password is 12 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/ ].
4. Set the local Date and Time. The date and time can be set manually or synchronized with an NTP (Network Time Protocol) server.
5. Select a Time Zone depending on your location.
6. Click the Next button to proceed.
Figure 3-4 Initial Configuration Step 2
7. Assign an IP address for the management port by DHCP, BOOTP, or Static IP Address.
INFORMATION:
DHCP: The Dynamic Host Configuration Protocol is a standardized network protocol used on IP (Internet Protocol) networks for dynamically distributing network configuration parameters, such as IP addresses for interfaces and services. With DHCP, computers request IP addresses and networking parameters automatically from a DHCP server, reducing the need for a network administrator or a user to configure these settings manually.
BOOTP: Similar to DHCP, the Bootstrap Protocol is also a computer networking protocol used in Internet Protocol networks to automatically assign an IP address to network devices from a configuration server. While some parts of BOOTP have been effectively superseded by DHCP, which adds the feature of leases, parts of BOOTP are used to provide service to the DHCP protocol. DHCP servers also provide legacy BOOTP functionality.
8. Assign a DNS Server Address. DNS (Domain Name System) provides a means to translate FQDN (Fully Qualified Domain Name) to IP address. Some notification services require DNS settings.
9. Click the Next button to proceed.
Figure 3-5 Initial Configuration Step 3
10. Verify all items, and then click the Finish button to complete the initial configuration.
Next time, you have to log in with the new IP address of the management port and the new admin password.
4. SANOS User Interface
This chapter illustrates the web user interface of SANOS and provides a brief introduction to the SANOS 4.0 desktop function menus.
4.1. Accessing the Management Web UI
To access the management web user interface, open a supported web browser and enter the management IP address or Hostname of the system. The login panel is displayed, as shown in Figure 4-1.
Figure 4-1 Login Page of web UI
To access the web user interface, you have to enter a username and password.
Username: admin
Password: <Your Password>
TIP:
Supported web browsers:
Google Chrome 45 or later.
Mozilla Firefox 45 or later.
Microsoft Internet Explorer 10 or later.
Apple Safari 8 or later.
4.2. SANOS 4.0 Desktop Panel
When the password has been verified, the home page is displayed. As shown in Figure 4-2, the SANOS 4.0 desktop panel has three main sections:
Function Menus and Submenus
Function Tabs
Main Working Area
Figure 4-2 SANOS 4.0 Desktop Panel
All function menus are listed at the left side of the window. They are grouped by function icon and an uppercase label, such as DASHBOARD, SYSTEM SETTINGS, etc. Below each are the second-level function submenus, such as General Settings, Management Port, etc. This provides an efficient and quick mechanism for navigation.
Clicking a second-level function submenu displays the related user interface in the main working area. Some function submenus have sub functions which are displayed as function tabs at the top of the main working area. All functions are listed in Table 4-1.
The top-right corner displays a Logout button.
TIP:
For security reasons, always log out when you finish working in SANOS.
4.3. Function Menus
The following table shows all function menus and their function tabs for reference.
Table 4-1 SANOS 4.0 Function Menus
Function Menus          Function Tabs
DASHBOARD
  Dashboard             Dashboard
  Hardware Monitoring   Hardware Monitoring
SYSTEM SETTINGS
  General Settings      General Settings
  Management Port       Management Port
  Power Settings        Boot Management | Cache to Flash | UPS
  Notifications         Email | Alert | SNMP
  Maintenance           System Information | Update | Firmware Synchronization |
                        System Identification | Reset to Defaults |
                        Configuration Backup | Volume Restoration |
                        Reboot and Shutdown
HOST CONFIGURATION
  Overview              Overview
  iSCSI Ports           iSCSI Ports | iSCSI Settings | iSCSI Targets |
                        CHAP Accounts | Active Sessions
  Fibre Channel Ports   Fibre Channel Ports
STORAGE MANAGEMENT
  Disks                 Disks
  Pools                 Pools | Auto Tiering
  Volumes               Volumes
  LUN Mappings          LUN Mappings
  SSD Cache             SSD Cache Pools | SSD Cache Statistics
DATA BACKUP
  Snapshots             Snapshots
  Remote Replications   Remote Replications
VIRTUALIZATION
  VMware                VMware
  Microsoft             Hyper-V
  Citrix                Citrix
MONITORING
  Log Center            Event Logs
  Enclosures            Hardware Monitoring | SES
  Performance           Disk | iSCSI | Fibre Channel
The following is a brief introduction to the desktop function menus.
4.3.1. Dashboard Menu
The DASHBOARD menu provides access to function menus of Dashboard and Hardware
Monitoring.
Dashboard
Select the Dashboard function submenu to view a summary of the overall system. For more information, please refer to the chapter 5.1, Dashboard section in the Dashboard chapter.
Hardware Monitoring
Select the Hardware Monitoring function submenu to view summary enclosure information. For more information, please refer to the chapter 5.2, Hardware Monitoring section in the Dashboard chapter.
4.3.2. System Settings Menu
The SYSTEM SETTINGS menu provides access to the function submenus of General Settings, Management Port, Power Settings, Notifications, and Maintenance.
General Settings
Select General Settings function menu to setup general system settings. For more
information, please refer to the chapter 6.1, General System Settings section in the System
Settings chapter.
Management Port
Select Management Port function menu to setup IP address and DNS settings of
management port. For more information, please refer to the chapter 6.2, Management Port
Settings section in the System Settings chapter.
Power Settings
Select Power Settings function menu to setup boot management which includes auto shutdown, wake on LAN, and wake on SAS …etc., view the status of cache to flash, and
setup UPS for power outage. For more information, please refer to the chapter 6.3, Power
Settings section in the System Settings chapter.
Notifications
Select Notifications function menu to setup notification settings of email, syslog, popup alert, LCM alert, and SNMP trap. For more information, please refer to the chapter 6.4,
Notification Settings section in the System Settings chapter.
Maintenance
Select Maintenance function menu to view system information, update system firmware and check firmware synchronization, system identification, reset to system defaults, import and export configuration files, restore volumes if an accidental delete occurs, reboot and
shutdown. For more information, please refer to the chapter 6.5, Maintenance section in the
System Settings chapter.
4.3.3. Host Configuration Menu
The HOST CONFIGURATION menu provides access to the function submenus of Overview, iSCSI Ports, and Fibre Channel Ports, which is visible only when fibre channel host cards are installed.
Overview
Select Overview function menu to view all host connectivity which includes status and
settings of all host cards. For more information, please refer to the chapter 7.2, Host
Connectivity Overview section in the Host Configuration chapter.
iSCSI Ports
Select iSCSI Ports function menu to setup iSCSI port settings, including iSCSI port IP address, link aggregation, VLAN ID, jumbo frame, iSNS server, iSCSI targets, CHAP accounts, and view iSCSI active sessions. For more information, please refer to the chapter 7.3,
Configure for iSCSI Connectivity section in the Host Configuration chapter.
Fibre Channel Ports
Select Fibre Channel function menu to setup fibre channel port settings, including link speed, topology, target configuration, and clear counters of fibre channel statistics. For
more information, please refer to the chapter 7.4, Configure for Fibre Channel Connectivity
section in the Host Configuration chapter.
4.3.4. Storage Management Menu
The STORAGE MANAGEMENT menu provides access to the function submenus of Disks, Pools, Volumes, LUN Mappings, and SSD Cache.
Disks
Select Disks function menu to view status of disk drives and S.M.A.R.T. (Self-Monitoring
Analysis and Reporting Technology) information. For more information, please refer to the
chapter 8.3, Working with Disk Drives section in the Storage Management chapter.
Pools
Select Pools function menu to configure storage pools. For more information, please refer
to the chapter 8.4, Configuring Storage Pools section in the Storage Management chapter.
Volumes
Select Volumes function menu to configure volumes and manage clone tasks. For more
information, please refer to the chapter 8.5, Configuring Volumes section in the Storage
Management chapter and the chapter 12.2, Managing Local Clones section in the Data
Backup chapter.
LUN Mappings
Select LUN Mappings function menu to configure LUN mappings. For more information,
please refer to the chapter 8.6, Configuring LUN Mappings section in the Storage
Management chapter.
SSD Cache
Select SSD Cache function menu to configure SSD cache pools. For more information,
please refer to the chapter 10.3, Configuring SSD Cache section in SSD Cache chapter.
4.3.5. Data Backup Menu
The DATA BACKUP menu provides access to function menus of Snapshots and
Replications.
Snapshots
Select Snapshots function menu to manage snapshots, including setting up snapshot space, taking snapshots, rollback to a snapshot, and scheduling snapshots. For more information,
please refer to the chapter 12.1, Managing Snapshots section in the Data Backup chapter.
Replications
Select the Replications function submenu to manage remote replications, including setting up replication tasks, scheduling replication tasks, and configuring shaping settings. For more
information, please refer to the chapter 12.3, Managing Remote Replications section in the
Data Backup chapter.
4.3.6. Virtualization Menu
The VIRTUALIZATION menu provides access to function menus of VMware, Microsoft, and
Citrix.
VMware
Provides information about supported VMware storage integration, such as VAAI (VMware vSphere Storage APIs for Array Integration).
Microsoft
Provides information about supported Microsoft Windows storage features, such as ODX (Offloaded Data Transfer).
Citrix
Provides information about the Citrix XenServer platform and its compatible versions.
4.3.7. Monitoring Menu
The MONITORING menu provides access to function menus of Log Center, Enclosure, and
Performance.
Log Center
Select Log Center function menu to view event logs, download event logs, or clear event logs. You can also mute the buzzer when the system raises an alert. For more information, please refer to the
chapter 13.1, Log Center section in the Monitoring chapter.
Enclosures
Select Enclosures function menu to view hardware monitor status and enable SES (SCSI
Enclosure Services). For more information, please refer to the chapter 13.2, Hardware
Monitor section in the Monitoring chapter.
Performance
Select Performance function menu to view system performance. For more information,
please refer to the chapter 13.3, Performance Monitor section in the Monitoring chapter.
4.4. Accessing the Management USB LCM
Optionally, we provide a portable USB LCM (LCD Control Module) for simple management.
To access the management USB LCM, plug it into the USB port of the right ear in the front panel.
Figure 4-3 Portable USB LCM
INFORMATION:
For the USB port in front panel, please refer to the chapter 2, System
Components Overview in the XCubeSAN Hardware Owner’s Manual .
After plugging the USB LCM into the system, the LCD screen shows the management port IP address and the system model name.
192.168.1.234
QSAN XS5216D ←
Figure 4-4 USB LCM Screen
To access the LCM options, use the ENT (Enter) button, the ESC (Escape) button, and the ▲ (up) and ▼ (down) buttons to scroll through the functions. Use the MUTE button to mute the buzzer when the system alarms. If event logs occur, they are displayed on the first line of the LCM.
TIP:
The event alert settings can be changed. Please refer to the chapter 6.4.2, Alert Settings section in the System Settings chapter.
This table describes the function of each item.
Table 4-2 USB LCM Function List
Function          Description
System Info.      Display system information including firmware version and memory size.
Reset/Shutdown    Reset or shutdown the system.
View IP Setting   Display current IP address, subnet mask, and gateway.
Change IP Config  Set IP address, subnet mask, and gateway. There are three options:
                  DHCP, BOOTP, or static IP address.
Enc. Management   Show the enclosure data of disk drive temperature, fan status, and
                  power supply status.
Reset to Default  Reset the system to default settings. The default settings are:
                  Reset Management Port IP address to DHCP, and then fixed IP
                  address: 169.254.1.234/16.
                  Reset admin's Password to 1234.
                  Reset System Name to model name plus the last 6 digits of serial
                  number. For example: XS5216-123456.
                  Reset IP addresses of all iSCSI Ports to 192.168.1.1, 192.168.2.1, etc.
                  Reset link speed of all Fibre Channel Ports to Automatic.
                  Clear all access control settings of the host connectivity.
INFORMATION:
DHCP: The Dynamic Host Configuration Protocol is a standardized network protocol used on IP (Internet Protocol) networks for dynamically distributing network configuration parameters, such as IP addresses for interfaces and services. With DHCP, computers request IP addresses and networking parameters automatically from a DHCP server, reducing the need for a network administrator or a user to configure these settings manually.
BOOTP: Similar to DHCP, the Bootstrap Protocol is also a computer networking protocol used in Internet Protocol networks to automatically assign an IP address to network devices from a configuration server.
While some parts of BOOTP have been effectively superseded by the
DHCP, which adds the feature of leases, parts of BOOTP are used to provide service to the DHCP protocol. DHCP servers also provide legacy
BOOTP functionality.
This table displays the LCM menu hierarchy for your reference when you operate USB LCM.
Table 4-3 USB LCM Menu Hierarchy
<IP Addr> / QSAN <Model>
  System Info.
    Firmware Version <n.n.n>
    RAM Size <nnnn> MB
  Reset / Shutdown
    Reset -> Yes / No
    Shutdown -> Yes / No
  View IP Setting
    IP Config <Static IP / DHCP / BOOTP>
    IP Address <192.168.001.234>
    IP Subnet Mask <255.255.255.0>
    IP Gateway <xxx.xxx.xxx.xxx>
  Change IP Config
    DHCP -> Yes / No
    BOOTP -> Yes / No
    Static IP
      IP Address -> Adjust IP address
      IP Subnet Mask -> Adjust Submask IP
      IP Gateway -> Adjust Gateway IP
      Apply IP Setting -> Yes / No
  Enc. Management
    Phy. Disk Temp. -> Local Slot <n>: <nn> (C)
    Cooling -> Local FAN<n>: <nnnnn> RPM
    Power Supply -> Local PSU<n>: <status>
  Reset to Default -> Yes / No
5. Dashboard
The DASHBOARD function menu provides submenus of Dashboard and Hardware
Monitoring. These pages help users quickly view basic information and check the system health.
5.1. Dashboard
Select the Dashboard function submenu to show a summary of the overall system. The main working area is divided into four blocks: system information, storage view, performance, and event logs.
Figure 5-1 Dashboard Function Submenu
Figure 5-2 Dashboard
System Information
The system information block displays the basic information of the system. In addition, clicking More… will pop up a window for more details.
Figure 5-3 System Information Block in the Dashboard
This table shows the column descriptions.
Table 5-1 System Information Block Descriptions
Column Name                 Description
System Availability Status  The status of system availability:
                            Dual Controller, Active/Active: Dual controllers and expansion
                            units are in normal stage.
                            Dual Controller, Degraded: In dual controller mode, one
                            controller or one of the expansion units fails or has been
                            plugged out. Please replace or insert a good controller.
                            Dual Controller, Lockdown: In dual controller mode, the
                            configurations of the two controllers are different, including
                            the CPU model, memory capacity, host cards, and controller
                            firmware version. Please check the hardware configurations of
                            the two controllers or execute firmware synchronization.
                            Single Controller: Single controller mode.
System Health               The status of system health:
                            Good: The system is good.
                            Fan Fault: A fan module has failed or is no longer connected.
                            PSU Fault: A power supply has failed or is no longer connected.
                            Temperature Fault: Temperature is abnormal.
                            Voltage Fault: Voltage values are out of range.
                            UPS Fault: UPS connection has failed.
Clicking More… will pop up a window of system information details.
Figure 5-4 System Information Popup Window
Click the Close button to exit the window.
Storage View
The storage view block displays the basic information of storage. In addition, clicking a pool name in the pool list will pop up a window for more details.
Figure 5-5 System View Block in the Dashboard
The top table displays the numbers of disks, pools, volumes, LUN mappings, snapshots, clones, and remote replications in the system. The table below it displays all created pools and their basic information.
Table 5-2 Storage View Block Descriptions
Column Name       Description
Name              Pool name.
Health            The health of the pool:
                  Good: The pool is good.
                  Failed: The pool has failed.
                  Degraded: The pool is not healthy and not complete. The reason could
                  be a lack of disk(s) or a failed disk.
Total             Total capacity of the pool.
Percentage Used   Mouse over the blue block to display the percentage used, and over
                  the gray block to display the percentage available.
Clicking any one of the pool names, such as Pool-1 in Figure 5-5, will pop up the detailed pool information.
Figure 5-6 Pool Information Popup Window
This table shows the column descriptions.
Table 5-3 Volume Column Descriptions
Column Name      Description
Name             Volume name.
Health           The health of the volume:
                 Optimal: The volume is working well and there is no failed disk in
                 the RG.
                 Failed: The pool disk of the VD has more failed disks than its RAID
                 level can recover from data loss.
                 Degraded: At least one disk from the RG of the volume has failed or
                 been plugged out.
                 Partially optimal: The volume has experienced recoverable read
                 errors. After passing the parity check, the health will become
                 Optimal.
Capacity         Total capacity of the volume.
Used             Used capacity of the volume.
Used %           Used percentage of the volume.
Snapshot space   Used snapshot space / Total snapshot space. The first capacity is
                 the currently used snapshot space, and the second capacity is the
                 reserved total snapshot space.
Snapshots        The quantity of snapshots of the volume.
Table 5-4 Disk Group Column Descriptions
Column Name   Description
No            The number of the disk group.
Status        The status of the disk group:
              Online: The disk group is online.
              Offline: The disk group is offline.
              Rebuilding: The disk group is being rebuilt.
              Migrating: The disk group is being migrated.
              Relocating: The disk group is being relocated.
Health        The health of the disk group:
              Good: The disk group is good.
              Failed: The disk group has failed.
              Degraded: The disk group is not healthy and not complete. The reason
              could be missing or failed disks.
Total         Total capacity of the disk group.
Free          Free capacity of the disk group.
Disks Used    The quantity of disk drives in the disk group.
Location      Disk locations, represented as Enclosure ID:Slot.
Table 5-5 Disk Column Descriptions
Column Name    Description
Enclosure ID   The enclosure ID.
Slot           The position of the disk drive.
Status         The status of the disk drive:
               Online: The disk drive is online.
               Missing: The disk drive is missing in the pool.
               Rebuilding: The disk drive is being rebuilt.
               Transitioning: The disk drive is being migrated or is replaced by
               another disk when rebuilding occurs.
               Scrubbing: The disk drive is being scrubbed.
               Check Done: The health check of the disk drive is done.
Health         The health of the disk drive:
               Good: The disk drive is good.
               Failed: The disk drive has failed.
               Error Alert: S.M.A.R.T. error alerts.
               Read Errors: The disk drive has unrecoverable read errors.
Capacity       The capacity of the disk drive.
Disk Type      The type of the disk drive:
               [ SAS HDD | NL-SAS HDD | SAS SSD | SATA SSD ]
               [ 12.0Gb/s | 6.0Gb/s | 3.0Gb/s | 1.5Gb/s ]
Manufacturer   The manufacturer of the disk drive.
Model          The model name of the disk drive.
Click the Close button to exit the window.
Performance
The performance block displays the total throughput of all front-end ports for reference.
Figure 5-7 Performance Block in the Dashboard
TX: The green line shows the transmitting throughput performance. The unit of measure is MBps (Megabytes per second).
RX: The amber line shows the receiving throughput performance. The unit of measure is MBps (Megabytes per second).
Event Logs
The event log block displays the 5 most recent event logs. In addition, clicking More… will link to the Event Logs function tab of the Log Center function submenu to view the complete event logs.
Figure 5-8 Event Logs in the Dashboard
TIP:
The displayed types of the event logs are the same as the event level filter in Event Logs. Please refer to the chapter 13.1, Log Center section in the Monitoring chapter.
5.2. Hardware Monitoring
Select the Hardware Monitoring function submenu to show the summary status of all enclosures, including the current voltage, temperature, fan module status, and power supply status.
In addition, clicking More… will pop up a window for more details.
Figure 5-9 Hardware Monitoring Function Submenu
Figure 5-10 Hardware Monitoring
There are many voltage and temperature sensors on the controller and backplane. If the status of every sensor is Good, this page displays Good. On the other hand, if any sensor is not good, it displays Failed, and you may click More… to see details of which component has a fault.
Clicking More… of every enclosure will pop up a window of enclosure status in detail.
Figure 5-11 Head Unit Popup Window
Click the Close button to exit the window.
6. System Settings
This chapter describes the details of system settings. The SYSTEM SETTINGS function menu provides submenus of General Settings, Management Port, Power Settings,
Notification, and Maintenance.
6.1. General System Settings
Select the General Settings function submenu to show the information of system name, date and time, and login related settings. The system name, administrator’s password, user’s password, date and time, and the login options can be changed as required.
Figure 6-1 General Function Submenu
TIP:
Mouse over the icon to get online help.
Figure 6-2 General Settings
Options of System Settings
The options available in this tab:
System Name: To change the system name, highlight the old name and type in a new one.
Maximum length of the system name is 32 characters. Valid characters are [ A~Z | a~z |
0~9 | -_ ].
Change Admin Password: Check it to change administrator password. The maximum length of the password is 12 alphanumeric characters. Valid characters are [ A~Z | a~z |
0~9 | ~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/ ].
Change User Password: Check it to change user password. The maximum length of the password is 12 alphanumeric characters. Valid characters are [ A~Z | a~z | 0~9 |
~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/ ].
Date and Time: Check the Change Date and Time option and change the current date, time, and time zone as required. Date and time can be set manually or synchronized from an NTP (Network Time Protocol) server (see the NTP query sketch after this list).
Auto Logout: When the auto logout option is enabled, you will be logged out of the admin interface after the time specified. There are Disabled (default), 5 minutes, 30 minutes, and 1 hour options.
Login Lock: When the login lock is enabled, the system allows only one user to login to the web UI at a time. There are Disabled (default) and Enabled options.
When finished, click the Apply button to take effect.
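As an illustration of what the NTP synchronization option does, the following Python sketch queries an NTP server and prints its time next to the local clock. The server name pool.ntp.org is only an example; substitute the NTP server you actually configure. This sketch is for checking reachability from a management computer and is not part of SANOS.

import socket, struct, time

NTP_SERVER = "pool.ntp.org"      # example server; substitute your own NTP server
NTP_TO_UNIX_EPOCH = 2208988800   # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

# A minimal SNTP client request: LI=0, VN=3, Mode=3 (client), rest zeroed.
request = b"\x1b" + 47 * b"\x00"
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    data, _ = sock.recvfrom(512)

# The transmit timestamp (seconds) starts at byte offset 40 of the reply.
server_time = struct.unpack("!I", data[40:44])[0] - NTP_TO_UNIX_EPOCH
print("NTP server time:", time.ctime(server_time))
print("Local time     :", time.ctime())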
6.2. Management Port Settings
Select the Management Port function submenu to show the information of the management ports. The MAC address is displayed for reference and is used by the wake-on-LAN feature. The IP address, DNS server, and service ports can be modified according to the management purpose.
Figure 6-3 Management Port Function Submenu
Figure 6-4 Management Port settings
Options of Management Port Settings
The options available in this tab:
Enable Dual Management Ports: This is for dual controller models. When the setting is enabled, both management ports of the controllers have their own IP addresses and
MAC addresses, and both are active. If the setting is disabled, only the management port of the master controller is active, the other one is on standby. Both controller management ports share the same IP address and MAC address. The management port fails over to the slave controller when the master controller goes offline, either planned or unplanned.
INFORMATION:
For deployment of management ports, please refer to the chapter 4
Deployment Types and Cabling in the XCubeSAN Hardware Owner’s
Manual .
MAC Address: Display the MAC address of the management port.
IP Address: The option can change IP address for remote administration usage. There are three options for DHCP, BOOTP, or Static IP Address.
INFORMATION:
DHCP: The Dynamic Host Configuration Protocol is a standardized network protocol used on IP (Internet Protocol) networks for dynamically distributing network configuration parameters, such as IP addresses for interfaces and services. With DHCP, computers request IP addresses and networking parameters automatically from a DHCP server, reducing the need for a network administrator or a user to configure these settings manually.
BOOTP: Similar to DHCP, the Bootstrap Protocol is also a computer networking protocol used in Internet Protocol networks to automatically assign an IP address to network devices from a configuration server.
While some parts of BOOTP have been effectively superseded by the
DHCP, which adds the feature of leases, parts of BOOTP are used to provide service to the DHCP protocol. DHCP servers also provide legacy
BOOTP functionality.
DNS Server Address: DNS (Domain Name System) provides a means to translate an FQDN (Fully Qualified Domain Name) to an IP address. Some notification services need the DNS setting. Enter the IP address of a DNS server here (see the DNS lookup sketch after this list).
Service Ports: If the default port numbers of HTTP Port, HTTPS Port, and SSH Port are not allowed on your network environment, they can be changed here.
When finished, click the Apply button to take effect.
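To illustrate what the DNS Server Address setting is used for, the short Python sketch below resolves an FQDN the way a notification service would. The host name is hypothetical, and the sketch assumes the management computer uses the same DNS server you configured on the storage system; it is only a quick sanity check, not part of SANOS.

import socket

FQDN = "smtp.example.com"   # hypothetical mail relay used by a notification service

try:
    print(FQDN, "resolves to", socket.gethostbyname(FQDN))
except socket.gaierror as err:
    print("DNS lookup failed:", err)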
6.3. Power Settings
The Power Settings function submenu provides Boot Management, Cache to Flash, and
UPS function tabs to show information and configuration setup.
Figure 6-5 Power Settings Function Submenu
6.3.1. Boot Management Settings
Select the Boot Management function tab in the Power Settings function submenu to enable or disable options about boot.
Figure 6-6 Boot Management Settings
Options of Boot Management Settings
The options available in this tab:
Enable Auto Shutdown: Check to enable auto shutdown. If this setting is enabled, the system will shut down automatically when the internal power levels or temperature are not within normal levels.
TIP:
For better protection and to avoid a single short period of abnormal voltage or temperature triggering an automatic shutdown, the system uses several sensors placed on key components and checks them every 30 seconds for the present voltages and temperatures. The trigger conditions are:
The value of a voltage or temperature sensor is lower than Low Critical or higher than High Critical.
When one of these sensors reports values beyond the threshold for three continuous minutes, the system will be shut down automatically.
For more information about critical values, please refer to the Hardware Monitor section in the Monitoring chapter.
Enable Wake-on-LAN: Check to enable wake-on-LAN; the system will then accept a magic packet on the management port to power on the system.
TIP:
To execute the wake-on-LAN function, the MAC address of the management port is needed. For the MAC address, please refer to the chapter 6.2, Management Port Settings section.
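For reference, a wake-on-LAN magic packet is simply 6 bytes of 0xFF followed by the target MAC address repeated 16 times, usually sent as a UDP broadcast. The following Python sketch builds and sends one; the MAC address and broadcast address are placeholders and must be replaced with the management port's MAC address and your LAN's broadcast address. This is an illustrative sketch, not a QSAN tool.

import socket

MAC_ADDRESS = "00:13:78:AA:BB:CC"   # placeholder: use the management port's MAC address
BROADCAST_IP = "192.168.1.255"      # placeholder: broadcast address of the management LAN

# Magic packet: 6 x 0xFF, then the MAC address repeated 16 times.
payload = b"\xff" * 6 + bytes.fromhex(MAC_ADDRESS.replace(":", "")) * 16

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(payload, (BROADCAST_IP, 9))   # UDP port 9 (discard) is commonly used
print("Magic packet sent for", MAC_ADDRESS)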
Enable Wake-on-SAS: Check to enable wake-on-SAS. If Wake-on-SAS is enabled and the XD5300 series expansion unit(s) are connected with the proprietary Wake-on-SAS expansion cable(s), the XD5300 series will power on or shut down in conjunction with the head unit.
INFORMATION:
For deployment of the head unit and expansion units, please refer to the chapter 4, Deployment Types and Cabling in the XCubeSAN Hardware
Owner’s Manual .
CAUTION:
The Wake-on-SAS feature requires genuine QSAN proprietary expansion cables connected between the head unit and expansion units. Please contact your local sales representative for this accessory.
When finished, click the Apply button to take effect.
6.3.2. Cache to Flash Status
Select the Cache to Flash function tab in the Power function submenu to show information of power module and flash module for cache to flash protection. Power module indicates
BBM (Battery Backup Module) or SCM (Super Capacitor Module).
INFORMATION:
For hardware information about power module and flash module, please refer to the chapter 2.7.2, Features of the Cache-to-Flash Module section in the XCubeSAN Hardware Owner’s Manual .
Figure 6-7 Cache to Flash
This table shows the column descriptions.
Table 6-1 Power Module Column Descriptions
Column Name Description
Status The status of power module:
Good : The power module is good.
Failed : The power module has failed.
Absent : The power module is absent.
Charging : The power module is charging.
Type The type of power module:
BBM (Battery Backup Module)
SCM (Super Capacitor Module)
Power Level   The power level of the module.
Temperature   The temperature of the power module.
Table 6-2 Flash Module Column Descriptions
Column Name Description
Status The status of flash module:
Good : The flash module is good.
Failed : The flash module has failed.
Absent : The flash module is absent.
Detecting : The flash module is detecting.
6.3.3. UPS Settings and Status
Select the UPS function tab in the Power function submenu to enable UPS (Uninterruptible
Power Supply) settings and watch the UPS status.
INFORMATION:
For deployment of UPS, please refer to the chapter 3, Installing the
System Hardware in the XCubeSAN Hardware Owner’s Manual .
Figure 6-8 UPS Status and Settings
This table shows the column descriptions.
Table 6-3 UPS Column Descriptions
Column Name Description
UPS Status The status of UPS:
On Line : The UPS is online.
On Battery : The UPS is on battery.
Low Battery : The voltage of the battery is low.
High Battery : The voltage of the battery is high.
Replace Battery : The battery needs to be replaced.
Charging: The battery is charging.
Discharging : The battery is discharging.
Bypass Mode : The power circuit bypasses the UPS battery, so no battery protection is available. This may happen when checking whether the UPS operates properly during a power loss, or when the UPS is offline for maintenance.
Offline : The UPS is offline and is not supplying power to the load.
Overloaded : The UPS is overloaded. More equipment is plugged into the UPS than it was designed to handle.
Forced Shutdown : The UPS was forced to shut down.
UPS Battery Level    The battery level of the UPS.
UPS Manufacturer     The manufacturer of the UPS.
UPS Model            The model of the UPS.
Options of UPS Settings
The options available in this tab:
Enable UPS Support: Check to enable UPS support. Network UPS via SNMP, serial UPS with COM port, and USB UPS are supported.
Communication Type: Select the communication type: network UPS via SNMP, serial UPS with COM port, or USB UPS.
Shutdown Battery Level: If the power runs short, the system will execute the shutdown process when the UPS battery falls to this level.
If the Communication Type is SNMP:
SNMP IP Address: Enter the IP address of the network UPS via SNMP.
SNMP Version: Select the supported SNMP version: v1, v2c, or v3. Enter the community if you select SNMP v1 or v2c. If you select SNMP v3, more options are needed for authentication. Enter a username, check to use authentication if necessary, select an authentication protocol and enter an authentication password, check to use privacy if necessary (the privacy protocol supports DES), and enter a privacy password.
If the Communication Type is Serial:
UPS Manufacturer: Select the UPS manufacturer.
UPS Model: Select the UPS model.
When finished, click the Apply button to take effect.
INFORMATION:
Authentication Protocol:
MD5: The MD5 algorithm is a widely used hash function producing a
128-bit hash value. It can still be used as a checksum to verify data integrity, but only against unintentional corruption.
SHA: SHA (Secure Hash Algorithm) is a cryptographic hash function which is a mathematical operation run on digital data; by comparing the computed "hash" (the output from execution of the algorithm) to a known and expected hash value, a person can determine the data's integrity.
Privacy Protocol:
DES: The DES (Data Encryption Standard) is a symmetric-key algorithm for the encryption of electronic data.
6.4. Notification Settings
The Notifications function submenu provides Email, Alert and SNMP function tabs to show information and configuration setup.
Figure 6-9 Notifications Function Submenu
6.4.1. Email Settings
Select the Email function tab in the Notifications function submenu to enter up to three email addresses for receiving the event notifications. Fill in the necessary fields and then click the Send Test Email button to test whether it is available. Some email servers check the email-from address and need the SMTP relay settings for authentication.
TIP:
Please make sure the IP address of the DNS server is set up in the Management Port tab so that the event notification emails can be sent successfully. Please refer to the Management Port Settings section for more details.
more details.
Figure 6-10 Email Settings
You can also select which levels of event logs you would like to receive. The default setting only includes Warning and Error event logs. When finished, click the Apply button to take effect.
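If the Send Test Email button fails, it can help to verify the SMTP relay path independently from a management computer. The following Python sketch sends a single test message; the host, port, and addresses are placeholders, and the TLS/login lines are commented out because whether your relay needs them is an assumption you must confirm. This sketch is illustrative only and is not part of SANOS.

import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"        # placeholder SMTP relay
SMTP_PORT = 25
MAIL_FROM = "san-alerts@example.com"  # placeholder email-from address
MAIL_TO = "admin@example.com"         # placeholder recipient

msg = EmailMessage()
msg["Subject"] = "SANOS notification test"
msg["From"] = MAIL_FROM
msg["To"] = MAIL_TO
msg.set_content("Test message: if this arrives, the SMTP relay path works.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as server:
    # server.starttls()                      # uncomment if your relay requires TLS
    # server.login("username", "password")   # uncomment if your relay requires authentication
    server.send_message(msg)
print("Test email handed off to", SMTP_HOST)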
6.4.2. Alert Settings
Select the Alert function tab in the Notifications function submenu to set up alerts via the Syslog protocol, pop-up alerts, and alerts on the front display (LCM). The device buzzer is also managed here.
Figure 6-11 Alert Settings
Options of Alert Settings
The options available in this tab:
Syslog Server Settings: Fill in the host address and the facility for the syslog service. The default UDP port is 514. You can also check the alert levels here. Most Linux/UNIX systems have a built-in syslog daemon; a quick way to verify the syslog path from a management computer is sketched after this list.
Admin Interface and LCM Alerts: You can check or uncheck the alert levels which you would like to have pop-up message in the Web UI and show on LCM.
Device Buzzer: Check it to enable the device buzzer. Uncheck it to disable device buzzer.
When finished, click the Apply button to take effect.
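The sketch referred to in the Syslog Server Settings item above sends one test message to a syslog server over UDP port 514 using Python's standard logging module, so you can confirm the server is reachable and logging before relying on it for alerts. The server address and facility are placeholders; this is an illustrative check run from a management computer, not part of SANOS.

import logging
import logging.handlers

SYSLOG_HOST = "192.168.1.50"   # placeholder: the syslog server configured above
SYSLOG_PORT = 514              # default UDP syslog port

logger = logging.getLogger("sanos-alert-test")
logger.setLevel(logging.WARNING)
logger.addHandler(logging.handlers.SysLogHandler(
    address=(SYSLOG_HOST, SYSLOG_PORT),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0))  # placeholder facility

# Emits one syslog datagram; check that it appears on the server.
logger.warning("Test alert from the management computer")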
TIP:
The device buzzer features are listed below:
The buzzer alarms for 1 second when the system boots up successfully.
The buzzer alarms continuously when an error occurs. The alarm stops after the error is resolved or the buzzer is muted.
The alarm is muted automatically when the error is resolved. For example, when a RAID 5 pool is degraded, the alarm rings immediately; the user then replaces one disk drive for rebuild. When the rebuild process is done, the alarm is muted automatically.
6.4.3. SNMP Settings
Select the SNMP function tab in the Notifications function submenu to set up SNMP (Simple Network Management Protocol) traps for alerting with event logs, and to set up SNMP server settings for client monitoring.
Figure 6-12 SNMP Settings
Options of SNMP Trap Settings
The options available in this tab:
Enable SNMP Trap: Check to enable SNMP traps to send system event logs to an SNMP trap agent. The default SNMP trap port is 162. Check or uncheck the alert levels you would like to receive, and then fill in up to three SNMP trap addresses for receiving the event notifications.
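A quick way to confirm that traps actually reach your trap receiver is to listen on UDP port 162 and watch for incoming datagrams. The Python sketch below does only that; it does not decode the SNMP/ASN.1 payload, so a real SNMP manager is still needed to interpret the trap. Binding to port 162 may require administrator privileges; this is an illustrative check, not part of SANOS.

import socket

TRAP_PORT = 162   # default SNMP trap port mentioned above

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.bind(("0.0.0.0", TRAP_PORT))
    print("Waiting for SNMP trap datagrams on UDP/%d ..." % TRAP_PORT)
    while True:
        data, (sender, _) = sock.recvfrom(4096)
        # No ASN.1 decoding here; this only proves a trap arrived and from where.
        print("Received %d bytes from %s" % (len(data), sender))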
Options of SNMP Server Settings
The options available in this tab:
SNMP Version: Select the supported SNMP version: v1/v2 or v3. If you select SNMP v3, more options are needed for authentication. Enter a username, select an authentication protocol and enter an authentication password, check to use privacy if necessary, select a privacy protocol, and enter a privacy password.
When finished, click the Apply button to take effect.
INFORMATION:
Authentication Protocol:
MD5: The MD5 algorithm is a widely used hash function producing a
128-bit hash value. It can still be used as a checksum to verify data integrity, but only against unintentional corruption.
SHA: SHA (Secure Hash Algorithm) is a cryptographic hash function which is a mathematical operation run on digital data; by comparing the computed "hash" (the output from execution of the algorithm) to a known and expected hash value, a person can determine the data's integrity.
Privacy Protocol:
DES: The DES (Data Encryption Standard) is a symmetric-key algorithm for the encryption of electronic data.
AES: The AES (Advanced Encryption Standard) is a specification for the encryption of electronic data.
It supersedes the DES.
Options of SNMP MIB Files
The options available in this tab:
Download SNMP MIB File: Click Download button to save the SNMP MIB file which can be imported to the SNMP client tool to get system information. You can view fan, voltage, and system status via SNMP MIB.
Download iSCSI MIB File: Click Download button to save the iSCSI MIB file which can be imported to the SNMP client tool to get network information. You can view iSCSI traffic via iSCSI MIB.
6.5. Maintenance
The Maintenance function submenu provides System Information, Update, Firmware Synchronization (this option is only visible when dual controllers are installed), System Identification, Reset to Defaults, Configuration Backup, Volume Restoration, and Reboot /
Shutdown function tabs. The Volume Restoration function will be described in the Volume
Restoration section in the Advanced Volume Administration chapter, the others are
described in the following sections.
Figure 6-13 Maintenance Function Submenu
6.5.1. System Information
Select the System Information function tab in the Maintenance function submenu to display all system information.
Figure 6-14 System Information
This table shows the descriptions.
Table 6-4 Status in System Information Column Descriptions
Column Name Description
System Availability Status    The status of system availability:
Dual Controller, Active/Active: Dual controllers and expansion units are in normal stage.
Dual Controller, Degraded: In dual controller mode, one controller or one of the expansion units fails or has been plugged out.
Please replace or insert a good controller.
Dual Controller, Lockdown: In dual controller mode, the configurations of two controllers are different, including the CPU model, memory capacity, host cards, and controller firmware version. Please check the hardware configurations of two controllers or execute firmware synchronization.
Single Controller: Single controller mode.
Options of System Information
The options available in this tab:
Download Service Package: Click the button to download system information for service.
CAUTION:
If you try to increase the system memory while running in dual controller mode, please make sure both controllers have the same DIMM in each corresponding memory slot. Failing to do so will result in controller malfunction, which will not be covered by warranty.
6.5.2. Firmware Update
Select the Update function tab in the Maintenance function submenu to update the controller firmware and expansion unit firmware, change the operation mode, and activate licenses.
TIP:
Before upgrading, we recommend that you export your system configuration first using the Configuration Backup function tab. Please refer
to the Configuration Backup section for more details.
Figure 6-15 Firmware Update
Options of Firmware Update
The options available in this tab:
Head Unit Firmware Update: Please prepare the new controller firmware file named “xxxx.bin” on a local hard drive, and click the Choose File button to select the firmware file. Then click the Apply button; a warning message will pop up. Click the OK button to start upgrading the firmware.
While upgrading, a progress bar is displayed. After the upgrade is finished, the system must be rebooted manually to make the new firmware take effect.
Expansion Unit Firmware Update: To upgrade expansion unit firmware, first select an expansion unit. The other steps are the same as for the head unit firmware update. After the upgrade is finished, the expansion unit must be rebooted manually to make the new firmware take effect.
Change the Operation Mode: The system can be set to operate in dual-controller or single-controller mode here. If the system has only one controller installed, switch this mode to Single Controller, and then click the Apply button. After changing the operation mode, the system must be rebooted manually to take effect.
Figure 6-16 Enable Licenses
SSD Cache License: This option enables SSD cache. Download the Request License file and send it to your local sales representative to obtain a License Key. After getting the license key, click the Choose File button to select it, and then click the Apply button to enable it. Each license key is unique and dedicated to a specific system. If the license has already been enabled, this option will be invisible. After enabling the license, the system must be rebooted manually to take effect.
Auto Tiering License: This option enables auto tiering. The operation is the same as
SSD cache.
TIP:
After enabling the licenses, the functions cannot be disabled and these license options will be hidden.
6.5.3. Firmware Synchronization
Select the Firmware Synchronization function tab in the Maintenance function submenu to be used on dual controller systems to synchronize the controller firmware versions when the firmware of the master controller and the slave controller are different. The firmware of slave controller is always changed to match the firmware of the master controller. It doesn’t matter if the firmware version of slave controller is newer or older than that of the master.
Normally, the firmware versions in both controllers are the same.
Figure 6-17 Both Firmware Versions are Synchronized
If the firmware versions of the two controllers are different, a warning message will be displayed. Click the Apply button to synchronize and force a reboot.
TIP:
This tab is only visible when the dual controllers are installed. A single controller system does not have this option.
6.5.4. System Identification
Select the System Identification function tab in the Maintenance function submenu to turn on or turn off the UID (Unique Identifier) LED control mechanism. The system UID LED helps users to easily identify the system location within the rack.
When the UID LEDs are turned off, click the OK button to turn on the UID LEDs, which are light blue in color and located on the right panel of the front view and on both controllers of the rear view.
Figure 6-18 Turn on the UID (Unique Identifier) LED
When the UID LEDs are steady on, click the OK button to turn off the UID LEDs.
INFORMATION:
For the front and rear view about the UID LEDs, please refer to chapter 2,
System Components Overview in the XCubeSAN Hardware Owner’s
Manual .
6.5.5. Reset to Factory Defaults
Select the Reset to Defaults function tab in the Maintenance function submenu to allow users to reset the system configurations back to the factory default settings and clean all configurations of the expansion enclosure ID.
Figure 6-19 Reset to Defaults
Options of Resets to Factory Defaults
The options available in this tab:
Reset to Defaults: Click the Reset button to proceed with the reset to defaults and force a reboot. The default settings are:
Reset Management Port IP address to DHCP, and then fixed IP address: 169.254.1.234/16.
Reset admin's Password to 1234.
Reset System Name to model name plus the last 6 digits of serial number. For example: XS5216-123456.
Reset IP addresses of all iSCSI Ports to 192.168.1.1, 192.168.2.1, … etc.
Reset link speed of all Fibre Channel Ports to Automatic.
Clear all access control settings of the host connectivity.
CAUTION:
Processing the Reset to Defaults function will force a reboot.
Clean Expansion Enclosure ID: Click the Clean button to clean all configurations of the expansion enclosure IDs. A clean will cause the system to shut down, and you then have to power it on manually.
INFORMATION:
The XCubeDAS XD5300 series features a seven-segment LED display for users to easily identify a specific XCubeDAS system. The enclosure ID is assigned by the head unit (XCubeSAN series) automatically. The seven-segment LED display supports up to ten XCubeDAS systems, and the numbering rule starts from 1 to A. For dual controller models, both controllers will display the same enclosure ID. After the XD5300 has been assigned an enclosure ID, the head unit will assign the same enclosure ID when the system reboots or shuts down. For hardware information about the enclosure ID, please refer to the chapter 2.7, Seven-segment LED Display section in the XCubeDAS Hardware Owner’s Manual.
CAUTION:
Processing the Clean Expansion Enclosure ID function will force a system shutdown to clean all configurations of the expansion enclosure IDs.
6.5.6. Configuration Backup
Select the Configuration Backup function tab in the Maintenance function submenu to be used to either save system configuration (export) or apply a saved configuration (import).
Figure 6-20 Configuration Backup
While the volume configuration settings are available for exporting, to prevent conflicts and overwriting existing data, they cannot be imported.
Options of Configuration Backup
The options available in this tab:
Import: Import all system configurations excluding volume configuration.
Export: Export all configurations to a file.
CAUTION:
The Import option will import all system configurations excluding volume configuration. The current system configurations will be replaced.
6.5.7. Reboot and Shutdown
Select the Reboot / Shutdown tab in the Maintenance function submenu to reboot or shutdown the system. Before powering off the system, it is highly recommended to execute the Shutdown function to flush all data from the memory cache into the disk drives. This step is important for data protection.
Figure 6-21 Reboot and Shutdown
The Reboot function has three options: reboot both controllers, controller 1 only, or controller 2 only.
Figure 6-22 Reboot Options
7. Host Configuration
This chapter describes host protocol concepts and the details of host configuration. The
HOST CONFIGURATION function menu provides submenus of Overview, iSCSI Ports, and
Fibre Channel Ports.
TIP:
The Fibre Channel Ports function menu will appear only when the system has a fibre channel host card installed.
7.1. Host Protocol Technology
This section describes the host protocol technology of iSCSI (Internet SCSI) and fibre channel.
7.1.1. iSCSI Technology
iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer System
Interface) commands and data in TCP/IP packets for linking storage devices with servers over common IP infrastructures. iSCSI provides high performance SANs over standard IP networks like LAN, WAN or the Internet.
IP SANs are true SANs (Storage Area Networks) which allow several servers to attach to an infinite number of storage volumes by using iSCSI over TCP/IP networks. IP SANs can scale the storage capacity with any type and brand of storage system. In addition, it can be used by any type of network (Ethernet, Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet) and combination of operating systems (Microsoft Windows, Linux, Solaris, Mac, etc.) within the SAN network. IP-SANs also include mechanisms for security, data replication, multi-path and high availability.
Storage protocols, such as iSCSI, have “two ends” in the connection. These ends are initiator and target. In iSCSI, we call them iSCSI initiator and iSCSI target. The iSCSI initiator requests or initiates any iSCSI communication. It requests all SCSI operations like read or write. An initiator is usually located on the host side (either an iSCSI HBA or iSCSI SW initiator).
The target is the storage device itself or an appliance which controls and serves volumes.
The target is the device which performs SCSI command or bridge to an attached storage device.
Each iSCSI node, that is, an initiator or target, has a unique IQN (iSCSI Qualified Name). The
IQN is formed according to the rules that were adopted for Internet nodes. The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An alias can be assigned to an initiator or a target.
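To make the IQN naming rule concrete, the Python sketch below checks a few names against the usual pattern: the literal prefix iqn., the year and month in which the naming authority registered its domain, the reversed domain name, and an optional colon-separated suffix. The example names are illustrative only and do not claim to be the exact IQNs used by this system.

import re

# iqn.<yyyy-mm>.<reversed domain>[:<optional unique suffix>]
IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

examples = [
    "iqn.2004-08.com.qsan:xs5216-d40000",   # hypothetical target name
    "iqn.1991-05.com.microsoft:host01",     # typical Windows software initiator style
    "not-an-iqn",
]
for name in examples:
    print(name, "->", "valid" if IQN_PATTERN.match(name) else "invalid")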
Table 7-1 iSCSI Parameters
Item                                                       Value
Maximum iSCSI Target Quantity per System                   256
Maximum Initiator Address Quantity per System              256
Maximum Host Quantity for Single Controller System         512
Maximum Host Quantity for Dual Controller System           1,024
Maximum iSCSI Session Quantity for Single Controller       1,024
Maximum iSCSI Session Quantity for Dual Controller         2,048
Maximum iSCSI Connection Quantity for Single Controller    4,096
Maximum iSCSI Connection Quantity for Dual Controller      8,192
Maximum CHAP Accounts per System                           64
7.1.2. Fibre Channel Technology
Fibre channel is an extremely fast system interface. It was initially developed for use primarily in the supercomputing field, but has become the standard connection type for SAN in enterprise storage.
The target is the storage device itself or an appliance which controls and serves volumes or virtual volumes. The target is the device which performs SCSI commands or bridges to an attached storage device.
Fibre Channel is the traditional method used for data center storage connectivity. The
XCubeSAN supports fibre channel connectivity at speeds of 4, 8, and 16 Gbps. Fibre
Channel Protocol is used to encapsulate SCSI commands over the fibre channel network.
Each device on the network has a unique 64 bit WWPN (World Wide Port Name).
Table 7-2 Fibre Channel Parameters
Item                                           Value
Maximum Host Quantity for Single Controller    256
Maximum Host Quantity for Dual Controller      512
7.2. Host Connectivity Overview
XCubeSAN provides different types of host connectivity according to the system configuration: a base system or a system with host cards installed. The base system has two 10GbE iSCSI ports onboard per controller. Host cards of the same type are installed in both controllers. Currently there are three types of host card, 1GbE iSCSI (RJ45), 10GbE iSCSI (SFP+), and 16Gb FC (SFP+), which can be selected according to the system infrastructure.
Select Overview function submenu to display all the host connectivity in system. The status and information of all host ports are listed.
Figure 7-1 Overview Function Submenu
The following is an example showing the onboard 10GbE LAN ports, a 1GbE iSCSI host card at slot 2, and a 16Gb FC host card at slot 1.
Figure 7-2 Host Port Overview
The columns display information of Location, Name, Status, and MAC address for iSCSI or
WWPN (World Wide Port Name) for fibre channel.
INFORMATION:
Fibre Channel / iSCSI Host Card at Slot 1:
HQ-16F4S2, 4 x 16Gb FC (SFP+) ports
HQ-10G4S2, 4 x 10GbE iSCSI (SFP+) ports
HQ-01G4T, 4 x 1GBASE-T iSCSI (RJ45) ports
iSCSI Host Card at Slot 2 (20Gb bandwidth):
HQ-10G4S2, 4 x 10GbE iSCSI (SFP+) ports
HQ-01G4T, 4 x 1GBASE-T iSCSI (RJ45) ports
INFORMATION:
For hardware information about host cards, please refer to the chapter
3.3, Installing the Optional Host Cards section in the XCubeSAN Hardware
Owner’s Manual .
CAUTION:
If you change the configuration of the host cards, including inserting or removing them, you have to execute the Reset to Defaults function in SYSTEM SETTINGS -> Maintenance -> Reset to Defaults. Before that, you can keep the system configurations by exporting and importing the configuration file in SYSTEM SETTINGS -> Maintenance -> Configuration Backup. Please refer to the chapter 6.5.5, Reset to Factory Defaults section and the chapter 6.5.6, Configuration Backup section for more details.
7.3. Configure iSCSI Connectivity
The iSCSI Ports function submenu provides iSCSI Ports, iSCSI Settings, iSCSI Targets, CHAP Accounts, and
Sessions function tabs to configure iSCSI ports.
Figure 7-3 iSCSI Ports Function Submenu
7.3.1. Configure iSCSI Ports
Select the iSCSI Ports function submenu to show information of iSCSI ports where they are located (onboard or host cards). The iSCSI port properties can be configured by clicking the functions button to the left side of the specific iSCSI port.
Figure 7-4 List iSCSI Ports
The columns display information of Location, Name, Status, LAG (Link Aggregation), VLAN
ID (Virtual LAN ID), IP address, Gateway IP address, Jumbo Frame status, and MAC address.
Set IP Address
Click ▼ -> Set IP Address to assign an iSCSI IP address to the iSCSI data port. There are two options: use DHCP to acquire an IP address automatically, or specify a Static IP Address to set the IP address manually.
INFORMATION:
DHCP: The Dynamic Host Configuration Protocol is a standardized network protocol used on IP (Internet Protocol) networks for dynamically distributing network configuration parameters, such as IP addresses for interfaces and services. With DHCP, computers request IP addresses and networking parameters automatically from a DHCP server, reducing the need for a network administrator or a user to configure these settings manually.
Figure 7-5 iSCSI IP Address Settings
Set Link Aggregation
Click ▼ -> Set Link Aggregation. The default mode of each iSCSI data port is connected without any bonding. Two bonding methods, Trunking and LACP (Link Aggregation Control
Protocol), can be selected. At least two iSCSI data ports must be checked for iSCSI link aggregation.
Figure 7-6 Set Link Aggregation
INFORMATION:
Trunking: Sometimes called “Port Trunking”, this configures multiple iSCSI ports to be grouped together as one in order to increase the connection speed beyond the limit of a single iSCSI port.
LACP: The Link Aggregation Control Protocol is part of IEEE 802.3ad that allows bonding several physical ports together to form a single logical channel. LACP allows a network switch to negotiate an automatic bundle by sending LACP packets to the peer. LACP can increase bandwidth usage and automatically perform failover when the link status fails on a port.
Set VLAN ID
Click ▼ -> Set VLAN ID. VLAN (Virtual LAN) is a logical grouping mechanism implemented on switch devices. VLANs are collections of switching ports that comprise a single broadcast domain. They allow network traffic to transfer more efficiently within these logical subgroups.
Please consult your network switch user manual for VLAN setting instructions. Most of the work is done at the switch. Please make sure that your VLAN ID of iSCSI port matches that of switch port. If your network environment supports VLAN, you can use this function to change the configurations. Fill in VLAN ID and Priority settings to enable VLAN.
Figure 7-7 Set VLAN ID
INFORMATION:
VLAN ID: The VLAN ID is a number ranging from 2 to 4094. Three numbers (0, 1, and 4095) are reserved for special purposes.
Priority: The PCP (Priority Code Point) is a number ranging from 0 to 7 and is reserved for QoS (Quality of Service). The definition is compliant with the IEEE 802.1p protocol, and 0 is the default value. In normal cases, you don't need to set this value.
Set Default Gateway
Click ▼ -> Set Default Gateway to set the gateway of the IP address as default gateway.
There can be only one default gateway.
Remove Default Gateway
To remove the default gateway, click ▼ -> Remove Default Gateway.
Set Jumbo Frames
Click ▼ -> Set Jumbo Frames to set the MTU (Maximum Transmission Unit) size. The jumbo frame size could be set as 4000 or 9000 bytes. Jumbo Frame is disabled by default.
Figure 7-8 Set Jumbo Frame
CAUTION:
If the VLAN ID or jumbo frames are set, the related switch and the HBA on the host must be set accordingly, too. Otherwise, the LAN connection will not work properly.
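After enabling jumbo frames on the iSCSI port, the switch, and the host HBA/NIC, it is worth confirming that a 9000-byte frame really crosses the path without fragmentation. The Python sketch below simply wraps the Linux iputils ping command with the don't-fragment flag and an 8972-byte payload (9000 minus the 20-byte IP and 8-byte ICMP headers); the target address is a placeholder, and the sketch assumes a Linux host with iputils ping installed.

import subprocess

TARGET = "192.168.1.1"   # placeholder: an iSCSI data port address

# "-M do" forbids fragmentation, so the ping fails if any hop cannot carry 9000-byte frames.
result = subprocess.run(
    ["ping", "-M", "do", "-s", "8972", "-c", "3", TARGET],
    capture_output=True, text=True)
print(result.stdout or result.stderr)
print("Jumbo frame path OK" if result.returncode == 0
      else "Path does not carry 9000-byte frames")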
Ping Host
Click ▼ -> Ping Host to verify the port connection from the target to the corresponding host data port. Input the host’s IP address and click the Start button; the system will display the ping result. Clicking the Stop button stops the ping activity.
Figure 7-9 Ping Host
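Ping Host checks ICMP reachability from the array toward the host. A complementary host-side sanity check, sketched below under the assumption that the host runs Python, is to confirm that the array's iSCSI data port answers on TCP port 3260, the standard iSCSI port. The IP address is illustrative.

```python
import socket

def iscsi_port_reachable(ip: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the iSCSI data port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print(iscsi_port_reachable("192.168.1.1"))  # address is illustrative
```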
Reset Port
Click ▼ -> Reset Port to reset the port. This is generally used to recover from a port malfunction.
7.3.2. Configure iSCSI Settings
Select the iSCSI Settings function tab in the iSCSI Ports function submenu to set up the entity name of the system and the iSNS (Internet Storage Name Service) server. The entity name is in IQN (iSCSI Qualified Name) format by default and can be modified for management purposes. The iSNS IP is used by the iSNS protocol for automated discovery, management, and configuration of iSCSI devices on a TCP/IP network. To use iSNS, an iSNS server must be added to the SAN. The iSNS server IP address must be added to the storage system for the iSCSI initiator service to send queries.
Figure 7-10 Entity Name and iSNS Settings
Options of iSCSI Settings
The options available in this tab:
Entity Name: To change the entity name, highlight the old name and type in a new one. The maximum length of the entity name is 200 characters. Valid characters are [ a~z | 0~9 | -.: ].
iSNS IP Address: This option changes the iSNS IP address used for Internet Storage Name Service.
When finished, click the Apply button to apply the changes.
INFORMATION:
iSNS: The iSNS protocol allows automated discovery, management, and configuration of iSCSI devices on a network.
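Purely as an illustration of the entity-name rules above (at most 200 characters, limited to lowercase letters, digits, '-', '.', and ':'), here is a minimal validation sketch; it is not a SANOS API, and the example IQN is illustrative.

```python
import re

# Check a candidate entity name against the constraints stated above.
ENTITY_NAME_RE = re.compile(r"^[a-z0-9.:-]{1,200}$")

def is_valid_entity_name(name: str) -> bool:
    return bool(ENTITY_NAME_RE.fullmatch(name))

print(is_valid_entity_name("iqn.2004-08.com.qsan:xs5226-example"))  # True
print(is_valid_entity_name("Invalid_Name!"))                        # False
```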
7.3.3. Configure iSCSI Targets
Select the iSCSI Targets function tab in the iSCSI Ports function submenu to show the iSCSI target information for iSCSI initiator. The iSCSI target properties can be configured by clicking the functions button to the left side of the specific iSCSI target.
Figure 7-11 iSCSI Targets
Change Authentication Mode
Click ▼ -> Authentication Method to enable the CHAP (Challenge Handshake Authentication Protocol) authentication method used in point-to-point for user login. CHAP is a type of authentication in which the authentication server sends the client a challenge; the client answers with a one-way hash computed from the challenge and the shared secret, so the secret itself is never transmitted in clear text.
Figure 7-12 Authentication Method
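CHAP accounts are configured entirely in the web UI. Purely as an illustration of the protocol described above (RFC 1994), the sketch below shows how an initiator computes a CHAP response from the target's challenge; the secret value is illustrative and this is not SANOS code.

```python
import hashlib
import os

# RFC 1994: response = MD5(identifier || shared secret || challenge),
# so the secret never crosses the wire.
def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)        # sent by the target
secret = b"example-secret12"      # 12-16 characters per the SANOS rules
print(chap_response(1, secret, challenge).hex())
```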
TIP:
A CHAP account must be added before you can use this authentication
method. Please refer to the chapter 7.3.4, Configure iSCSI CHAP
Accounts section to create an account.
After enabling the CHAP authentication mode, the host initiator should be set with the same CHAP account. Otherwise, the host cannot connect to the volume.
Change CHAP Users
Click ▼ -> Change CHAP Users, and then select the CHAP users that you would like to have access to this target. More than one CHAP user can be selected, but at least one CHAP user must be enabled on the target.
Figure 7-13 Change CHAP Users
Change Target Name
Click ▼ -> Change Target Name to change iSCSI target name if necessary. The maximum length of the target name is 223 characters. Valid characters are [ a~z | 0~9 | -.: ].
Figure 7-14 Change Target Name
Change Network Portal
Click ▼ -> Change Network Portal, and then select the network ports that you would like to be available for this iSCSI target.
Figure 7-15 Change Network Portal
Change Alias
Click ▼ -> Change Alias to add or change the alias name. To remove an alias, clear out the current name. The maximum length of the alias name is 223 characters. Valid characters are [ a~z | 0~9 | -.: ].
Figure 7-16 Change Alias
7.3.4. Configure iSCSI CHAP Accounts
Select the CHAP Accounts function tab in the iSCSI Ports function submenu to manage the
CHAP accounts on the system.
Create a CHAP user
Here is an example of creating a CHAP user.
1. Select the CHAP Account function tab of the iSCSI Ports function submenu, click the
Create CHAP User button.
Figure 7-17 Create a CHAP User
2. Enter a Username for the CHAP user. The maximum length of the username is 223 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#%^&*_-+=|(){}[]:;<>.?/ ].
3. Enter a Password (CHAP secret) and Confirm Password. The length of the password is between 12 and 16 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/ ]. (A sketch of generating a compliant secret follows these steps.)
4. Select the targets using this CHAP user.
5. Click the OK button to create a CHAP user.
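Purely as an illustration of the password rule in step 3, the following sketch generates a 12-to-16-character secret from a subset of the permitted characters; it is not a SANOS tool.

```python
import secrets
import string

# Characters drawn from a subset of the set permitted in step 3.
ALLOWED = string.ascii_letters + string.digits + "~!@#$%^&*_-+="

def make_chap_secret(length: int = 16) -> str:
    if not 12 <= length <= 16:
        raise ValueError("CHAP secret must be 12 to 16 characters")
    return "".join(secrets.choice(ALLOWED) for _ in range(length))

print(make_chap_secret())
```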
List CHAP Users
Here is an example of listing CHAP users. The CHAP account properties can be configured by clicking the functions button to the left side of the CHAP account.
Figure 7-18 List CHAP Users
Modify CHAP User
Click ▼ -> Modify CHAP User to modify the selected CHAP user information. To change the targets that this user has access to, please go to the iSCSI Targets tab, click on the option menu, and select Change CHAP Users. For more information, please refer to the chapter 7.3.3, Configure iSCSI Targets section.
Figure 7-19 Modify a CHAP User
Here is an example of modifying the CHAP user.
1. Enter a Username for the CHAP user. The maximum length of the username is 223 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#%^&*_-+=|(){}[]:;<>.?/ ].
2. Enter a Password (CHAP secret) and Confirm Password. The length of the password is between 12 and 16 characters. Valid characters are [ A~Z | a~z | 0~9 | ~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/ ].
3. Click the OK button to modify the CHAP user.
Delete CHAP User
Click ▼ -> Delete CHAP User to delete the selected CHAP user.
7.3.5. Active Sessions
Select the Sessions function tab in the iSCSI Ports function submenu to show all currently active iSCSI sessions and their connection information.
Figure 7-20 Active Sessions
This table shows the column descriptions.
Table 7-3 Active Sessions Column Descriptions
TSIH: TSIH (Target Session Identifying Handle) is used for this active session.
Initiator Name: It displays the host computer name.
Target Name: It displays the controller name.
InitialR2T: InitialR2T (Initial Ready to Transfer) is used to turn off either the use of a unidirectional R2T command or the output part of a bidirectional command. The default value is Yes.
Immed. data: Immed. data (Immediate Data) sets the support for immediate data between the initiator and the target. Both must be set to the same setting. The default value is Yes.
MaxDataOutR2T: MaxDataOutR2T (Maximum Data Outstanding Ready to Transfer) determines the maximum number of outstanding ready to transfer per task. The default value is 1.
MaxDataBurstLen: MaxDataBurstLen (Maximum Data Burst Length) determines the maximum SCSI data payload. The default value is 256 KB.
DataSeqInOrder: DataSeqInOrder (Data Sequence in Order) determines if the PDUs (Protocol Data Units) are transferred in continuously non-decreasing sequence offsets. The default value is Yes.
DataPDUInOrder: DataPDUInOrder (Data PDU in Order) determines if the data PDUs within sequences are to be in order and overlays forbidden. The default value is Yes.
List Connection Details
Click ▼ -> Connection Details which will list all connection(s) of the selected session.
Disconnect Session
Click ▼ -> Disconnect to disconnect the selected session, and then click the OK button to confirm.
7.4. Configure Fibre Channel Connectivity
Select the Fibre Channel Ports function submenu to show information of installed fibre channel ports. The fibre channel port properties can be configured by clicking the functions button to the left side of the specific fibre channel port.
Figure 7-21 Fibre Channel Ports Function Submenu
Figure 7-22 List Fibre Channel Ports
The columns display information of Location, Name, Status, Topology, WWNN (World Wide
Node Name), WWPN (World Wide Port Name), and some statistical information.
7.4.1. Configure Fibre Channel Link Speed
Click ▼ -> Change Link Speed to change the link speed of fibre channel.
Figure 7-23 Change Link Speed
Options of Change Link Speed
The option available in this tab:
Link Speed: Set the link speed of the fibre channel port. The options are Automatic (default), 4 Gb/s, 8 Gb/s, and 16 Gb/s. It is recommended to set it to Automatic to detect the data rate automatically.
7.4.2. Configure Fibre Channel Topology
Click ▼ -> Change Topology to change the topology of fibre channel.
Figure 7-24 Change Topology
Options of Change Topology
The option available in this tab:
Topology: Set the fibre channel topology. The only option is Point-to-Point for 16 Gb/s fibre channel; both Point-to-Point and Loop modes are available for 4 Gb/s and 8 Gb/s fibre channel. Set it appropriately according to your fibre channel environment.
INFORMATION:
Point-to-Point (FC-P2P): Two devices are connected directly by FC interface. This is the simplest topology with limited connectivity and supports 4 Gb/s, 8 Gb/s, and 16 Gb/s fibre channel speed.
Loop (FC-AL, Arbitrated Loop): All devices are connected in a loop or ring, similar to token ring networking. Adding or removing any device affects activities on the loop, and the failure of any device breaks the ring. A Fibre Channel hub connects multiple devices together and may bypass failed ports. A loop may also be made by cabling each port to the next in a ring. Loop mode supports 4 Gb/s and 8 Gb/s fibre channel speeds only.
CAUTION:
If the link speed and topology are set, the related fibre channel switch and the HBA on the host must be configured accordingly. Otherwise, the connection cannot work properly.
7.4.3. Configure Fibre Channel Targets
Click ▼ -> Target Configuration to set multi-target configurations which are accessible by the host.
Figure 7-25 Target Configuration
CAUTION:
Point-to-Point connection mode does not support multi-target.
7.4.4. Clear Fibre Channel Counters
Click the Clear All Counters button to clear all fibre channel counters.
Click ▼ -> Clear Counters to clear the counters of the selected fibre channel.
8. Storage Management
This chapter describes the storage technologies of RAID, storage pools, volumes, and LUN mapping, and includes the details of storage management operations with examples. The STORAGE MANAGEMENT function menu provides the submenus of Disks, Pools, Volumes, and LUN Mappings. In this chapter, we will focus on the fundamental storage architecture called thick provisioning. The advanced storage features and functions are described in the following chapters:
Thin Provisioning feature is described in the Thin Provisioning chapter.
Auto Tiering feature is described in the Auto Tiering chapter.
SSD Cache feature is described in the SSD Cache chapter.
The data backup functions are described in the following sections.
Snapshot function is described in the Managing Snapshots section.
Local Clone function is described in the Manage Local Clones section.
Remote Replication function is described in the Managing Remote Replications section.
8.1. RAID Technology
RAID is the abbreviation of Redundant Array of Independent Disks. The basic idea of RAID is to combine multiple drives together to form one large logical drive. This RAID drive obtains better performance, capacity, and reliability than a single drive. From the host's point of view, the operating system detects the RAID drive as a single storage device.
The disk drives in storage are referred to as members of the array. Each array has a RAID level. RAID levels provide different degrees of redundancy and performance. They also have different restrictions regarding the quantity of member disk drives in the array. The following describes the features of the RAID levels.
RAID 0 (Striping, no redundancy)
RAID 0 consists of striping, without mirroring or parity. It combines two or more disks in parallel into one large-capacity disk. The capacity of a RAID 0 volume is the sum of the capacities of the disks. There is no added redundancy for handling disk failures; thus, failure of one disk causes the loss of the entire RAID 0 volume. Striping distributes the contents roughly equally among all member disks, which allows concurrent read or write operations on the multiple disks and results in performance improvements. The concurrent operations make
the throughput of most read and write operations equal to the throughput of one disk multiplied by the quantity of disk drives. Increased throughput is the big benefit of RAID 0, at the cost of increased vulnerability of data due to drive failures.
RAID 1 (Mirroring between two disks)
RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two drives, thereby producing a mirrored set of drives. Thus, any read request can be serviced by any member drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning.
N-way Mirror (Mirroring between N disks)
It’s an extension of RAID 1. Data is written identically to N drives, thereby producing an N-way mirrored set of drives.
RAID 3 (Striping, can survive one disk drive fault, with parity on a dedicated disk drive)
RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive. Because the data is dispersed across all member drives, even a small read may require all drives to work together, so RAID 3 is more suitable for workloads with large sequential read requests.
RAID 5 (Striping, can survive one disk drive fault, with interspersed parity over the member disk drives)
RAID 5 consists of block-level striping with distributed parity. It requires at least three disk drives. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 is seriously affected by the general trends regarding array rebuild time and the chance of disk drive failure during rebuild. Rebuilding an array requires reading all data from all disks, opening a chance for a second disk drive failure and the loss of the entire array.
RAID 6 (Striping, can survive two disk drive faults, with interspersed parity over the member disk drives)
RAID 6 consists of block-level striping with double distributed parity. It requires a minimum of four disks. Double parity provides fault tolerance up to two failed disk drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5.
RAID 0+1 (RAID 1 on top of RAID 0)
RAID 0+1 creates a second striped set to mirror a primary striped set. The array continues to operate with one or more drives failed in the same mirror set, but if drives fail on both sides of the mirror, the data on the RAID system is lost.
RAID 10 (RAID 0 on top of RAID 1)
RAID 10 creates a striped set from a series of mirrored disk drives. The array can sustain multiple drive losses so long as no mirror loses all its drives.
RAID 30 (RAID 3 on top of RAID 0)
RAID 30 combines RAID 3 and RAID 0: RAID 3 sub-arrays are built first and then striped together with RAID 0. Because each RAID 3 sub-array requires at least three disk drives, RAID 30 is composed of two or more RAID 3 sub-arrays and requires at least six disk drives. RAID 30 can still operate when one disk drive in a RAID 3 sub-array is damaged, but if any single RAID 3 sub-array loses two or more disk drives, the entire RAID 30 fails.
RAID 50 (RAID 5 on top of RAID 0)
RAID 50 combines RAID 5 and RAID 0: RAID 5 sub-arrays are built first and then striped together with RAID 0. The concept is the same as RAID 30, and RAID 50 requires at least six disk drives. Since RAID 50 stripes across multiple RAID 5 sub-arrays, it has higher performance than RAID 5, but its capacity utilization is lower.
RAID 60 (RAID 6 on top of RAID 0)
RAID 60 combines RAID 6 and RAID 0: RAID 6 sub-arrays are built first and then striped together with RAID 0. In other words, it stripes across two or more RAID 6 sub-arrays. Because RAID 6 requires at least four disk drives, the minimum requirement for RAID 60 is eight disk drives.
RAID 60 can tolerate up to two damaged disk drives in each RAID 6 sub-array and keep operating, but if any single RAID 6 sub-array loses three disk drives, the entire RAID 60 fails. Of course, the probability of this case is quite low.
Compared to a simple RAID 6, RAID 60 stripes across multiple RAID 6 sub-arrays and therefore has higher performance. However, its higher minimum drive count and lower capacity utilization are its main drawbacks.
The following is the summary of the RAID levels.
Table 8-1 RAID Level Summary - 1
(Capacity utilization ranges are calculated from the minimum drive count up to 26 drives. Examples assume drive quantity N and drive capacity M.)
RAID 0: minimum 1 drive; no data protection; read performance very good; write performance excellent; capacity N x M (e.g., 8 drives x 1TB = 8TB); capacity utilization 100% (e.g., 8/8 = 100%); typical applications: high-end workstation, data logging, real-time rendering, very transitory data.
RAID 1: minimum 2 drives; protects against one drive failure; read performance very good; write performance good; capacity N/2 x M (e.g., 8 drives / 2 x 1TB = 4TB); capacity utilization 50% (e.g., 4/8 = 50%); typical applications: operating system, transaction database.
N-way Mirror: minimum 3 drives; protects against N-1 drive failures; read performance very good; write performance fair; capacity (N/N) x M (e.g., 8 drives / 8 x 1TB = 1TB); capacity utilization 4%~33% (e.g., 1/8 = 13%); typical applications: operating system, transaction database.
RAID 3: minimum 3 drives; protects against one drive failure; read performance very good; write performance good; capacity (N-1) x M (e.g., (8 drives - 1) x 1TB = 7TB); capacity utilization 67%~96% (e.g., 7/8 = 88%); typical applications: data warehousing, web serving, archiving.
RAID 5: minimum 3 drives; protects against one drive failure; read performance very good; write performance good; capacity (N-1) x M (e.g., (8 drives - 1) x 1TB = 7TB); capacity utilization 67%~96% (e.g., 7/8 = 88%); typical applications: data warehousing, web serving, archiving.
RAID 6: minimum 4 drives; protects against two drive failures; read performance very good; write performance fair to good; capacity (N-2) x M (e.g., (8 drives - 2) x 1TB = 6TB); capacity utilization 50%~92% (e.g., 6/8 = 75%); typical applications: data archive, backup to disk, high availability solution, server with large capacity requirement.
Table 8-2 RAID Level Summary - 2
(Capacity utilization ranges are calculated from the minimum drive count up to 26 drives. Examples assume drive quantity N and drive capacity M.)
RAID 0+1: minimum 4 drives; protects against one drive failure in each sub-array; read performance excellent; write performance very good; capacity N/2 x M (e.g., 8 drives / 2 x 1TB = 4TB); capacity utilization 50% (e.g., 4/8 = 50%); typical applications: fast database, application server.
RAID 10: minimum 4 drives; protects against one drive failure in each sub-array; read performance excellent; write performance very good; capacity N/2 x M (e.g., 8 drives / 2 x 1TB = 4TB); capacity utilization 50% (e.g., 4/8 = 50%); typical applications: fast database, application server.
RAID 30: minimum 6 drives; protects against one drive failure in each sub-array; read performance very good; write performance good; capacity (N-2) x M (e.g., (8 drives - 2) x 1TB = 6TB); capacity utilization 67%~92% (e.g., 6/8 = 75%); typical applications: large database, file server, application server.
RAID 50: minimum 6 drives; protects against one drive failure in each sub-array; read performance very good; write performance good; capacity (N-2) x M (e.g., (8 drives - 2) x 1TB = 6TB); capacity utilization 67%~92% (e.g., 6/8 = 75%); typical applications: large database, file server, application server.
RAID 60: minimum 8 drives; protects against two drive failures in each sub-array; read performance very good; write performance fair to good; capacity (N-4) x M (e.g., (8 drives - 4) x 1TB = 4TB); capacity utilization 50%~85% (e.g., 4/8 = 50%); typical applications: data archive, backup to disk, high availability solution, server with large capacity requirement.
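As a quick cross-check of the formulas in the two tables above, the sketch below computes usable capacity and utilization for 8 drives of 1TB each. The sub-array counts assumed for RAID 30/50/60 are noted in the comments; this is illustrative only, and the array's own reporting remains authoritative.

```python
# Usable-capacity formulas from Tables 8-1 and 8-2, for N drives of M TB.
def usable_capacity(raid_level: str, n: int, m: float) -> float:
    formulas = {
        "RAID 0": lambda: n * m,
        "RAID 1": lambda: n / 2 * m,
        "N-way": lambda: m,                # N copies of the same data
        "RAID 3": lambda: (n - 1) * m,
        "RAID 5": lambda: (n - 1) * m,
        "RAID 6": lambda: (n - 2) * m,
        "RAID 0+1": lambda: n / 2 * m,
        "RAID 10": lambda: n / 2 * m,
        "RAID 30": lambda: (n - 2) * m,    # assumes two RAID 3 sub-arrays
        "RAID 50": lambda: (n - 2) * m,    # assumes two RAID 5 sub-arrays
        "RAID 60": lambda: (n - 4) * m,    # assumes two RAID 6 sub-arrays
    }
    return formulas[raid_level]()

for level in ("RAID 0", "RAID 5", "RAID 6", "RAID 10", "RAID 60"):
    cap = usable_capacity(level, n=8, m=1.0)
    print(f"{level}: {cap:.0f} TB usable, {cap / 8:.0%} utilization")
```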
SANOS supports hot spare drives. When a member disk drive in an array fails, the system automatically replaces the failed member with a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives can be manually exchanged with array members. For more information, please refer to the chapter 14.2, Rebuild section in the Troubleshooting chapter.
8.2. SANOS Storage Architecture
This section describes the SANOS storage architecture including pool, volume, LUN mapping, and hot spares.
8.2.1. Pool Technology
A storage pool is a collection of disk drives. There are three pool types, which are listed in the following.
Thick or fat provisioning pool (we use the term thick provisioning pool in the following)
Thin provisioning pool
Auto tiering pool
We describe thick provisioning pools here, and introduce thin provisioning in the Thin Provisioning chapter and auto tiering in the Auto Tiering chapter.
A storage pool is grouped to provide capacity for volumes. Volumes are then allocated out of the storage pool and are mapped to LUNs which can be accessed by a host system. The following is the storage architecture of a thick provisioning pool.
Figure 8-1 Storage Architecture of Thick Provisioning Pool
Disk drives can be added to a thick provisioning pool at any time to increase the capacity of the pool, whether the disk drives come from the head SAN system or from expansion enclosures. Disk drives can also be added to upgrade the RAID level of a thick provisioning pool. These operations are called migration. For example, a pool can migrate from RAID 5 to RAID 6 by adding disk drives, because RAID 6 needs one more disk drive for parity.
A thick provisioning pool contains up to 64 disk drives. For more information about pool
operation, please refer to the chapter 8.4, Configuring Storage Pools section.
Table 8-3 Thick Provisioning Pool Parameters
Maximum Disk Drive Quantity in a Thick Provisioning Pool (Include Dedicated Spares): 64
Maximum Pool Quantity per System: 64
Maximum Dedicated Spare Quantity in a Pool: 8
8.2.2. Volume Technology
A volume is a logical disk that is presented to a host system. Its capacity is provided by the pool. Each pool can be divided into several volumes. The volumes in one pool share the same RAID level, but may have different capacities.
Figure 8-2 Volume in Storage Architecture
A volume is the basic unit of data backup. Based on volumes, SANOS provides snapshot, local clone, and remote replication functions. For more information about data backup, please
refer to the chapter 12, Data Backup chapter.
A pool contains up to 96 volumes and a system can contain up to 4,096 volumes including snapshot volumes. For more information about volume operation, please refer to the
chapter 8.5, Configuring Volumes section.
Table 8-4 Volume Parameters
Maximum Volume Quantity in a Pool: 96
Maximum Volume Quantity per System (include Snapshot Volumes): 4,096
Maximum Host Number per Volume: 16
Maximum Thin Volume Capacity: 256TB
8.2.3. LUN Technology
LUN (Logical Unit Number) is a number used to identify a logical unit, which is a device addressed by the SCSI protocol or Storage Area Network protocols which encapsulate SCSI, such as Fibre Channel or iSCSI.
The LUN is not the only way to identify a logical unit. There is also the SCSI device ID, which identifies a logical unit uniquely in the world. Labels or serial numbers stored in a logical unit's storage volume often serve to identify the logical unit. However, the LUN is the only way for an initiator to address a command to a particular logical unit, so initiators often create, via a discovery process, a mapping table of LUN to other identifiers.
There is one LUN which is required to exist in every target: LUN 0. The logical unit with LUN
0 is special in that it must implement a few specific commands, most notably report LUNs, which is how an initiator can find out all the other LUNs in the target. LUN 0 does not provide any other services, such as a storage volume. Many SCSI targets contain only one logical unit (so its LUN is necessarily 0). Others have a small number of logical units that correspond to separate physical devices and have fixed LUNs. A large storage system may have up to thousands of logical units, defined logically, by administrative command, and the administrator may choose the LUN or the system may choose it.
Figure 8-3 LUN Mappings in Storage Architecture
Volumes are allocated out of the storage pool and are mapped to LUNs which can be accessed by a host system. In Figure 8-3, a LUN can be accessed by a cluster of two or more hosts via iSCSI, or it can be accessed through MPIO to provide an HA (High Availability) architecture and increase the access bandwidth. Snapshot volumes can also be mapped to LUNs: a read-only snapshot LUN can be read by the host, and a writable snapshot LUN can be read and written.
A SANOS system can contain up to 4,096 LUNs which includes the total of volume LUN mappings and snapshot volume LUN mappings.
Table 8-5 LUN Parameters
Maximum Quantity of LUN: 4,096
For more information about LUN mapping operation, please refer to the chapter 8.6,
Configure LUN Mappings section.
8.2.4. Hot Spares
A hot spare disk is a disk used to automatically or manually replace a failing or failed disk in a RAID configuration. The hot spare disk reduces the MTTR (Mean Time To Recovery) for the RAID redundancy group, thus reducing the probability of a second disk failure and the resultant data loss that would occur in any singly redundant RAID (e.g., RAID 1, RAID 5, or
RAID 10). Typically, a hot spare is available to replace several different disks, and systems employing a hot spare normally require a redundant group to allow time for the data to be rebuilt onto the spare disk. During this time the system is exposed to data loss due to a subsequent failure, so the automatic switching to a spare disk reduces the time of exposure to that risk compared to manual discovery and replacement.
If one disk drive of the pool fails or has been removed from any singly redundant RAID, the pool status will change to degraded mode. At that moment, the SANOS system will search for a spare disk to execute a pool/volume/data rebuild into a healthy RAID drive.
There are three types of spare disk drive which can be set in function menu Disks:
Dedicated Spare: Set a spare disk drive to a dedicated pool.
Local Spare: Set a spare disk drive for the pools located in the same enclosure. The enclosure is either the head unit or one of the expansion units.
Global Spare: Set a spare disk drive for all pools, whether located in the head unit or in an expansion unit.
Figure 8-4 Hot Spares
When a member disk of the pool fails, the system first searches for a dedicated spare disk for the pool; if none is present, it searches for a local spare instead, and finally for a global spare disk.
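The selection order can be pictured with the small sketch below; the data layout and helper function are illustrative only and not part of SANOS.

```python
# Conceptual sketch of the spare-selection order described above:
# dedicated spare first, then local spare in the same enclosure, then global.
def pick_spare(failed_pool, failed_enclosure, spares):
    for spare in spares:
        if spare["type"] == "dedicated" and spare["pool"] == failed_pool:
            return spare
    for spare in spares:
        if spare["type"] == "local" and spare["enclosure"] == failed_enclosure:
            return spare
    for spare in spares:
        if spare["type"] == "global":
            return spare
    return None

spares = [
    {"type": "global", "pool": None, "enclosure": 0},
    {"type": "local", "pool": None, "enclosure": 1},
    {"type": "dedicated", "pool": "Pool-1", "enclosure": 0},
]
print(pick_spare("Pool-1", 1, spares))  # the dedicated spare wins
```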
For more information about spare disk operation, please refer to the chapter 8.3.2, Operations on Disks section.
8.3. Working with Disk Drives
Select the Disks function submenu to display the status of the disk drives, set hot spares, check disk health, and update disk firmware. The entire SAN storage system can contain up to 10 expansion enclosures and 446 disk drives.
Figure 8-5 Disks Function Submenu
INFORMATION:
For deployment of the SAN system and expansion enclosures, please refer to the chapter 4, Deployment Types and Cabling in the XCubeSAN
Hardware Owner’s Manual .
Table 8-6 Enclosure and Disk Parameters
Maximum Enclosure Quantity in a System: 10
Maximum Disk Quantity in a System: 446
8.3.1. List Disks
The drop-down lists at the top enable you to select the enclosure from head unit (SAN system) or expansion units (expansion enclosures). The disk properties can be configured by clicking the functions button to the left side of the specific disk drive.
TIP:
Enclosure format: Enclosure ID ([Head Unit | Expansion Unit]: Model
Name). For example: 0 (Head Unit: XS5216), 1 (Expansion Unit: XD5316)
Figure 8-6 List Disks
This table shows the column descriptions.
Table 8-7 Disk Column Descriptions
Slot: The position of the disk drive.
Status: The status of the disk drive:
Online : The disk drive is online.
Rebuilding : The disk drive is being rebuilt.
Transitioning : The disk drive is being migrated or is replaced by another disk when rebuilding occurs.
Scrubbing : The disk drive is being scrubbed.
Check Done : The disk health check has been completed.
Health: The health of the disk drive:
Good : The disk drive is good.
Failed : The disk drive is failed.
Error Alert : S.M.A.R.T. error alerts.
Read Errors : The disk drive has unrecoverable read errors.
Capacity: The capacity of the disk drive.
Disk Type: The type of the disk drive: [ SAS HDD | NL-SAS HDD | SAS SSD | SATA SSD ], [ 12.0Gb/s | 6.0Gb/s | 3.0Gb/s | 1.5Gb/s ]
Usage: The usage of the disk drive:
Free : This disk drive is free for use.
RAID : This disk drive has been set to a pool.
SSD Cache : This SSD has been set to an SSD cache pool.
Dedicated Spare : This disk drive has been set as a dedicated spare of a pool.
Local Spare : This disk drive has been set as a local spare of the enclosure.
Global Spare : This disk drive has been set as a global spare of the whole system.
SSD Spare : This SSD has been set as a dedicated SSD spare of an SSD cache pool.
Pool Name: Which pool the disk drive belongs to.
Manufacturer: The manufacturer of the disk drive.
Model: The model name of the disk drive.
8.3.2. Operations on Disks
The options available in this tab:
Disk Health Check
Click the Disk Health Check button to check the health of the selected disks. Also select the quantity of bad blocks at which the disk health check stops, and then click the OK button to proceed. Disks which are currently in use cannot be checked.
Figure 8-7 Disk Health Check
Disk Check Report
Click Disk Check Report button to download the disk check report. It’s available after executing Disk Health Check.
Set Free Disk
Click ▼ -> Set Free Disk to make the selected disk drive be free for use.
Set Spare Disk
Click ▼ -> Set Global Spare to set the selected disk drive as a global spare for all pools, whether located in the head unit or in an expansion unit.
Click ▼ -> Set Local Spare to set the selected disk drive as a local spare for the pools located in the same enclosure. The enclosure is either the head unit or one of the expansion units.
Click ▼ -> Set Dedicated Spare to set the disk drive to dedicated spare of the selected pool.
For more information about hot spares, please refer to the chapter 8.2.4, Hot Spares section
and the chapter 14.2, Rebuild section in the Troubleshooting chapter. Here is an example of how to set a spare disk to a dedicated pool.
1. Select a free disk, and then click ▼ -> Set Dedicated Spare.
Figure 8-8 Set Dedicated Spare
2. Select the pool to which the disk drive will be dedicated as a spare, and then click the OK button.
Disk Scrub and Clear Disk Read Error
Click ▼ -> Disk Scrub to scrub the disk drive. It’s not available when the disk drive is in use.
Click ▼ -> Clear Disk Read Error to clean the read error of the disk drive and reset the failed status.
Update Disk Firmware
Click ▼ -> Update Disk Firmware to upgrade the firmware of the disk drive.
Figure 8-9 Update Disk Firmware
Turn on Disk LED
Click ▼ -> Turn on Disk LED to turn on the indication LED of the disk drive.
Turn off Disk LED
Click ▼ -> Turn off Disk LED to turn off the indication LED of the disk drive.
S.M.A.R.T.
Click ▼ -> S.M.A.R.T. to show the S.M.A.R.T. information of the disk drive. For more information about S.M.A.R.T., please refer to the chapter 8.3.3, S.M.A.R.T. section.
More Information of the Disk
Click ▼ -> More Information to show the detail information of the disk drive.
Figure 8-10 More Information of Disk Drive
8.3.3. S.M.A.R.T.
S.M.A.R.T. (Self-Monitoring Analysis and Reporting Technology) is a diagnostic tool for disk drives to deliver warning of drive failures in advance. It provides users a chance to take actions before a possible drive failure.
S.M.A.R.T. measures many attributes of disk drives all the time and inspects the properties of disk drives which are close to being out of tolerance. The advanced notice of possible disk drive failure will allow users to back up the data of disk drive or replace the disk drive.
This is much better than a disk drive crash when it is writing data or rebuilding a failed disk drive.
Select a disk drive and click ▼ -> S.M.A.R.T. to show the S.M.A.R.T. information of the disk drive. The number is the current value. Please refer to the product specification of that disk drive for details.
Figure 8-11 S.M.A.R.T. Attributes of Disk Drive
8.4. Configure Thick Provisioning Pools
Select the Pools function submenu to create, modify, delete, or view the status of the pools.
We will describe thick provisioning pools in the following sections, and keep thin provisioning technology in the Thin Provisioning chapter and auto tiering in the Auto Tiering chapter.
Figure 8-12 Pools Function Submenu
8.4.1. Create a Thick Provisioning Pool
Here is an example of creating a thick provisioning pool with 3 disks configured in RAID 5.
1. Select the Pools function submenu, click the Create Pool button. It will scan available disks first.
TIP:
It may take 20 ~ 30 seconds to scan disks if your system has more than
200 disk drives. Please wait patiently.
Figure 8-13 Create a Thick Provision Pool Step 1
2. Select the Pool Type as Thick Provisioning.
3. Enter a Pool Name for the pool. Maximum length of the pool name is 16 characters.
Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
4. Select a Preferred Controller from the drop-down list. The backend I/O resources in this pool will be processed by the preferred controller which you specified. This option is available when dual controllers are installed.
5. Click the Next button to continue.
Figure 8-14 Create a Thick Provision Pool Step 2
6. Please select disks for the pool. The maximum quantity of disks is 64. Select an Enclosure from the drop-down list to select disks from expansion enclosures.
7. Click the Next button to continue.
Figure 8-15 Create a Thick Provision Pool Step 3
8. Select a RAID Level from the drop-down list, which lists only the RAID levels available for the current disk selection.
9. Click the Next button to continue.
Figure 8-16 Create a Thick Provision Pool Step 4
10. Disk properties can also be configured optionally in this step:
。 Enable Disk Write Cache: Check to enable the write cache option of the disks. Enabling disk write cache improves write I/O performance but carries a risk of losing data on power failure.
。 Enable Disk Read-ahead: Check to enable the read-ahead function of the disks. The system will preload data into the disk buffer based on previously retrieved data. This feature efficiently improves performance when the retrieved data is sequential.
。 Enable Disk Command Queuing: Check to enable the command queue function of disks. Send multiple commands to a disk at once to improve performance.
。 Enable Disk Standby: Check to enable the auto spin down function of disks. The disks will be spun down for power saving when they are idle for the period of time specified.
11. Click the Next button to continue.
Figure 8-17 Create a Thick Provision Pool Step 5
12. After confirmation at the summary page, click the Finish button to create the pool.
Figure 8-18 A Thick Provisioning Pool is Created
13. A pool has been created. If necessary, click the Create Pool button again to create more.
8.4.2. List Thick Provisioning Pools
Select one of the pools to display its related disk drives. The pool properties can be configured by clicking the functions button ▼ to the left side of the specific pool.
Figure 8-19 List Thick Provisioning Pools
This table shows the column descriptions.
Table 8-8 Pool Column Descriptions
Pool Name: The pool name.
Status: The status of the pool:
Online : The pool is online.
Offline : The pool is offline.
Rebuilding : The pool is being rebuilt.
Migrating : The pool is being migrated.
Relocating : The pool is being relocated.
Health: The health of the pool:
Good : The pool is good.
Failed : The pool is failed.
Degraded : The pool is not healthy and not complete. The reason could be missing or failed disks.
Total: Total capacity of the pool.
Free: Free capacity of the pool.
Available: Available capacity of the pool.
Thin Provisioning: The status of thin provisioning: Disabled or Enabled.
Volumes: The quantity of volumes in the pool.
RAID: The RAID level of the pool.
Current Controller: The current running controller of the pool. (This option is only visible when dual controllers are installed.)
Table 8-9 Disk Column Descriptions
Enclosure ID: The enclosure ID.
Slot: The position of the disk drive.
Status: The status of the disk drive:
Online : The disk drive is online.
Missing : The disk drive is missing in the pool.
Rebuilding : The disk drive is being rebuilt.
Transitioning : The disk drive is being migrated or is replaced by another disk when rebuilding occurs.
Scrubbing : The disk drive is being scrubbed.
Check Done : The disk health check has been completed.
Health: The health of the disk drive:
Good : The disk drive is good.
Failed : The disk drive is failed.
Error Alert : S.M.A.R.T. error alerts.
Read Errors : The disk drive has unrecoverable read errors.
Capacity: The capacity of the disk drive.
Disk Type: The type of the disk drive: [ SAS HDD | NL-SAS HDD | SAS SSD | SATA SSD ], [ 12.0Gb/s | 6.0Gb/s | 3.0Gb/s | 1.5Gb/s ]
Manufacturer: The manufacturer of the disk drive.
Model: The model name of the disk drive.
8.4.3. Operations on Thick Provisioning Pools
The options available in this tab:
Activate and Deactivate the Pool
Click ▼ -> Activate/Deactivate. These options are usually used for online disk roaming. Deactivate can be executed when the pool status is online; conversely, Activate can be executed when the pool status is offline. For more information, please refer to the chapter 14.4, Disk Roaming section.
Change Disk Properties of the Pool
Click ▼ -> Change Disk Properties to change disk properties of the pool.
Figure 8-20 Change Disk Properties
Change Thin Provisioning Policy of the Pool
Click ▼ -> Change Thin Provisioning Policy to change policy of the thin provisioning pool.
For more information, please refer to the chapter 9, Thin Provisioning .
Change Preferred Controller of the Pool
Click ▼ -> Change Preferred Controller to set the pool ownership to the other controller.
Verify Parity of the Pool
Click ▼ -> Verify Parity which will regenerate parity for the pool. It supports RAID level 3, 5,
6, 30, 50, and 60.
Add a Disk Group into the Pool
If the auto tiering license is enabled, click ▼ -> Add Disk Group to transfer from the thick provisioning pool to auto tiering pool. For more information, please refer to the chapter 11.7,
Transfer to Auto Tiering Pool section.
Figure 8-21 Add Disk Group
CAUTION:
The action of transferring from the thick provisioning pool to auto tiering is irreversible. Consider all possible consequences before making this change.
Migrate a Thick Provisioning Pool
Click ▼ -> Migrate Pool to change the RAID level of a pool or move the member disk drives of the pool to different disks. For more information, please refer to the chapter 8.4.4,
Migrate a Thick Provisioning Pool section.
Figure 8-22 Migrate RAID Level
Delete a Pool
Click ▼ -> Delete to delete the pool. The pool cannot be deleted when there are volumes in the pool.
More Information of the Pool
Click ▼ -> More Information to show the detail information of the pool.
8.4.4. Migrate a Thick Provisioning Pool
The Migrate Pool function changes the pool to a different RAID level or adds member disk drives to the pool for larger capacity.
Figure 8-23 Expand a Thick Provisioning Pool
Usually, a pool is migrated to a higher RAID level for better protection. To perform a migration, the total capacity of the migrated pool must be larger than or equal to that of the original pool. If the RAID level doesn’t change, the migration can also move the member disk drives of the pool to totally different disk drives.
Figure 8-24 Migrate a Thick Provisioning Pool
Here’s an example of migrating a thick provisioning pool from RAID 5 to RAID 6.
1. Select a pool, and then click ▼ -> Migrate Pool.
Figure 8-25 Migrate RAID Level Step 1
2. Change the Pool Name if necessary. Maximum length of the pool name is 16 characters.
Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
3. Select a RAID Level from the drop-down list.
Figure 8-26 Migrate RAID Level Step 2
4. Click the Select Disks button to select disks from either head unit or expansion unit, and click OK to complete the selection. The selected disks are displayed at the Disks Used.
Figure 8-27 Migrate RAID Level Step 3
5. At the confirmation dialog, click the OK button to execute migration.
Figure 8-28 Migrate RAID Level Step 4
6. Migration starts. The status of the Disks, Pools and Volumes are changing. The complete percentage of migration is displayed in the Status.
7. It’s done when the complete percentage reaches 100%.
TIP:
A thin provisioning pool cannot execute migrate or move; it uses the Add Disk Group option to enlarge capacity. For more information, please refer to the chapter 9, Thin Provisioning.
8.4.5. Operation Limitations during Migration
There are some operation limitations when a pool is being migrated. The system will reject the following operations:
Add dedicated spare.
Remove a dedicated spare.
Create a new volume.
Delete a volume.
Extend a volume.
Scrub a volume.
Perform another migration operation.
Scrub entire pool.
Take a snapshot.
Delete a snapshot.
Expose a snapshot.
Rollback to a snapshot.
CAUTION:
Pool migration cannot be executed during rebuilding or volume extension.
8.5. Configuring Volumes
Select the Volumes function submenu to create, modify, delete, or view the status of the volumes. As with pools, we will describe thick provisioning volumes in the following sections, and keep thin provisioning in the Thin Provisioning chapter and auto tiering in the Auto Tiering chapter.
Figure 8-29 Volumes Function Submenu
8.5.1. Create a Volume
Here is an example of creating a volume in a thick provisioning pool.
1. Select the Volumes function submenu, click the Create Volume button.
Figure 8-30 Create a Volume in Thick Provisioning Pool Step 1
2. Enter a Volume Name for the volume. The maximum length of the volume name is 32 characters. Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
3. Select a Pool Name from the drop-down list. It will also display the available capacity of the pool.
4. Enter required Capacity. The unit can be selected from the drop-down list.
5. Select the Volume Type. The options are RAID Volume (for general RAID usage) and
Backup Volume (for the target volume of local clone or remote replication).
6. Click the Next button to continue.
Figure 8-31 Create a Volume in Thick Provisioning Pool Step 2
7. Volume advanced settings can also be configured optionally in this step:
。 Block Size: The options are 512 Bytes to 4,096 Bytes.
。 Priority: The options are High, Medium, and Low. The priority is relative to other volumes. Set it to High if the volume has a lot of I/O.
。 Background I/O Priority: The options are High, Medium, and Low. It influences volume initialization, rebuild, and migration.
。 Erase Volume Data: This option is available when the pool is thick provisioning. It wipes out old data in the volume to prevent the OS from recognizing an old partition. The options are Do Not Erase, Fast Erase (which erases the first 1GB of data in the volume), or Full Disk (which erases the entire volume).
。 Enable Cache Mode (Write-back Cache): Check to enable the cache mode function of the volume. Write-back optimizes the system speed but comes with the risk that data may be inconsistent between the cache and the disks for a short time interval.
。 Enable Video Editing Mode: Check to enable the video editing mode function. It is optimized for video editing usage. Please enable it when your application is in a video editing environment. This option provides a more stable performance figure without high and low peaks, but is slower on average.
。 Enable Read-ahead: Check to enable the read-ahead function of the volume. The system will discern what data will be needed next based on what was just retrieved from disk and then preload this data into the disk's buffer. This feature improves performance when the data being retrieved is sequential.
。 Enable Fast RAID Rebuild: This option is available when the pool is created in a protection RAID level (e.g., RAID 1, 3, 5, 6, 0+1, 10, 30, 50, and 60). For more information, please refer to the chapter 8.5.5, Fast RAID Rebuild section.
8. Click the Next button to continue.
Figure 8-32 Create a Volume in Thick Provisioning Pool Step 3
9. After confirmation at the summary page, click the Finish button to create the volume.
10. The volume has been created. If the pool uses a protection RAID level (e.g., RAID 1, 3, 5, 6, 0+1, 10, 30, 50, or 60), the volume will be initialized.
Figure 8-33 A Volume in Thick Provisioning Pool is Created and Initializing
11. A volume has been created. If necessary, click the Create Volume button to create another.
TIP:
SANOS supports instant RAID volume availability. The volume can be used immediately when it is initializing or rebuilding.
8.5.2. List Volumes
Select one of the volumes to display the related LUN Mappings if the volume is mapped. The volume properties can be configured by clicking the functions button to the left side of the specific volume.
Figure 8-34 List Volumes
This table shows the column descriptions.
Table 8-10 Volume Column Descriptions
Volume Name: The volume name.
Status: The status of the volume:
Online : The volume is online.
Offline : The volume is offline.
Erasing : The volume is being erased if the Erase Volume Data option is set.
Initiating : The volume is being initialized.
Rebuilding : The volume is being rebuilt.
Migrating : The volume is being migrated.
Rollback : The volume is being rolled back.
Parity Checking : The volume is undergoing a parity check.
Relocating : The volume is being relocated.
Health: The health of the volume:
Optimal : The volume is working well and there is no failed disk in the pool.
Failed : The pool of the volume has more failed disks than its RAID level can recover, resulting in data loss.
Degraded : At least one disk from the pool of the volume is failed or plugged out.
Partially optimal : The volume has experienced recoverable read errors. After passing a parity check, the health will become Optimal.
Capacity: Total capacity of the volume.
SSD Cache: The status of the SSD cache:
Enabled : SSD cache is enabled for the volume.
Disabled : SSD cache is disabled for the volume.
Snapshot Space: Used snapshot space / total snapshot space. The first capacity is the currently used snapshot space, and the second capacity is the reserved total snapshot space.
Snapshots: The quantity of snapshots in the volume.
Clone: The target name of the clone volume.
Write: The access right of the volume:
WT : Write Through.
WB : Write Back.
RO : Read Only.
Pool Name: Which pool the volume belongs to.
8.5.3. Operations on Volumes
The options available in this tab:
Setup Local Cloning Options
Click the Local Cloning Options button to set the clone options. For more information,
please refer to the chapter 12.2.4, Local Cloning Options section in the Data Backup chapter.
Figure 8-35 Local Cloning Options
LUN Mapping Operations
Click ▼ -> Map LUN to map a logical unit number to the volume. For more information
about map and unmap LUN, please refer to the chapter 8.6, Configure LUN Mappings
section.
Figure 8-36 Map a LUN of iSCSI Connectivity
Click ▼ -> Unmap LUNs to unmap logical unit numbers from the volume.
Figure 8-37 Unmap LUNs
Snapshot Operations
Click ▼ -> Set Snapshot Space to set snapshot space for preparing to take snapshots. For
more information about snapshot, please refer to the chapter 12.1, Managing Snapshots
section in the Data Backup chapter.
Figure 8-38 Set Snapshot Space
Click ▼ -> Take Snapshot to take a snapshot of the volume.
Click ▼ -> Schedule Snapshots to set the snapshots by schedule.
Click ▼ -> List Snapshots to list all snapshots of the volume.
Click ▼ -> Cleanup Snapshots to clean all snapshots of the volume and release the snapshot space.
Local Clone Operations
Click ▼ -> Create Local Clone to set the target volume for clone. For more information
about local clone, please refer to the chapter 12.2, Managing Local Clones section in the
Data Backup chapter.
Figure 8-39 Create Local Clone
Click ▼ -> Clear Clone to clear the clone.
Click ▼ -> Start Clone to start the clone.
Click ▼ -> Stop Clone to stop the clone.
Click ▼ -> Schedule Clone to set the clone function by schedule.
Click ▼ -> Change Replication Options to change the clone to Replication relationship. For
more information about remote replication, please refer to the chapter 12.3, Managing
Remote Replications section in the Data Backup chapter.
SSD Cache Options
Click ▼ -> Enable SSD Cache to enable SSD cache for the volume. For more information
about SSD cache, please refer to the chapter 10, SSD Cache chapter.
Click ▼ -> Disable SSD Cache to disable SSD cache for the volume.
Change Volume Properties
Click ▼ -> Change Volume Properties to change the volume properties of the volume.
Figure 8-40 Change Volume Properties
Reclaim Space with Thin Provisioning Pool
Click ▼ -> Space Reclamation to reclaim space from the pool when the volume is in a thin provisioning pool. For more information about space reclamation, please refer to the
chapter 9.2.1, Space Reclamation section in the Thin Provisioning chapter.
Verify Parity of the Volume
Click ▼ -> Verify Parity to execute parity check for the volume which is created by the pool in parity RAID level (e.g., RAID 3, 5, 6, 30, 50, and 60). This volume can either be verified and repaired or only verified for data inconsistencies. This process usually takes a long time to complete.
Figure 8-41 Verify Parity
Extend Volume Capacity
Click ▼ -> Extend Volume to extend the volume capacity. For more information, please refer
to the chapter 8.5.4, Extend Volume Capacity section.
Delete Volume
Click ▼ -> Delete to delete the volume. The related LUN mappings will also be deleted.
More Information of the Volume
Click ▼ -> More Information to show the detail information of the volume.
8.5.4. Extend Volume Capacity
The Extend Volume function extends the capacity of the volume if there is enough free space.
Here’s how to extend a volume:
1. Select the Volumes function submenu, select a volume, and then click ▼ -> Extend
Volume.
2. Change the volume capacity. The capacity must be larger than the current, and then click the OK button to start extension.
Figure 8-42 Extend Volume Capacity
3. If the volume needs initialization, it will display the status Initiating and the complete percentage of initialization in Status.
4. It’s done when the complete percentage reaches 100%.
TIP:
The extension capacity must be larger than the current capacity.
CAUTION:
Extension cannot be executed during rebuilding or migration.
8.5.5. Fast RAID Rebuild
When executing a rebuild, the Fast RAID Rebuild feature skips any partition of the volume where no write changes have occurred; it focuses only on the parts that have changed.
This mechanism may reduce the amount of time needed for the rebuild task. It also reduces the risk of RAID failure because it shortens the time required for the RAID status to return from degraded mode to healthy. At the same time, it frees up CPU resources more quickly to be available for other I/O demands. For more information, please refer to the Fast Rebuild White Paper.
Enable Fast RAID Rebuild
Here is an example of enabling the Fast RAID Rebuild function when creating a volume.
Figure 8-43 Enable Fast RAID Rebuild When Creating a Volume
Fast RAID Rebuild Notices
Here are some notices about Fast RAID Rebuild.
Only a thick provisioning pool supports enabling/disabling this feature; a thin provisioning pool includes this feature and it is enabled by default.
When a rebuild occurs in a fast rebuild volume, clean partitions are not rebuilt since no data is saved there. Although clean partitions are never rebuilt, their health status is good.
If all partitions of the fast rebuild volume are clean, then no rebuild happens and no event is sent.
The RAID stack cannot use the optimized algorithm to compute parities of a partition which is not rebuilt. Thus, random write performance in a clean partition will be worse.
CAUTION:
Disabling Fast RAID Rebuild function is recommended when the access pattern to the volume is random write.
8.6. Configure LUN Mappings
Select the LUN Mappings function submenu to map, unmap or view the status of LUN
(Logical Unit Number) for each volume.
Figure 8-44 LUN Mappings Function Submenu
8.6.1. Map a LUN of iSCSI Connectivity
Here’s an example of mapping a LUN of iSCSI connectivity.
1. Select the LUN Mappings function submenu, click the Map LUN button.
Figure 8-45 Map a LUN of iSCSI Connectivity
2. Select the Protocol as iSCSI.
3. Select a Volume from the drop-down list.
4. Enter the Allowed Hosts, separated by semicolons (;), or click the Add Host button to add them one by one. Enter the wildcard (*) to allow access by all hosts.
5. Select a Target from the drop-down list.
6. Select a LUN from the drop-down list.
7. Select a Permission level, normally set it as Read-write.
8. Click the OK button to map a LUN.
The matching rules of access control are applied in the order in which the LUNs were created; a LUN created earlier takes priority in the matching. For example, there are two LUN rules which are set to the same volume:
1. Allowed Hosts is set to *, LUN is LUN 0
2. Allowed Hosts is set to iqn.host1, LUN is LUN 1
The host iqn.host2 can log in successfully because it matches rule 1.
The wildcards * and ? are allowed in this field.
The wildcard * can replace any word.
The wildcard ? can replace only one character.
For example:
iqn.host? -> iqn.host1 and iqn.host2 are accepted.
iqn.host* -> iqn.host1 and iqn.host12345 are accepted.
This field cannot accept commas, so "iqn.host1, iqn.host2" is treated as one long string, not two IQNs.
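A minimal sketch of how the * and ? wildcard rules above behave, using Python's fnmatch for illustration; the helper is not the SANOS matching engine, and the IQNs are the examples from the text.

```python
from fnmatch import fnmatchcase

def host_allowed(initiator_iqn: str, allowed_hosts: str) -> bool:
    """Check an initiator IQN against a semicolon-separated Allowed Hosts field."""
    patterns = [p.strip() for p in allowed_hosts.split(";") if p.strip()]
    return any(fnmatchcase(initiator_iqn, p) for p in patterns)

print(host_allowed("iqn.host1", "iqn.host?"))      # True  (? matches one character)
print(host_allowed("iqn.host12345", "iqn.host?"))  # False
print(host_allowed("iqn.host12345", "iqn.host*"))  # True  (* matches any run)
```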
8.6.2. Map a LUN of Fibre Channel Connectivity
Here’s an example of mapping a LUN of fibre channel connectivity.
1. Select the LUN Mappings function submenu, click the Map LUN button.
Figure 8-46 Map a LUN of FC Connectivity
2. Select the Protocol as FCP (Fibre Channel Protocol).
3. Select a Volume from the drop-down list.
4. Enter the Allowed Hosts, separated by semicolons (;), or click the Add Host button to add them one by one. Enter the wildcard (*) to allow access by all hosts.
5. Select a Target from the drop-down list.
6. Select a LUN from the drop-down list.
7. Select a Permission level, normally set it as Read-write.
8. Set Link Reset to Yes if the fibre channel link needs to be reset when it is connected.
9. Click the OK button to map a LUN.
8.6.3. List LUN Mappings
List all LUN mappings in this page. The LUN properties can be configured by clicking the functions button to the left side of the specific LUN.
Figure 8-47 List LUN Mappings
This table shows the column descriptions.
Table 8-11 LUN Mapping Column Descriptions
Allowed Hosts: The target of FC / iSCSI for access control, or a wildcard (*) for access by all hosts.
Target: The target number.
LUN: The logical unit number which is mapped.
Permission: The permission level: Read-write or Read-only.
Sessions: The quantity of active iSCSI connections linked to the logical unit. Shows N/A if the protocol is FCP.
Volume: The name of the volume assigned to this LUN.
8.6.4. Operations on LUN Mappings
The options are available on this tab:
Unmap a LUN
Click ▼ -> Unmap LUN to unmap a logical unit number from the volume.
Active Sessions
Click ▼ -> Active Sessions to show the active sessions of iSCSI connection. This option is available when the protocol of LUN mapping is iSCSI.
8.7. Connect by Host Initiator
After mapping a LUN to a volume, the host can connect to the volume through its initiator program. We provide several documents on host connections for reference. The documents are available on the website:
How to Configure iSCSI Initiator in Microsoft Windows
How to Configure iSCSI Initiator in ESXi 6.x
Implement iSCSI Multipath in RHEL 6.5
8.8. Disk Roaming
Disks can be re-sequenced in the same system, or all member disks of a pool can be moved from system-1 to system-2. This is called disk roaming. The system can execute disk roaming online. Please follow these steps.
1. Select the Pools function submenu, select a pool, and then click ▼ -> Deactivate.
2. Click the OK button to apply. The Status changes to Offline.
3. Move all member disks of the pool to another system.
4. In the Volumes tab, select the pool, and then click ▼ -> Activate.
5. Click the OK button to apply. The Status changes to Online.
Disk roaming has some constraints, as described in the following.
1. Check the firmware versions of the two systems first. Both systems should have the same firmware version, or the firmware version of system-2 should be newer.
2. All physical disks of the pool should be moved from system-1 to system-2 together. The configuration of both the pool and its volumes will be kept, but the LUN configuration will be cleared in order to avoid conflicts with the current settings of system-2.
INFORMATION:
The XCubeSAN series does NOT support disk roaming from AegisSAN LX, AegisSAN Q500, or AegisSAN V100.
9. Thin Provisioning
This chapter describes an overview and operations of thin provisioning.
9.1. Overview
Nowadays, thin provisioning is a hot topic in IT management and storage industry circles. To understand thin provisioning, it helps to start with the opposite term – thick or fat provisioning, which is the traditional way IT administrators allocate storage space to each logical volume that is used by an application or a group of users.
When it comes to deciding how much space a logical volume will require over three years, or over the lifetime of an application, it is very hard to make the prediction correctly and precisely. To avoid the complexity of frequently adding space to volumes, IT administrators often allocate more storage space to each logical volume than it initially needs. This is why it is called thick or fat provisioning. Usually it turns out that a lot of free disk space sits around idle. This stranded capacity is wasted, which translates into wasted investment in drives, wasted energy, and general inefficiency. Various studies indicate that as much as 75% of the storage capacity in small and medium enterprises or large data centers is allocated but unused. This is where thin provisioning comes in.
Figure 9-1 Traditional Thick Provisioning
Thin provisioning is sometimes known as just-in-time capacity or over-allocation. As the term suggests, it provides storage space dynamically on demand. Thin provisioning presents more storage space to the hosts or servers connecting to the storage system than is actually physically available on the storage system. To put it another way, thin provisioning allocates storage space that may or may not exist. The whole idea is another form of virtualization. Virtualization is always about a logical pool of physical assets and provides better utilization of those assets. Here the virtualization mechanism behind thin provisioning is the storage pool. The capacity of the storage pool is shared by all volumes.
When write requests come in, space is drawn dynamically from this storage pool to meet the needs.
Figure 9-2 Thin Provisioning
9.1.1. Thick Provisioning vs. Thin Provisioning
The efficiency of thin or thick provisioning is a function of the use case, not of the technology. Thick provisioning is typically more efficient when the amount of resource used very closely approximates to the amount of resource allocated. Thin provisioning offers more efficiency where the amount of resource used is much smaller than allocated, so that the benefit of providing only the resource needed exceeds the cost of the virtualization technology used.
Figure 9-3 Benefits of Thin Provisioning
9.2. Theory of Operation
Thin provisioning, in a shared-storage environment, provides a method for optimizing utilization of available storage. It relies on on-demand allocation of blocks of data rather than the traditional method of allocating all the blocks up front. This methodology eliminates almost all whitespace and helps avoid the poor utilization rates that occur in the traditional storage allocation method, where large pools of storage capacity are allocated to individual servers but remain unused (not written to). This traditional model is often called "fat" or "thick" provisioning.
With thin provisioning, storage capacity utilization efficiency can be automatically driven up towards 100% with very little administrative overhead. Organizations can purchase less storage capacity up front, defer storage capacity upgrades in line with actual business growth, and save the operating costs (electricity and floor space) associated with keeping unused disk capacity spinning.
9.2.1. Thin Provisioning Architecture
A thin provisioning pool is a collection of disk groups which contain disk drives and which together provide the capacity for volumes. Volumes are then allocated out of the storage pool and are mapped to LUNs which can be accessed by a host system. The following is the storage architecture of a thin provisioning pool.
Figure 9-4 Storage Architecture of Thin Provisioning Pool
Disk groups which contain disk drives can be added to a thin provisioning pool at any time to increase the capacity of the pool. For simplicity of use and better performance, every disk group must have the same quantity of disk drives. A thin provisioning pool can have up to 32 disk groups, with each disk group containing up to 8 disk drives. The maximum capacity of each disk group is 64TB, and the maximum capacity of a thin provisioning pool is 256TB.
Table 9-1 Thin Provisioning Pool Parameters
Item                                                      Value
Maximum Disk Group Quantity in a Thin Provisioning Pool   32
Maximum Disk Drive Quantity in a Disk Group               8
Maximum Disk Drive Quantity in a Thin Provisioning Pool   256 (= 32 x 8)
Maximum Capacity of a Disk Group                          64TB
Maximum Thin Provisioning Pool Capacity per Pool          256TB
Maximum Thin Provisioning Pool Capacity per System        1,024TB
Provisioning Granularity                                  1GB
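As a quick sanity check of the limits in Table 9-1, the following Python arithmetic (illustrative only) derives the drive count per pool and shows how the per-pool and per-system capacity caps relate:

    # Thin provisioning pool limits taken from Table 9-1.
    MAX_DISK_GROUPS_PER_POOL = 32
    MAX_DISKS_PER_GROUP = 8
    MAX_POOL_CAPACITY_TB = 256
    MAX_SYSTEM_CAPACITY_TB = 1024

    print(MAX_DISK_GROUPS_PER_POOL * MAX_DISKS_PER_GROUP)   # 256 drives per pool
    # The pool is capped at 256TB regardless of raw disk group capacity, and a
    # system can hold up to 1,024TB of thin provisioning pools in total,
    # i.e. at least four maximum-size pools.
    print(MAX_SYSTEM_CAPACITY_TB // MAX_POOL_CAPACITY_TB)   # 4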
9.2.2. Space Reclamation
Previously allocated but currently unused volume space can be reclaimed in disk pools after all data within a granularity unit has been deleted from the volume by hosts. Data must be permanently deleted to be considered unused space. The unused space is then returned to the pool.
Similar to the manual operation of reclaiming volume space is the auto-reclamation process, which continually and automatically reclaims space that is "not in use" or "zeroed-out" and returns it to the pool without user intervention. The auto-reclamation process has some benefits over running the manual operation. For more information about the operation, refer to the chapter 9.4.1, Create a Volume in a Thin Provisioning Pool section and the chapter 8.5.3, Operations on Volumes section.
9.3. Configure Thin Provisioning Pools
This section will describe the operations of configuring thin provisioning pool.
Figure 9-5 Pools Function Submenu
9.3.1. Create a Thin Provisioning Pool
Here is an example of creating a thin provisioning pool with 3 disks configured in RAID 5. When a thin provisioning pool is first created, it contains one disk group, and the maximum quantity of disks in a disk group is 8.
1. Select the Pools function submenu, click the Create Pool button. It will scan available disks first.
TIP:
It may take 20 ~ 30 seconds to scan disks if your system has more than
200 disk drives. Please wait patiently.
Figure 9-6 Create a Thin Provisioning Pool Step 1
2. Select the Pool Type as Thin Provisioning.
3. Enter a Pool Name for the pool. The maximum length of the pool name is 16 characters.
Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
4. Select a Preferred Controller from the drop-down list. The backend I/O resources in this pool will be processed by the preferred controller which you specified. This option is available when dual controllers are installed.
5. Click the Next button to continue.
Figure 9-7 Create a Thin Provisioning Pool Step 2
6. Please select disks for a disk group in the pool. The maximum quantity of disks in a disk group is 8. Select an Enclosure from the drop-down list to select disks from expansion enclosures.
7. Click the Next button to continue.
Figure 9-8 Create a Thin Provisioning Pool Step 3
8. Select a RAID Level from the drop-down list, which lists only the RAID levels available for the current disk selection.
9. Click the Next button to continue.
Figure 9-9 Create a Thin Provisioning Pool Step 4
10. Disk properties can also be configured optionally in this step:
- Enable Disk Write Cache: Check to enable the write cache option of disks. Enabling disk write cache will improve write I/O performance but risks losing data upon power failure.
- Enable Disk Read-ahead: Check to enable the read-ahead function of disks. The system will preload data into the disk buffer based on previously retrieved data. This feature efficiently improves performance when sequential data is retrieved.
- Enable Disk Command Queuing: Check to enable the command queuing function of disks. Sending multiple commands to a disk at once improves performance.
- Enable Disk Standby: Check to enable the auto spin-down function of disks. The disks will be spun down for power saving when they are idle for the specified period of time.
11. Click the Next button to continue.
Figure 9-10 Create a Thin Provisioning Pool Step 5
12. After confirmation of info on the summary page, click the Finish button to create a pool.
Figure 9-11 A Thin Provisioning Pool is Created
13. A pool has been created. If necessary, click the Create Pool button again to create others.
9.3.2. List Thin Provisioning Pools
Select one of the pools to display its related disk groups. Similarly, select one of the disk groups to display its related disk drives. The pool properties can be configured by clicking the functions button ▼ to the left side of the specific pool.
Figure 9-12 List Thin Provisioning Pools
This table shows the column descriptions.
Table 9-2 Pool Column Descriptions
Column Name         Description
Pool Name           The pool name.
Status              The status of the pool:
                    Online: The pool is online.
                    Offline: The pool is offline.
                    Rebuilding: The pool is being rebuilt.
                    Migrating: The pool is being migrated.
                    Relocating: The pool is being relocated.
Health              The health of the pool:
                    Good: The pool is good.
                    Failed: The pool has failed.
                    Degraded: The pool is not healthy and not complete. The reason could be missing or failed disks.
Total               Total capacity of the pool.
Free                Free capacity of the pool.
Available           Available capacity of the pool.
Thin Provisioning   The status of thin provisioning: Disabled or Enabled.
Volumes             The quantity of volumes in the pool.
RAID                The RAID level of the pool.
Current Controller  The current running controller of the pool. (This column is only visible when dual controllers are installed.)
Table 9-3 Disk Group Column Descriptions
Column Name  Description
No.          The number of the disk group.
Status       The status of the disk group:
             Online: The disk group is online.
             Offline: The disk group is offline.
             Rebuilding: The disk group is being rebuilt.
             Migrating: The disk group is being migrated.
             Relocating: The disk group is being relocated.
Health       The health of the disk group:
             Good: The disk group is good.
             Failed: The disk group has failed.
             Degraded: The disk group is not healthy and not complete. The reason could be missing or failed disks.
Total        Total capacity of the disk group.
Free         Free capacity of the disk group.
Disks Used   The quantity of disk drives in the disk group.
Table 9-4 Disk Column Descriptions
Column Name   Description
Enclosure ID  The enclosure ID.
Slot          The position of the disk drive.
Status        The status of the disk drive:
              Online: The disk drive is online.
              Missing: The disk drive is missing in the pool.
              Rebuilding: The disk drive is being rebuilt.
              Transitioning: The disk drive is being migrated or is replaced by another disk when rebuilding occurs.
              Scrubbing: The disk drive is being scrubbed.
              Check Done: The health of the disk drive has been checked.
Health        The health of the disk drive:
              Good: The disk drive is good.
              Failed: The disk drive has failed.
              Error Alert: S.M.A.R.T. error alerts.
              Read Errors: The disk drive has unrecoverable read errors.
Capacity      The capacity of the disk drive.
Disk Type     The type of the disk drive:
              [ SAS HDD | NL-SAS HDD | SAS SSD | SATA SSD ]
              [ 12.0Gb/s | 6.0Gb/s | 3.0Gb/s | 1.5Gb/s ]
Manufacturer  The manufacturer of the disk drive.
Model         The model name of the disk drive.
9.3.3. Operations on Thin Provisioning Pools
Most operations are described in the Configuring Storage Pools section. For more information, please refer to the chapter 8.4.3, Operations on Thick Provisioning Pools section. The operations specific to thin provisioning are described in the following.
Change Thin Provisioning Policy of the Pool
Click ▼ -> Change Thin Provisioning Policy in pools to change the policies of the thin provisioning pool. There are 6 threshold percentage levels with default values defined. The event log level and the action to take can be changed for when the usage of the pool capacity reaches each threshold.
Figure 9-13 Change Thin Provisioning Policy
Table 9-5 Thin Provisioning Policy Column Descriptions
Column Name  Description
Threshold    The threshold of the pool.
Level        Define the event log level when the usage of the pool reaches the threshold. The options are:
             Information
             Warning
             Error
Action       Action to take on the system when the usage of the pool reaches the threshold. The options are:
             Take no Action
             Reclaim Space
             Delete Snapshots
             De-activate Pool
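As a rough illustration of how such a policy table could be evaluated (the threshold percentages and the level/action pairs below are placeholders chosen for the example, not the SANOS defaults), consider the following Python sketch:

    # Hypothetical policy table: (threshold %, event log level, action).
    POLICIES = [
        (60, "Information", "Take no Action"),
        (70, "Information", "Take no Action"),
        (80, "Warning", "Reclaim Space"),
        (85, "Warning", "Delete Snapshots"),
        (90, "Error", "Delete Snapshots"),
        (95, "Error", "De-activate Pool"),
    ]

    def check_pool_usage(used_gb, total_gb):
        """Return the pool usage and every (level, action) entry it triggers."""
        usage_percent = 100.0 * used_gb / total_gb
        triggered = [(level, action) for threshold, level, action in POLICIES
                     if usage_percent >= threshold]
        return usage_percent, triggered

    usage, events = check_pool_usage(used_gb=8800, total_gb=10240)
    print(round(usage, 1), events[-1] if events else "below all thresholds")
    # 85.9 ('Warning', 'Delete Snapshots')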
Add a Disk Group into the Pool
If the auto tiering license is enabled, click ▼ -> Add Disk Group to add a disk group and transfer the thin provisioning pool to an auto tiering pool. For more information, please refer to the chapter 11.7, Transfer to Auto Tiering Pool section.
Figure 9-14 Add Disk Group
CAUTION:
The action of transferring from the thin provisioning pool to auto tiering is irreversible. Carefully consider the consequences before making this change.
Remove Disk Group
Click ▼ -> Remove in disk group to remove the selected disk group. The option is grayed out if there is only one disk group.
Move Disk Group Member Disks
Click ▼ -> Move Disk Group in a disk group to move the member disks of the disk group to other disk drives. Select the new disks and then click the OK button.
Figure 9-15 Move Disk Group
9.3.4. Add a Disk Group in a Thin Provisioning Pool
Here is an example of adding a disk group in thin provisioning pool.
1. Select a pool, click ▼ -> Add Disk Group to add a disk group in a thin provisioning pool.
Figure 9-16 Add Disk Group
2. Please select disks to add as a disk group in the pool. The quantity of selected disks must be the same as the quantity of disks in the current disk groups. Select an Enclosure from the drop-down list to select disks from the expansion enclosures.
3. Click the OK button to add a disk group.
Figure 9-17 List Pool after Adding a Disk Group
9.4. Configure Volumes
This section will describe the operations of configuring volume in thin provisioning pool.
9.4.1. Create a Volume in a Thin Provisioning Pool
Here is an example of creating a volume of thin provisioning pool.
1. Select the Volumes function submenu, click the Create Volume button.
Figure 9-18 Create a Volume in Thin Provisioning Pool Step 1
2. Enter a Volume Name for the volume. The maximum length of the volume name is 32 characters. Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
3. Select a Pool Name from the drop-down list. It will also display the available capacity of the pool.
4. Enter required Capacity. The unit can be selected from the drop-down list.
5. Select Volume Type. The options are RAID Volume (for general RAID usage) and
Backup Volume (for the target volume of local clone or remote replication).
6. Click the Next button to continue.
Figure 9-19 Create a Volume in Thin Provisioning Pool Step 2
7. Volume advanced settings can also be configured optionally in this step:
- Block Size: The options are 512 Bytes to 4,096 Bytes.
- Priority: The options are High, Medium, and Low. The priority is relative to other volumes. Set it to High if the volume has a lot of I/O.
- Background I/O Priority: The options are High, Medium, and Low. It influences volume initialization, rebuild, and migration.
- Enable Cache Mode (Write-back Cache): Check to enable the cache mode function of the volume. Write-back optimizes system speed but carries the risk that data may be inconsistent between cache and disks for a short time interval.
- Enable Video Editing Mode: Check to enable the video editing mode function. It is optimized for video editing usage. Please enable it when your application runs in a video editing environment. This option provides a more stable performance figure without high and low peaks, but is slower on average.
- Enable Read-ahead: Check to enable the read-ahead function of the volume. The system will discern what data will be needed next based on what was just retrieved from disk and then preload this data into the disk's buffer. This feature improves performance when the data being retrieved is sequential.
- Enable Space Reclamation: Check to enable the space reclamation function of the volume when the pool is thin provisioned. For more information about space reclamation, please refer to the chapter 9.2.2, Space Reclamation section.
8. Click the Next button to continue.
Figure 9-20 Create a Volume in Thin Provisioning Pool Step 3
9. After confirmation at summary page, click the Finish button to create a volume.
10. The volume has been created. It will be initialized if its RAID level provides protection (e.g., RAID 1, 3, 5, 6, 0+1, 10, 30, 50, and 60).
Figure 9-21 A Volume in Thin Provisioning Pool is Created
11. A volume has been created. If necessary, click the Create Volume button again to create others.
TIP:
SANOS supports instant RAID volume availability. The volume can be used immediately when it is initializing or rebuilding.
9.4.2. List Volumes and Operations on Volumes
Most operations are described in the chapter 8.5, Configure Volumes section. For more information about listing volumes, please refer to the chapter 8.5.2, List Volumes section. For more information about volume operations, please refer to the chapter 8.5.3, Operations on Volumes section. The operations specific to thin provisioning are described below.
Change Volume Properties
Click ▼ -> Change Volume Properties to change the volume properties of the volume.
Figure 9-22 Change Volume Properties
Reclaim Space with Thin Provisioning Pool
Click ▼ -> Space Reclamation to reclaim space from the volume when the volume is in a thin provisioning pool. For more information about space reclamation, please refer to the chapter 9.2.2, Space Reclamation section.
9.5. Configure LUN Mappings and Connect by Host Initiator
Next, you can configure LUN mappings and connect by host initiator. For more information about LUN mapping, please refer to the chapter 8.6, Configure LUN Mappings section. For more information about connecting by host initiator, please refer to the chapter 8.7, Connect by Host Initiator section.
10. SSD Cache
This chapter describes an overview and operations of SSD cache. Two cache types are provided: SSD read cache and SSD read-write cache.
INFORMATION:
SSD cache 2.0 with read and write cache is available in SANOS firmware
1.1.0.
10.1. Overview
Traditionally, data has been stored on rotating media, or HDDs (Hard Disk Drives), while SSDs (Solid-State Drives) have mainly been used for mission-critical applications that demand high-speed storage but tend to be costly. In recent years, the capacity of HDDs has increased, but their random I/O (Input / Output) performance has not kept pace. For applications such as web commerce, cloud, and virtualization that require both high capacity and performance, HDDs, though capacious, simply are not fast enough. Meanwhile, SSDs have increased in capacity and declined in cost, making them more attractive for caching in SAN storage networks.
Smart Response Technology (also known as SSD cache technology) leverages the strengths of both HDDs and SSDs to cost-effectively meet the capacity and performance requirements of enterprise applications. Data is primarily stored on HDDs while SSDs serve as an extended memory cache for many I/O operations. One of the major benefits of using SSD cache is improved application performance, especially for workloads with frequent I/O activity. Frequently accessed read data of an application is copied to the SSD cache; write data is stored in the SSD cache temporarily and then flushed to HDDs in bulk. So the application receives an immediate performance boost. QSAN SSD cache enables applications to deliver consistent performance by absorbing bursts of read/write loads at SSD speeds.
Another important benefit is improved TCO (Total Cost of Ownership) of the system. SSD cache copies hot, or frequently accessed, data to SSDs in chunks. Because the SSD cache absorbs many if not most of the IOPS, the user can fill the remainder of their storage needs with low-cost, high-capacity HDDs. This ratio of a small amount of SSD paired with a lot of HDD offers the best performance at the lowest cost with optimal power efficiency.
Generally, SSD read cache is particularly effective when:
Reads are far more common than writes in the production environment, common in live database or web service applications.
The inferior speeds of HDD reads cause performance bottlenecks.
The size of repeatedly accessed data is smaller than the capacity of the SSD cache.
Figure 10-1 SSD Read and Write Cache
SSD read-write cache is particularly effective when:
Reads and writes mix in the production environment, common in file service applications.
The inferior speeds of HDD reads and writes cause performance bottlenecks.
As in the SSD read cache case, the size of repeatedly accessed data is smaller than the capacity of the SSD cache.
You are willing to take a small risk to increase write performance, because writes are buffered in the SSD cache pool. Of course, the write data in the SSD cache can also serve subsequent reads.
10.2. Theory of Operation
SSD cache allows an SSD to function as a read cache or write buffer for an HDD volume. In SSD read cache, the SSD is a secondary cache that improves performance by keeping frequently accessed data on SSDs, where it is read far more quickly than from the HDD volume. When reads or writes are performed, the data from the HDDs is copied into the SSD cache. Because the data is only duplicated in the SSD cache pool, it does not matter if the read cache pool becomes corrupted; the original data on the HDD volume is intact.
In SSD write cache, SSDs act as a write buffer that improves performance by storing write data temporarily in SSDs, where it is written far more quickly than to the HDD volume. The write data is then flushed to the HDD volume at the appropriate time. There is a risk of losing data if the SSD cache pool becomes corrupted while write data that has not yet been written back to the HDD volume is stored in the SSD cache. So the read-write cache pool needs data protection for the write data.
CAUTION:
Using SSD read-write cache carries a risk of losing data if the SSD cache pool becomes corrupted. Users have to monitor the health of the SSD cache pool carefully.
10.2.1. System Memory and SSD Cache Capacity
The SSD cache function needs system memory for its index. The usable capacity of the SSD cache is proportional to the size of the controller's system memory. The following table shows the relationship between system memory and SSD cache capacity.
Table 10-1 The Relationship between System Memory and SSD Cache Capacity
System Memory per Controller   Maximum SSD Cache Capacity per System
4GB                            X (Not Support)
8GB ~ 15GB                     2TB
16GB ~ 31GB                    4TB
32GB ~ 63GB                    8TB
64GB ~ 127GB                   16TB
128GB                          32TB
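As an illustration only, the table above can be treated as a simple lookup. The following Python sketch (the function name and structure are ours, not part of SANOS) returns the maximum SSD cache capacity for a given memory size:

    def max_ssd_cache_tb(memory_gb_per_controller):
        # (minimum system memory per controller in GB, max SSD cache per system in TB)
        tiers = [(128, 32), (64, 16), (32, 8), (16, 4), (8, 2)]
        for min_memory_gb, cache_tb in tiers:
            if memory_gb_per_controller >= min_memory_gb:
                return cache_tb
        return 0  # SSD cache is not supported below 8GB per controller

    print(max_ssd_cache_tb(16))   # 4 (TB)
    print(max_ssd_cache_tb(4))    # 0 (not supported)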
CAUTION:
The SSD cache function is not supported when the system memory is under 8GB per controller.
10.2.2. SSD Cache Pool Architecture
An SSD cache pool is grouped to provide capacity for SSD cache usage by a dedicated storage pool. The maximum SSD cache pool quantity per system (either dual controller or single controller) is 4. The following is the storage architecture of the SSD cache pool.
Figure 10-2 Storage Architecture of SSD Cache Pool
Take SSD read cache as an example: one or more SSDs configured with NRAID+ (described in the next section) are grouped into "SSD Cache Pool 1" and assigned to "Pool 1" for use. The maximum SSD quantity in an SSD cache pool is 8. While the SSD read cache is operating, the capacity of the SSD cache can be increased by adding an SSD or decreased by removing an SSD.
For an example of SSD read-write cache, two SSDs configured with NRAID 1+ (also described in the next section) are grouped into an SSD group. One or more SSD groups combine into "SSD Cache Pool 2", which is assigned to "Pool 2" for use. While the SSD read-write cache is operating, the capacity of the SSD cache can be increased only by adding an SSD group of two SSDs at a time. SSDs can be set as dedicated hot spares for the SSD read-write cache pool. The maximum dedicated spare SSD quantity in an SSD cache pool is 4.
TIP:
Note that the capacity allocated to the SSD cache pool is not counted in the regular data storage.
Volumes in a pool can be selected to enable the SSD cache function. The following shows the relationship between the SSD cache pool and the storage pool.
Figure 10-3 The Relationship between SSD Cache Pool and Storage Pool
Volumes with SSD cache enabled will consume the capacity of the SSD cache pool as their hot data is cached. Users have to consider which volumes to enable according to the available SSD cache resources. While the SSD cache is operating, you can enable or disable volumes. The maximum quantity of volumes shared in an SSD cache pool is 32. The following table summarizes the SSD cache parameters.
Table 10-2 SSD Cache Parameters
Item                                                                                        Value
Maximum SSD cache pool quantity per system (either dual controller or single controller)   4
Maximum SSD quantity in an SSD cache pool                                                   8
Maximum capacity of an SSD cache pool                                                       32TB
Maximum quantity of volume shared in an SSD cache pool                                      32
Maximum dedicated spare SSD quantity in an SSD cache pool                                   4
10.2.3. RAID Level of SSD Cache Pool
SSD read cache with NRAID+
Generally, SSD read cache uses NRAID (Non-RAID) or RAID 0 without data protection to create the SSD cache storage space. Our SSD read cache technology uses NRAID+, which is parallel NRAID without striping.
Figure 10-4 SSD Read Cache with NRAID+
NRAID is a method of combining the free space on multiple disk drives to create spanned capacity. An NRAID is generally a spanned volume only, as it often contains mismatched types and sizes of disk drives. RAID 0 consists of striping, without mirroring or parity. Adding a disk drive to RAID 0 requires re-striping the data. For SSD cache, performance would be terrible if the system had to serve the SSD cache and migrate RAID 0 at the same time, let alone remove a disk drive.
Compared to NRAID or RAID 0, NRAID+ distributes independent blocks across the different SSDs. This NRAID+ technology combines the advantages of NRAID with better random I/O than NRAID. It also makes it easy to add or remove SSDs from the SSD cache pool to increase or decrease the capacity.
The SSD read cache only duplicates data, so if the SSD cache pool becomes corrupted, the original data is safe; the SSD read cache simply stops.
SSD read-write cache with NRAID 1+
SSD read-write cache needs data protection, so it usually uses RAID 1 or RAID 10 to create the SSD cache storage space. Our SSD read-write cache technology uses NRAID 1+, which is parallel NRAID with mirroring.
Figure 10-5 SSD Read-write Cache with NRAID 1+
RAID 10 creates a striped set from a series of mirrored drives. Likewise, adding a pair of disk drives to RAID 10 requires re-striping the data, so it is hard to add SSDs to the SSD cache pool.
Compared to RAID 10, NRAID 1+ distributes independent blocks across the different SSDs and mirrors them. This NRAID 1+ technology provides data protection and better random I/O than RAID 10. It also makes it easy to add SSDs to the SSD cache pool to increase the capacity. In this case, SSDs are not allowed to be removed from the SSD cache pool because of the risk of losing write cache data.
If one SSD fails in the SSD cache pool, the status of the SSD cache pool changes to degraded. The pool stops write caching and starts flushing write cache data to the HDDs. At this time, the read cache data still works, but no new read cache data enters the SSD cache pool. SSDs can be set as dedicated hot spares for the SSD read-write cache pool to prevent SSD cache pool failure. After an SSD is inserted as a hot spare, the SSD read-write cache pool is rebuilt until complete, and it then reverts to SSD read-write cache service.
10.2.4. Read/Write Cache Cases
The following describes the read/write cache cases. In SSD read cache, the read and write processes break down into the following cases.
Read Data with Cache Miss
Read Data with Cache Hit
Write Data in SSD Read Cache
In SSD read-write cache, the read data processes are the same as above, but the write data process is different.
Read Data with Cache Miss
Read Data with Cache Hit
Write Data in SSD Read-write Cache
Each case will be described below.
Read Data with Cache Miss
The following figure shows how the controller handles a host read request when some or all of the data are not in the SSD cache.
Figure 10-6 Read Data with Cache Miss
These steps are:
1. A host requests to read data. The system will check if the requested data is in memory cache or SSD cache. If not, it is called cache miss.
2. Data is read from the HDD volume because of cache miss.
3. The requested data is returned to the host. And the system will check whether the requested data is hot data.
4. If it is, the SSD cache is populated.
INFORMATION:
The actions that read data from the HDD and then write to the SSD are called populating the cache. For more information, please refer to the
chapter 10.2.5, Populating the Cache in the next section.
Read Data with Cache Hit
The following figure shows how the controller handles a host read request when the data is in the SSD cache.
Figure 10-7 Read Data with Cache Hit
These steps are:
1. A host requests a read data. The system finds that the data is in SSD cache, so it is called cache hit.
2. Data is read from the SSD cache.
3. The requested data is returned to the host.
4. If there is an SSD cache error, data is read from the HDD volume.
Write Data in SSD Read Cache
The following figure shows how the controller handles a host write request in SSD read cache. The write data can also become frequently accessed data and be populated into the SSD cache for subsequent reads.
Figure 10-8 Write Data
These steps are:
1. A host requests to write data.
2. Data is written to the HDD volume.
3. The status is returned to the host.
4. The SSD cache is populated if the write threshold is reached.
Write Data in Read-write Cache
The following figure shows how the controller handles a host write request in SSD read-write cache. The write data stays in the SSD cache for a while and is flushed to the HDD volume at the appropriate time.
Figure 10-9 Write Data
These steps are:
1. A host requests to write data.
2. Data is written to the SSD cache.
3. The status is returned to the host.
4. Data will be flushed to the HDD volume at the appropriate time.
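To tie the four cases above together, here is a deliberately simplified Python model (not QSAN's implementation; the dictionaries and names are invented for illustration, and the populate threshold check is omitted) of how reads and writes flow through an SSD read-write cache in front of an HDD volume:

    ssd_cache = {}   # block -> data currently held on the SSD cache
    hdd_volume = {}  # block -> data stored on the HDD volume
    dirty = set()    # blocks buffered on SSD but not yet flushed to the HDD volume

    def read(block):
        if block in ssd_cache:             # cache hit: serve from SSD
            return ssd_cache[block]
        data = hdd_volume.get(block)       # cache miss: serve from the HDD volume
        ssd_cache[block] = data            # populate the cache (threshold check omitted)
        return data

    def write(block, data):
        ssd_cache[block] = data            # read-write cache: buffer the write on SSD
        dirty.add(block)                   # remember that it still has to reach the HDD

    def flush():
        for block in list(dirty):          # flush write data at the appropriate time
            hdd_volume[block] = ssd_cache[block]
            dirty.discard(block)

    write(7, "hello")
    flush()
    print(read(7))                         # hello (served from the SSD cache)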
Flush Write Data to HDD Volume
In SSD read-write cache, the write data will be flushed to the HDD volume in the following situations.
Flush data to the HDD volumes before the SSD cache pool is deleted.
Flush data to the HDD volumes before the related volume is disabled.
Flush data when the system is idle.
CAUTION:
It takes time to flush data from the SSD cache to the HDD volumes. Please do not remove SSDs before flushing is complete, or you risk losing data.
TIP:
Data is not flushed to the HDD volumes when the system shuts down or reboots.
10.2.5. Populating the Cache
Reading data from the HDD and then writing it to the SSD is called populating the cache. Typically, this is a background operation that immediately follows a host read or write operation. Because the goal of the cache is to store frequently accessed data, not every I/O operation should trigger a cache population, only those that pass a certain threshold, implemented as a counter. There is both a populate-on-read threshold and a populate-on-write threshold.
Populate-on-read Threshold
When the same data block is read more times than the threshold, it is considered hot data and is populated to the SSD cache. The threshold must be greater than or equal to 1 for both SSD read cache and SSD read-write cache; a value of 0 is not allowed because the read cache would never be populated. The maximum threshold is 4. With values larger than 4, frequently accessed data would rarely make it into the SSD cache, so there would be no obvious effect.
Populate-on-write Threshold
When the same data block is written more times than the threshold, it is considered hot data and is populated to the SSD cache. In SSD read cache, the threshold must be greater than or equal to 0; if it is set to 0, no populate-on-write action is performed. In SSD read-write cache, the value must be greater than or equal to 1; a value of 0 is not allowed because the write cache would never be used. As above, the maximum threshold is 4, and values larger than 4 have no obvious effect.
Operation Process
Each cache block on an HDD volume has an associated read and write counter. When a host requests to read data located in that cache block, the read count is increased. If the data is not found in the cache and the read count is greater than or equal to the populate-on-read threshold, then a cache-populate operation is performed concurrently with the host read operation. If a cache hit occurs, the data is immediately returned from the SSD cache and a populate operation is not performed. If the read count is smaller than the threshold, a populate operation is not performed.
Write cases follow the same scenario as reads.
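The following Python sketch (an illustrative model only; the data structures and names are not SANOS internals) shows how the populate-on-read and populate-on-write counters described above could drive the populate decision:

    from collections import defaultdict

    POPULATE_ON_READ = 2    # allowed range 1-4 (read cache and read-write cache)
    POPULATE_ON_WRITE = 1   # allowed range 0-4 for read cache, 1-4 for read-write cache

    read_count = defaultdict(int)
    write_count = defaultdict(int)
    cached_blocks = set()

    def on_host_read(block):
        read_count[block] += 1
        if block in cached_blocks:
            return "cache hit: return data from SSD, no populate"
        if read_count[block] >= POPULATE_ON_READ:
            cached_blocks.add(block)       # populate concurrently with the HDD read
            return "cache miss: read from HDD and populate the SSD cache"
        return "cache miss: read from HDD only"

    def on_host_write(block):
        write_count[block] += 1
        if POPULATE_ON_WRITE > 0 and write_count[block] >= POPULATE_ON_WRITE:
            cached_blocks.add(block)       # hot write data is populated as well
            return "write and populate the SSD cache"
        return "write only"

    print(on_host_read("blk-9"))   # cache miss: read from HDD only
    print(on_host_read("blk-9"))   # cache miss: read from HDD and populate the SSD cache
    print(on_host_read("blk-9"))   # cache hit: return data from SSD, no populate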
10.2.6. SSD Cache Tuning
The SSD cache can be tuned to maximize its efficiency based on application usage. Cache block size, populate-on-read threshold, and populate-on-write threshold are the main parameters.
Cache Block Size
A large cache block suits applications where frequently accessed data is close to each other, known as a high locality of reference. A large cache block will also fill up the SSD cache quickly; this is known as the warm-up time. After the cache is warmed up, performance is quite good for applications with a high locality of reference, such as file system or web service usage, where the frequently accessed data is concentrated in files that usually have large block sizes. However, large cache blocks also generate larger I/O overhead, increasing response time, especially for cache misses.
A smaller cache block size suits applications with data that is less localized, meaning the data is accessed more randomly, such as database usage. The SSD cache will fill up slower, but with more cache blocks, there is greater chance of a cache hit, especially for data with less locality of reference. With a smaller cache block size, cache usage is usually less than with a larger cache block size, but overhead is less, so the penalty for cache misses is less severe.
Population Threshold
The population threshold is the number of accesses at which a cache block is copied to the SSD cache. A higher number ensures that the cache only stores frequently accessed data, so there will not be much cache turnover; however, it also means the cache will take longer to warm up and become fully effective. A lower number means the cache warms up quickly but may cause excessive cache populations. A populate-on-read threshold of 2 is sufficient for many applications. Populate-on-write is useful when data that is written is often read soon after, which is often the case in file systems. Other applications, such as database software, do not have this tendency, so populate-on-write may sometimes even be disabled.
Table 10-3 I/O Type Table for SSD Read Cache
I/O Type       Block Size (Sectors)   Populate-on-Read Threshold   Populate-on-Write Threshold
Database       1MB (2,048)            2                            0
File System    2MB (4,096)            2                            2
Web Service    4MB (8,192)            2                            0
Customization  1MB/2MB/4MB            ≥ 1 and ≤ 4                  ≥ 0 and ≤ 4
Table 10-4 I/O Type Table for SSD Read-write Cache
I/O Type       Block Size (Sectors)   Populate-on-Read Threshold   Populate-on-Write Threshold
Database       1MB (2,048)            2                            1
File System    2MB (4,096)            2                            1
Web Service    4MB (8,192)            2                            1
Customization  1MB/2MB/4MB            ≥ 1 and ≤ 4                  ≥ 1 and ≤ 4
As you can see, there are tradeoffs for increasing or decreasing each parameter.
Understanding the data locality of the application is essential and it can be useful to do some field testing to see what works best.
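For reference, the presets from Tables 10-3 and 10-4 can be expressed as a simple lookup. The following Python sketch (illustrative only; the function and dictionary names are ours) converts an I/O type into its block size and thresholds:

    # Presets from Table 10-3 (read cache) and Table 10-4 (read-write cache):
    # (block size in 512-byte sectors, populate-on-read, populate-on-write).
    READ_CACHE_PRESETS = {
        "Database":    (2048, 2, 0),
        "File System": (4096, 2, 2),
        "Web Service": (8192, 2, 0),
    }
    READ_WRITE_CACHE_PRESETS = {
        "Database":    (2048, 2, 1),
        "File System": (4096, 2, 1),
        "Web Service": (8192, 2, 1),
    }

    def preset(io_type, read_write=False):
        table = READ_WRITE_CACHE_PRESETS if read_write else READ_CACHE_PRESETS
        sectors, pop_read, pop_write = table[io_type]
        return {"block_size_mb": sectors * 512 // (1024 * 1024),
                "populate_on_read": pop_read,
                "populate_on_write": pop_write}

    print(preset("File System"))                # 2MB block, read threshold 2, write threshold 2
    print(preset("Database", read_write=True))  # 1MB block, read threshold 2, write threshold 1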
10.3. Configure SSD Cache
The SSD Cache provides SSD Cache Pools and SSD Cache Statistics function tabs to configure and monitor SSD cache. This section will describe the operations of configuring
SSD cache.
Figure 10-10 SSD Cache Function Submenu
10.3.1. Enable SSD Cache License
The SSD cache function is optional. Before using it, you have to enable the SSD cache license. Select the Update function tab in the Maintenance function submenu, download the Request License file, and send it to your local sales representative to obtain a License Key. After getting the license key, click the Choose File button to select it, and then click the Apply button to enable the license. When the license is enabled, please reboot the system. Each license key is unique and dedicated to a specific system. If the license has already been enabled, this option will be invisible.
Figure 10-11 Enable SSD Cache License
10.3.2. Create an SSD Cache Pool
Here is an example of creating an SSD cache pool with 2 SSDs configured in SSD read cache.
1. Select the SSD Cache function submenu, click the Create SSD Cache Pool button. It will scan available SSDs first.
Figure 10-12 Create an SSD Cache Pool Step 1
2. Enter an SSD Cache Pool Name for the pool. Maximum length of the pool name is 16 characters. Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
3. Select the Cache Type as Read Cache (NRAID+).
4. Select an I/O Type from the drop-down list according to your application.
5. Please select SSDs for the SSD cache pool. The maximum quantity of disks is 8. Select an Enclosure from the drop-down list to select SSDs from expansion enclosures.
6. Click the Next button to continue.
Figure 10-13 Create an SSD Cache Pool Step 2
7. Select a Pool Name from the drop-down list which lists available pools to be assigned to.
8. Within the pool, check Volumes to enable SSD cache, uncheck Volumes to disable.
9. Click the Next button to continue.
Figure 10-14 Create an SSD Cache Pool Step 3
10. After confirmation at summary page, click the Finish button to create a pool.
Figure 10-15 An SSD Cache Pool is Created
11. An SSD cache pool has been created.
12. Follow steps 1 to 11 to create another SSD cache pool with Read-write Cache (NRAID 1+) and set the I/O Type to File System.
Figure 10-16 Another SSD Cache Pool is Created
TIP:
The Create SSD Cache Pool button will be grayed out under the following conditions.
There are no available SSDs from which an SSD cache pool can be created.
There are no available pools to which the SSD cache pool can be assigned. To be available, a pool's status must be Online and its health must be Good.
10.3.3. List SSD Cache Pools
Select one of the SSD cache pools to display the related SSDs and enabled volumes. The pool properties can be configured by clicking the functions button ▼ to the left side of the specific pool.
Figure 10-17 List SSD Cache Pools
This table shows the column descriptions.
Figure 10-18 SSD Cache Pool Column Descriptions
Column Name          Description
SSD Cache Pool Name  The SSD cache pool name.
Status               The status of the pool:
                     Online: The SSD cache pool is online.
                     Offline: The SSD cache pool is offline.
                     Rebuilding: The SSD cache pool is being rebuilt.
                     Flushing: The SSD cache pool is being flushed.
Health               The health of the pool:
                     Good: The SSD cache pool is good.
                     Failed: The SSD cache pool has failed.
                     Degraded: The SSD cache pool is not healthy and not complete. The reason could be missing or failed SSDs.
Total                Total capacity of the SSD cache pool.
Cache Type           The SSD cache type: Disabled or Enabled.
RAID                 The RAID level of the SSD cache pool:
                     NRAID+: SSD read cache.
                     NRAID 1+: SSD read-write cache.
Pool Name            The pool to which the SSD cache pool is assigned.
I/O Type             The I/O type of the SSD cache pool:
                     Database
                     File System
                     Web Service
                     Customization
Figure 10-19 SSDs Column Descriptions
Column Name   Description
No.           The number of the SSD or SSD group:
              SSD Read Cache: The number of the SSD.
              SSD Read-write Cache: The number of the SSD group.
Enclosure ID  The enclosure ID.
Slot          The position of the SSD.
Status        The status of the SSD:
              Online: The SSD is online.
              Missing: The SSD is missing in the pool.
              Rebuilding: The SSD is being rebuilt.
              Flushing: The SSD is being flushed.
Health        The health of the SSD:
              Good: The SSD is good.
              Failed: The SSD has failed.
              Error Alert: S.M.A.R.T. error alerts.
              Read Errors: The SSD has unrecoverable read errors.
Capacity      The capacity of the SSD.
Disk Type     The type of the SSD:
              [ SAS SSD | SATA SSD ]
              [ 12.0Gb/s | 6.0Gb/s | 3.0Gb/s | 1.5Gb/s ]
Manufacturer  The manufacturer of the SSD.
Model         The model name of the SSD.
Figure 10-20 Enabled Volume Column Descriptions
Column Name       Description
Volume Name       The name of the volume on which SSD cache is enabled.
Status            The status of the volume:
                  Online: The volume is online.
                  Offline: The volume is offline.
                  Rebuilding: The volume is being rebuilt.
                  Flushing: The volume is being flushed.
Health            The health of the volume:
                  Optimal: The volume is working well and there is no failed disk in the RG.
                  Failed: The pool of the volume has more failed disks than its RAID level can recover from, resulting in data loss.
                  Degraded: At least one disk from the pool of the volume has failed or been unplugged.
                  Partially optimal: The volume has experienced recoverable read errors. After passing the parity check, the health will become Optimal.
Capacity          The capacity of the volume.
Total Cache Used  The capacity of total cache used.
Write Cache Used  The capacity of write cache used.
10.3.4. Operations on SSD Cache Pools
The options available in this tab:
Add or Remove SSDs
For an SSD read cache, click ▼ -> Add / Remove SSDs to add SSDs to or remove SSDs from the SSD cache pool. Check SSDs to add them to the SSD cache pool, and uncheck SSDs to remove them from the SSD cache pool.
Figure 10-21 Add / Remove SSDs from the SSD Cache Pool
For an SSD read-write cache, click ▼ -> Add SSDs to add SSDs into the SSD cache pool.
Please select two SSDs at a time to add into the SSD cache pool.
Figure 10-22 Add SSDs into the SSD Cache Pool
Delete an SSD Cache Pool
Click ▼ -> Delete to delete the SSD cache pool. When deleting an SSD read-write cache pool, it takes time to flush data from the SSDs to the HDDs. The SSD cache pool is deleted only after flushing completes; otherwise the data could become corrupted.
More Information of the SSD Cache Pool
Click ▼ -> More Information to show detailed information about the SSD cache pool.
10.4. Monitor SSD Cache Statistics
This section describes how to monitor SSD cache statistics.
Figure 10-23 SSD Cache Function Submenu
Select the SSD Cache Statistics function tab in the SSD Cache function submenu to monitor the SSD cache statistics. Here is an example of monitoring the volume of the SSD read-write cache pool.
1. Select an SSD Cache Pool Name and one of its enabled volumes which you want to monitor.
Figure 10-24 SSD Cache Statistics
2. The SSD cache statistics include Cache Used, Cache Hits, and Cache Hit Ratio. Each item has a current value, an average 1-day value, an average 1-week value, and an average 1-month value. The diagram displays the current value, refreshed every 30 seconds, over a total of 4 hours.
Table 10-5 SSD Cache Statistics Descriptions
Column Name      Description
Cache Used       How much capacity of the SSD cache pool is used by the volume. The unit is MB, GB, or TB.
Cache Hits       How many cache hits have occurred for the volume. The unit is a count of hits.
Cache Hit Ratio  The cache hit ratio for the volume. The unit is a percentage.
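As a worked example of the Cache Hit Ratio column (illustrative arithmetic only; the function name is ours), the ratio is simply the share of read requests served from the SSD cache:

    def cache_hit_ratio(cache_hits, total_reads):
        # Percentage of read requests that were served from the SSD cache.
        return 0.0 if total_reads == 0 else 100.0 * cache_hits / total_reads

    print(cache_hit_ratio(cache_hits=7500, total_reads=10000))  # 75.0 (%)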
10.5. SSD Cache Notices
The following are some notices about SSD cache.
To support SSD cache 2.0, system resources will be redistributed, so executing a COLD REBOOT is necessary after upgrading from firmware 1.0.x to 1.1.0.
The SSD cache is optional; the SSD cache license can be extended to SSD cache 2.0 with read and write cache.
The data structure of SSD cache 2.0 is redesigned from the previous SSD cache. After upgrading to firmware 1.1.0 with SSD cache 2.0, you have to configure the SSD cache again if you had already enabled SSD cache in firmware 1.0.x.
CAUTION:
Execute a COLD REBOOT after upgrading from firmware 1.0.x to 1.1.0.
Note that the system CANNOT be downgraded to firmware 1.0.x after upgrading to 1.1.0.
11. Auto Tiering
This chapter describes an overview and operations of auto tiering.
11.1. Overview
From the perspective of storage features, SSD performance is high, but so is the cost per GB. Relatively speaking, the cost of a traditional hard drive is low, but its performance is also relatively poor. If we follow the 80/20 rule to configure storage systems, an all-SSD configuration is unreasonable for all but the most intensive applications. In fact, SSD is needed for only a small portion of the data in most typical applications, whether critical or not, so dedicating SSD resources to general storage needs is hugely cost-prohibitive. And although traditional hard disk performance is enough for general applications whose I/O requirements are not high, the traditional all-hard-drive configuration has also gradually become inadequate.
On the other hand, data itself has a lifecycle, and over the course of its lifecycle it experiences different levels of activity. In common usage, data is accessed most when it is created, and as it ages it is accessed less and less often.
The Solution
Therefore, to balance performance and cost, adopting a hybrid storage architecture with a mixture of SSDs and traditional HDDs seems to be the most reasonable approach for modern IT environments. Generally, SSD-based storage amounting to 10 to 15% of the total storage capacity should be enough to fulfill the requirements of critical high-I/O applications.
An automated tiering pool is a simple and elegant solution for dynamically matching storage requirements with changes in the frequency of data access.
Tier Categories
As the name suggests, auto tiering must have at least two tiers. An automated tiering pool segregates disk drives into three categories for dual controller systems and four for single controller systems:
Tier 1: SSD drives for extreme performance tier
Tier 2: SAS drives (15K or 10K SAS HDD) for performance tier
Tier 3: Nearline SAS drives (7.2K SAS HDD) for capacity tier
Figure 11-1 Auto Tiering Pool
11.1.1. SSD Cache vs. Auto Tiering
A key difference between tiering and caching is that tiering moves data to the SSD instead of simply caching it. Tiering can move data both from slower storage to faster storage and vice versa, while SSD cache is essentially a one-way transaction. When the cache is done with the data it was accelerating, it simply invalidates it instead of copying it back to the HDD. The important difference between moves and copies is that a cache does not need the redundancy that tiering does. Tiering stores the only copy of the data for a potentially considerable period of time, so it needs full data redundancy such as RAID or mirroring.
Figure 11-2 SSD Cache vs. Auto Tiering
Total storage capacity in auto tiering is the sum of all individual tier capacities, whereas in caching, the cache capacity does not add to the overall slower storage capacity. This is one of the key differences. In addition, SSD cache takes effect more rapidly than auto tiering, because auto tiering relocates data only periodically. The SSD cache warm-up timeframe is usually minutes to hours, whereas tiering warm-up is usually measured in days.
Table 11-1 SSD Cache vs. Auto Tiering
                     SSD Cache                Auto Tiering
Total Capacity       HDD                      HDD + SSD
When SSD is Damaged  Pool Works Fine          Pool Fails
Performance          Effective in Short Term  Effective in Long Term
11.2. Theory of Operation
Auto tiering is the automated progression or demotion of data across different tiers (types) of storage devices and media. The movement of data takes place in an automated way with the help of software and is assigned to the ideal storage media according to performance and capacity requirements. It also includes the ability to define rules and policies that dictate if and when data can be moved between the tiers, and in many cases provides the ability to pin data to tiers permanently or for specific periods of time.
11.2.1. Auto Tiering Architecture
A newly created auto tiering pool is based on thin provisioning technology. Each tier works based on one or more disk groups. Every disk group must have the same disk quantity when creating a tiering pool, as the base unit of a disk group. The maximum quantity of disks in a disk group is 8. For example, if your system has 12 bays, you can insert 4x SSDs, 4x SAS HDDs, and 4x NL-SAS HDDs to build an auto tiering pool configured in RAID 5. If you have 24 bays, you can have a pool with 8x SSDs, 8x SAS HDDs, and 8x NL-SAS HDDs. The following shows the storage architecture of an auto tiering pool.
Figure 11-3 Storage Architecture of Auto Tiering Pool
To increase the capacity of an auto tiering pool, a disk group containing one tier of SSDs, SAS HDDs, or NL-SAS HDDs can be added to the pool at any time. As with thin provisioning, every newly added tier (disk group) must have the same quantity of disk drives. An auto tiering pool can have up to 32 disk groups, with each disk group containing up to 8 disk drives. For tiering, the disk group quantity of a lower tier must be larger than or equal to that of the higher tiers. The maximum capacity of each disk group is 64TB, and the maximum capacity of an auto tiering pool is 256TB.
Table 11-2 Auto Tiering Pool Parameters
Item                                                  Value
Maximum Tiers                                         3
Maximum Disk Group Quantity in an Auto Tiering Pool   32
Maximum Disk Drive Quantity in a Disk Group           8
Maximum Disk Drive Quantity in an Auto Tiering Pool   256 (= 32 x 8)
Maximum Capacity of a Disk Group                      64TB
Maximum Auto Tiering Pool Capacity per Pool           256TB
Maximum Auto Tiering Pool Capacity per System         1,024TB
By design, the auto tiering feature allows selecting policies that define how data are moved between different tiers, and in many cases provides the ability to pin data to tiers permanently or for specific periods of time.
Auto tiering storage is the assignment of different categories of data to different disk types.
It operates based on relocating the most active data up to the highest available tier and the least active data down to the lowest tier. Auto tiering works based on an allocation unit
(granularity) of 1GB and relocates data by moving the entire unit to the appropriate tier, depending on the tiering policy selected for that particular volume.
In order to ensure sufficient space in the higher tiers, 10% of the space is reserved in each higher tier to prepare for data allocation by those tiering policies that allocate initial space in the highest available tiers. To reclaim this 10% headroom, the least active units within each tier are moved to lower tiers. The whole auto tiering mechanism consists of three steps: collecting statistics from access counts, ranking the hotness of data from those statistics, and then relocating data according to the ranking.
11.2.2. Hotness Analysis
The volume space is divided into units of equal size, also called sub LUNs, for which hotness is collected and analyzed every hour. The analysis consists of the following two phases:
Statistics Collection
The activity level of a sub LUN is determined by counting the quantity of read and write accesses on the sub LUN. The logical volume manager maintains a cumulative I/O count and weights each I/O by how recently it arrived. A newly arrived I/O is given full weight. After approximately 24 hours, the weight of this I/O is roughly cut in half and continues to decrease. The weight reduction is processed every hour by a precise decay algorithm. This statistics collection occurs continuously in the background for an auto tiering pool.
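A weight that is roughly halved after 24 hours corresponds to an hourly decay factor of about 0.5^(1/24). The following Python sketch (an illustrative model; the exact SANOS algorithm is not documented here) shows such an exponentially decayed hotness counter for a sub LUN:

    HOURLY_DECAY = 0.5 ** (1.0 / 24.0)   # about 0.9715 per hour

    class SubLunHotness:
        def __init__(self):
            self.weight = 0.0

        def record_io(self, count=1):
            self.weight += count          # every new I/O is given full weight

        def age_one_hour(self):
            self.weight *= HOURLY_DECAY   # older I/Os gradually lose influence

    sub_lun = SubLunHotness()
    sub_lun.record_io(100)
    for _ in range(24):
        sub_lun.age_one_hour()
    print(round(sub_lun.weight, 1))       # about 50.0 after one day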
Ranking
This analysis produces a rank ordering of each sub LUN within the pool. Note that the policies of volumes would affect how sub LUNs are ranked.
After the analysis, the system generates the following information for each tier:
The amount of data to be moved up
The amount of data to be moved down
The amount of data to be moved into a tier.
TIP:
The hotness analysis process which includes statistics collection and ranking may take minutes to complete.
11.2.3. Relocation
According to the hotness analysis, relocation is processed during the user-defined relocation window, which is the number of minutes given to the relocation process. When the window closes, the relocation process stops relocating data. The other parameter is the relocation rate, which controls the speed of the relocation process. Valid relocation rate values are Fast, Medium, and Slow.
Auto tiering promotes sub LUNs according to the candidate list that it created in the analysis stage. During relocation, it prioritizes relocating sub LUNs to higher tiers. At the same time, sub LUNs are relocated out of higher tiers only if the space they occupy is required for higher-priority data. Using this mechanism, auto tiering makes sure that the higher performing drives are always used.
During I/O, as data is written to a pool, auto tiering attempts to move it to the higher tiers if space is available and the tiering policy allows it. As described before, the relocation process keeps 10% free space in all tiers. This space is reserved for new allocations of higher-priority sub LUNs before the next relocation. Lower tiers are used for capacity when needed. The entire relocation process runs automatically based on the user-defined relocation schedule, or manually if the user triggers it. Figure 11-4 illustrates how auto tiering can improve sub LUN placement in a pool.
Figure 11-4 Auto Tiering Relocation
11.2.4. Tiering Policies
For the best performance in various environments, auto tiering provides a fully automated feature that implements a set of tiering policies. Tiering policies determine how new allocations and ongoing relocations should be handled within a volume.
Auto tiering uses an algorithm to make data relocation decisions based on the activity level of each unit. It ranks the order of data relocation across all volumes within each separate pool. The system uses this information in combination with the per-volume tiering policy to create a candidate list for data movement. The following volume policies are available:
Auto Tiering
This policy moves a small percentage of the “hot” data to higher tiers while keeping the rest of the data in the lower tiers. It automatically relocates data to the most appropriate tier based on the activity level of each unit. Sub LUNs are relocated to the highest performing disk drives available according to their hotness. Although this setting relocates data based on the performance statistics of the volume, volumes set to “Highest Available Tier” take precedence. Initial space is allocated in the tier which is healthier and has more free capacity than the other tiers, and the data is then relocated according to its hotness. This is the recommended policy and it is the default policy for each newly created volume.
Start Highest then Auto Tiering
This policy takes advantage of both the “Highest Available Tier” and “Auto Tiering” policies. “Start Highest then Auto Tiering” sets the preferred tier for initial data allocation to the highest performing disks with available space, and then relocates the volume’s data based on the performance statistics and the auto tiering algorithm. With this tiering policy, less active data is moved to lower tiers, making room for more active data in the higher tiers. Initial space is allocated in the highest available tier first, and the data is then relocated according to its hotness.
Highest Available Tier
Use this policy when quick response times are a priority. It is effective for volumes which require high levels of performance whenever they are accessed. The policy starts with the “hottest” sub LUNs first and places them in the highest available tier until the tier’s capacity or performance capability limit is hit, then places the remaining sub LUNs into the next highest tier.
Initial space is allocated in the highest available tier. Auto tiering prioritizes sub LUNs with “Highest Available Tier” selected above all other settings.
Lowest Tier
Use this policy when cost effectiveness is the highest priority. With this policy, data is initially placed on the lowest available tier with capacity. Select this policy for volumes that are not performance or response-time sensitive. Regardless of their activity level, all sub LUNs of these volumes will remain on the lowest storage tier available in their pool.
Data of volumes with the “Lowest Tier” policy always resides in the lowest tier. Changing the policy of a volume with data in higher tiers to “Lowest Tier” causes all of its data in higher tiers to be relocated down to the lowest tier.
No Data Movement
If a volume is configured with this policy, no sub LUN provisioned to the volume is relocated across tiers. Data remains in its current position, but can still be relocated within the tier. The system still collects statistics on these sub LUNs after the tiering policy is changed. Initial space is allocated in the tier which is healthier and has more free capacity than the other tiers. No relocation is performed for a volume with the “No Data Movement” tiering policy.
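The sketch below summarizes how the five policies differ in where new data is initially allocated. The tier ordering and the "healthier tier with more free capacity" heuristic are simplified assumptions based on the descriptions above, not the actual SANOS code.

# Tiers ordered from highest performing to lowest (assumed ordering).
TIERS = ["SSD", "SAS", "NL-SAS"]

def initial_tier(policy, free_gb, healthy):
    """Pick the tier for a new 1 GB allocation under a given tiering policy.

    free_gb: dict tier -> free capacity in GB
    healthy: dict tier -> True if the tier has no failed disk group
    """
    if policy in ("Highest Available Tier", "Start Highest then Auto Tiering"):
        # Prefer the highest performing tier that still has space.
        for t in TIERS:
            if healthy[t] and free_gb[t] > 0:
                return t
    if policy == "Lowest Tier":
        for t in reversed(TIERS):
            if healthy[t] and free_gb[t] > 0:
                return t
    # "Auto Tiering" and "No Data Movement": healthiest tier with most free space.
    usable = [t for t in TIERS if healthy[t] and free_gb[t] > 0]
    return max(usable, key=lambda t: free_gb[t])

# Usage
free = {"SSD": 50, "SAS": 800, "NL-SAS": 3000}
ok = {t: True for t in TIERS}
print(initial_tier("Start Highest then Auto Tiering", free, ok))  # SSD
print(initial_tier("Lowest Tier", free, ok))                      # NL-SAS
print(initial_tier("Auto Tiering", free, ok))                     # NL-SAS (most free)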
11.3. Configure Auto Tiering Pools
This section will describe the operations of configuring auto tiering pool.
Figure 11-5 Pools Function Submenu
11.3.1. Enable Auto Tiering License
The auto tiering function is optional. Before using it, you have to enable the auto tiering license.
Select the Update function tab in the Maintenance function submenu, download the Request
License file and send it to your local sales representative to obtain a License Key. After getting the license key, click the Choose File button to select it, and then click the Apply button to enable the license. When the license is enabled, please reboot the system. Each license key is unique and dedicated to a specific system. If the license has already been enabled, this option will not be visible.
Figure 11-6 Enable Auto Tiering License
11.3.2. Create an Auto Tiering Pool
Here is an example of creating an auto tiering pool with 3 tiers, each tier having 3 disks configured in RAID 5. When creating an auto tiering pool, it must contain at least
2 tiers (disk groups), and the maximum quantity of disks in a tier (disk group) is 8.
1. Select the Pools function submenu, click the Create Pool button. It will scan available disks first.
TIP:
It may take 20 ~ 30 seconds to scan disks if your system has more than
200 disk drives. Please wait patiently.
Figure 11-7 Create an Auto Tiering Pool Step 1
2. Select the Pool Type as Auto Tiering (Thin Provisioning Enabled). This option is available when auto-tiering license is enabled.
3. Enter a Pool Name for the pool. The maximum length of the pool name is 16 characters.
Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
4. Select a Preferred Controller from the drop-down list. The backend I/O resources in this pool will be processed by the preferred controller which you specified. This option is available when dual controllers are installed.
5. Click the Next button to continue.
Figure 11-8 Create an Auto Tiering Pool Step 2
6. Please select disks for the tiering pool. Each tier consists of one or more disk groups.
When creating a tiering pool, every disk group must have the same disk quantity, which serves as the base unit of a disk group. The maximum quantity of disks in a disk group is 8. For example, if your system has 12 bays, you can insert 4x SSDs, 4x SAS HDDs, and 4x NL-SAS HDDs to build an auto tiering pool configured in RAID 5. If you have 24 bays, you can have a pool with 8x SSDs, 8x SAS HDDs, and 8x NL-SAS HDDs.
7. Click the Next button to continue.
Figure 11-9 Create an Auto Tiering Pool Step 3
8. Select a RAID Level from the drop-down list, which lists only the RAID levels available for the selected disks.
9. Click the Next button to continue.
Figure 11-10 Create an Auto Tiering Pool Step 4
10. Disk properties can also be configured optionally in this step:
。 Enable Disk Write Cache: Check to enable the write cache option of the disks. Enabling disk write cache will improve write I/O performance but carries a risk of losing data in case of power failure.
。 Enable Disk Read-ahead: Check to enable the read-ahead function of the disks. The system will preload data into the disk buffer based on previously retrieved data. This feature effectively improves the performance of sequential reads.
。 Enable Disk Command Queuing: Check to enable the command queuing function of the disks. Multiple commands are sent to a disk at once to improve performance.
。 Enable Disk Standby: Check to enable the auto spin down function of the disks. The disks will be spun down for power saving when they are idle for the specified period of time.
11. Click the Next button to continue.
Figure 11-11 Create an Auto Tiering Pool Wizard Step 5
12. By default, the relocation schedule is set to 00:00 daily, the relocation period is set to 00:00, which means the relocation process runs until it finishes, and the relocation rate is set to Fast.
13. After confirmation at the summary page, click the Finish button to create a pool.
Figure 11-12 An Auto Tiering Pool is Created
14. The pool has been created. If necessary, click the Create Pool button again to create others.
11.3.3. List Auto Tiering Pools
Pool View
Select one of the pools and it will display the related disk groups. Similarly, select one of the disk groups and it will display the related disk drives. The pool properties can be configured by clicking the functions button to the left side of the specific pool.
Figure 11-13 List Auto Tiering Pools
This table shows the column descriptions.
Table 11-3 Pool Column Descriptions
Column Name / Description
Name : Pool name.
Status : The status of the pool:
  Online : The pool is online.
  Offline : The pool is offline.
  Rebuilding : The pool is being rebuilt.
  Migrating : The pool is being migrated.
  Relocating : The pool is being relocated.
Health : The health of the pool:
  Good : The pool is good.
  Failed : The pool is failed.
  Degraded : The pool is not healthy and not complete. The reason could be missing or failed disks.
Total : Total capacity of the pool.
Free : Free capacity of the pool.
Available : Available capacity of the pool.
Thin Provisioning : The status of thin provisioning:
  Disabled.
  Enabled.
Auto Tiering : The status of auto tiering:
  Disabled.
  Enabled.
Volumes : The quantity of volumes in the pool.
RAID : The RAID level of the pool.
Current Controller : The current running controller of the pool. (This option is only visible when dual controllers are installed.)
Table 11-4 Disk Group Column Descriptions
Column Name / Description
No : The number of the disk group.
Status : The status of the disk group:
  Online : The disk group is online.
  Offline : The disk group is offline.
  Rebuilding : The disk group is being rebuilt.
  Migrating : The disk group is being migrated.
  Relocating : The disk group is being relocated.
Health : The health of the disk group:
  Good : The disk group is good.
  Failed : The disk group has failed.
  Degraded : The disk group is not healthy and not complete. The reason could be missing or failed disks.
Total : Total capacity of the disk group.
Free : Free capacity of the disk group.
Disks Used : The quantity of disk drives in the disk group.
Table 11-5 Disk Column Descriptions
Column Name / Description
Enclosure ID : The enclosure ID.
Slot : The position of the disk drive.
Status : The status of the disk drive:
  Online : The disk drive is online.
  Missing : The disk drive is missing in the pool.
  Rebuilding : The disk drive is being rebuilt.
  Transitioning : The disk drive is being migrated or is replaced by another disk when rebuilding occurs.
  Scrubbing : The disk drive is being scrubbed.
  Check Done : The health check of the disk drive is done.
Health : The health of the disk drive:
  Good : The disk drive is good.
  Failed : The disk drive has failed.
  Error Alert : S.M.A.R.T. error alerts.
  Read Errors : The disk drive has unrecoverable read errors.
Capacity : The capacity of the disk drive.
Disk Type : The type of the disk drive:
  [ SAS HDD | NL-SAS HDD | SAS SSD | SATA SSD ]
  [ 12.0Gb/s | 6.0Gb/s | 3.0Gb/s | 1.5Gb/s ]
Manufacturer : The manufacturer of the disk drive.
Model : The model name of the disk drive.
Auto Tiering View
The Auto Tiering function tab in the Pools function submenu is only visible when the auto tiering license is enabled. Select one of the pools and it will display the related tiering status.
The pool properties can be configured by clicking the functions button to the left side of the specific pool.
Figure 11-14 Auto Tiering Pools and Status
This table shows the column descriptions.
Table 11-6 Pool Tiering Status Column Descriptions
Column Name / Description
Tier Level : Tier categories; there are SSD, SAS, Nearline SAS, and SATA. The system hides tiers without any disk groups.
Tier Capacity : Total capacity of the tier.
Tier Used : Used capacity of the tier.
Move Up : The capacity prepared to move up to a higher tier.
Move Down : The capacity prepared to move down to a lower tier.
Move In : The capacity prepared to move in from other tiers.
Tier Status : Bar chart showing the tier status:
  Light Blue : Used capacity.
  Orange : The data that will move in.
  Gray : Unallocated.
11.3.4. Operations on Auto Tiering Pools
Most operations are described in the Configuring Storage Pools section. For more
information, please refer to the chapter 8.4.3, Operations on Thick Provisioning Pools
section and the chapter 9.3.3, Operations on Thin Provisioning Pools section. The
operations specific to auto tiering are described below.
Relocation Schedule
Click ▼ -> Relocation Schedule to set up the relocation schedule of an auto tiering pool. If
the Relocation Period is set to 00:00, the relocation process runs until it finishes.
Figure 11-15 Relocation Schedule
Relocate Now
Click ▼ -> Relocate Now to perform relocation immediately in an auto tiering pool. Similarly, if
the Relocation Period is set to 00:00, the relocation process runs until it finishes.
Figure 11-16 Relocate Now
11.3.5. Add a Tier (Disk Group) in an Auto Tiering Pool
Here is an example of adding a disk group (tier) to an auto tiering pool.
1. Select a pool, click ▼ -> Add Disk Group to add a disk group in the auto tiering pool.
Figure 11-17 Add Disk Group
2. Please select disks for adding a disk group in the pool. The quantity of selected disks must be the same as the quantity of disks in all disk groups in the auto tiering pool.
Select an Enclosure from the drop-down list to select disks from the expansion enclosures.
3. Click the OK button to add a disk group.
11.3.6. Hot Spares in an Auto Tiering Pool
In an auto tiering pool, hot spare drives can only replace drives of the same disk type.
For example, an SSD tier can only be assigned SSD type drives as hot spare drives.
Figure 11-18 Hot Spares in Auto Tiering Pool
11.4. Configure Volumes
This section will describe the operations of configuring volume in auto tiering pool.
11.4.1. Create a Volume in an Auto Tiering Pool
Here is an example of creating a volume in an auto tiering pool.
1. Select the Volumes function submenu, click the Create Volume button.
Figure 11-19 Create a Volume of Auto Tiering Pool Step 1
2. Enter a Volume Name for the volume. The maximum length of the volume name is 32 characters.
3. Select a Pool Name from the drop-down list. It will also display the available capacity of the pool.
4. Enter required Capacity. The unit can be selected from the drop-down list.
5. Select Volume Type. The options are RAID Volume (for general RAID usage) and
Backup Volume (for the target volume of local clone or remote replication).
6. Click the Next button to continue.
Figure 11-20 Create a Volume of Auto Tiering Pool Step 2
7. Volume advanced settings can also be configured optionally in this step:
。 Block Size: The options are 512 Bytes to 4,096 Bytes.
。 Priority: The options are High, Medium, and Low. The priority is relative to other volumes. Set it to High if the volume carries heavy I/O.
。 Background I/O Priority: The options are High, Medium, and Low. It will influence volume initialization, rebuild, and migration.
。 Tiering Policy: The options are Auto Tiering, Start Highest then Auto Tiering, Highest Available Tier, Lowest Tier, and No Data Movement. Please refer to the chapter 11.2.4, Tiering Policies section for detail.
。 Enable Cache Mode (Write-back Cache): Check to enable the cache mode function of the volume. Write-back optimizes the system speed but comes with the risk that the data may be inconsistent between cache and disks for a short time interval.
。 Enable Video Editing Mode: Check to enable the video editing mode function. It is optimized for video editing usage. Please enable it when your application is in a video editing environment. This option provides a more stable performance figure without high and low peaks, but is slower on average.
。 Enable Read-ahead: Check to enable the read-ahead function of the volume. The system will discern what data will be needed next based on what was just retrieved from
。 Enable Read-ahead: Check to enable the read ahead function of volume. The system will discern what data will be needed next based on what was just retrieved from
disk and then preload this data into the disk's buffer. This feature will improve performance when the data being retrieved is sequential.
。 Enable Space Reclamation: Check to enable the space reclamation function of the volume when the pool is auto tiering.
8. Click the Next button to continue.
Figure 11-21 Create a Volume of Auto Tiering Pool Step 3
9. After confirmation at the summary page, click the Finish button to create a volume.
10. The volume has been created. It will be initialized if it uses a protection RAID level (e.g., RAID 1, 3,
5, 6, 0+1, 10, 30, 50, and 60).
Figure 11-22 A Volume in Auto Tiering Pool is Created
11. A volume has been created. If necessary, click the Create Volume button to create another.
TIP:
SANOS supports instant RAID volume availability. The volume can be used immediately when it is initializing or rebuilding.
11.4.2. List Volumes and Operations on Volumes
Most operations are described in the chapter 8.5, Configuring Volumes section. For more
information about listing volumes, please refer to the chapter 8.5.2, List Volumes section. The following operations are specific to volumes in an auto tiering pool.
Change Volume Properties
Click ▼ -> Change Volume Properties to change the volume properties of the volume.
Figure 11-23 Change Volume Properties
Reclaim Space with Thin Provisioning Pool
Click ▼ -> Space Reclamation to reclaim space from the volume when the volume is in a thin provisioning pool. For more information about space reclamation, please refer to the
chapter 9.2.1, Space Reclamation section.
11.5. Configure LUN Mappings and Connect by Host Initiator
The next step is to configure LUN mappings and connect from the host initiator. For more
information about LUN mapping, please refer to the chapter 8.6, Configure LUN Mappings
section for detail. For more information about host initiator, please refer to the chapter 8.7,
Connect by Host Initiator section for detail.
11.6. Auto Tiering Notices
There are some notices about auto tiering.
The quantity of selected disks must be the same as the quantity of disks in the current disk groups.
The type of the selected disks must be the same.
Only disks faster than or equal to the lowest tier disk group can be added. The quantity of disk groups in the lowest tier must be greater than or equal to that of the higher tiers.
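A minimal sketch of the first three rules, assuming a simple in-memory model of the pool; the disk type ranking and the helper name are hypothetical, not part of SANOS.

# Disk types ranked from fastest to slowest (assumed ordering).
SPEED_RANK = {"SSD": 3, "SAS HDD": 2, "NL-SAS HDD": 1}

def can_add_disk_group(pool_groups, new_disks):
    """Check a new tier (disk group) against the auto tiering notices above.

    pool_groups: list of existing disk groups, each a list of disk type strings
    new_disks:   list of disk type strings for the group being added
    Returns (ok, reason).
    """
    group_size = len(pool_groups[0])
    if len(new_disks) != group_size:
        return False, "quantity must match the current disk groups"
    if len(set(new_disks)) != 1:
        return False, "all selected disks must be the same type"
    lowest = min(min(SPEED_RANK[d] for d in g) for g in pool_groups)
    if SPEED_RANK[new_disks[0]] < lowest:
        return False, "only disks faster than or equal to the lowest tier can be added"
    return True, "ok"

# Usage: a pool with one NL-SAS group accepts a matching SSD group.
pool = [["NL-SAS HDD"] * 3]
print(can_add_disk_group(pool, ["SSD"] * 3))   # (True, 'ok')
print(can_add_disk_group(pool, ["SSD"] * 4))   # quantity mismatch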
11.7. Transfer to Auto Tiering Pool
This section describes how to transfer a thick provisioning pool or a thin provisioning pool to an auto tiering pool. Note that the transfer is irreversible; think carefully about the potential consequences.
11.7.1. Transfer from Thick Provisioning Pool to Auto Tiering
First of all, make sure the auto tiering license is enabled. For more information about
enabling the license, please refer to the chapter 11.3.1, Enable Auto Tiering License
section. Then use the Add Disk Group function to add another tier (disk group). Here is an example of transferring a thick provisioning pool to an auto tiering pool.
1. Create a thick provisioning pool with three NL-SAS disk drives. Auto Tiering status is
Disabled.
Figure 11-24 Transfer Thick Provisioning Pool to Auto Tiering Step 1
2. Click ▼ -> Add Disk Group to transfer the thick provisioning pool to an auto tiering pool. Tiers (disk groups) must be added one at a time. Select three SSDs to add an SSD tier, and then click the OK button.
Figure 11-25 Transfer Thick Provisioning Pool to Auto Tiering Step 2
3. Use the same procedure to add a SAS tier with three SAS disk drives.
Figure 11-26 Transfer Thick Provisioning Pool to Auto Tiering Step 3
4. Auto Tiering status is Enabled. The thick provisioning pool has been transferred to an auto tiering pool.
CAUTION:
The action of transferring from the thick provisioning pool to auto tiering is irreversible. Please consider carefully all possible consequences before taking this step.
11.7.2. Transfer from Thin Provisioning Pool to Auto Tiering
First of all, make sure the auto tiering license is enabled. For more information about
enabling the license, please refer to the chapter 11.3.1, Enable Auto Tiering License
section. Then use the Add Disk Group function to add another tier (disk group). Here is an example of transferring a thin provisioning pool to an auto tiering pool.
1. Create a thin provisioning pool with three NL-SAS disk drives. Auto Tiering status is
Disabled.
Figure 11-27 Transfer Thin Provisioning Pool to Auto Tiering Step 1
2. Click ▼ -> Add Disk Group to transfer the thin provisioning pool to an auto tiering pool. In the Auto Tiering option, select Enabled. Tiers (disk groups) must be added one at a time. Select three SSDs to add an SSD tier, and then click the OK button.
Figure 11-28 Transfer Thin Provisioning Pool to Auto Tiering Step 2
3. Use the same procedure to add a SAS tier with three SAS disk drives.
Figure 11-29 Transfer Thin Provisioning Pool to Auto Tiering Step 3
4. The thin provisioning pool has been transferred to an auto tiering pool.
CAUTION:
The action of transferring from the thin provisioning pool to auto tiering is irreversible. Please consider carefully all possible consequences before taking this step.
12. Data Backup
This chapter describes an overview and operations of data backup options in SANOS.
SANOS provides built-in data backup services for protecting data from most unpredictable accidents including:
Volume snapshot which is described in the Managing Snapshots section.
Local volume cloning which is described in the Manage Local Clones section.
Remote replication to have data duplicated to remote sites; the function is described
in the Managing Remote Replications section.
The DATA BACKUP function menu provides submenus of Snapshots and Replications.
Clone function is built in the Volumes function tab of the Volumes function submenu.
12.1. Managing Snapshots
A volume snapshot is based on copy-on-write technology. It is a block-based, differential backup mechanism. Snapshot functionality is designed to be highly efficient; it keeps a point-in-time record of block-level, incremental data changes of the target LUN volume.
Snapshot can help recover a LUN volume to a previous state quickly to meet enterprise SLA
(Service Level Agreement) requirements for RPO (Recovery Point Objective) and RTO
(Recovery Time Objective).
Snapshot is the easiest and most effective measure to protect against ransomware attacks, virus attacks, accidental file deletion, accidental file modification, or unstable system hardware caused by a bad I/O cable connection, unstable power supply, etc.
12.1.1. Theory of Operation
Snapshot-on-the-box captures the instant state of data in the target volume in a logical sense. The underlying logic is copy-on-write: when a write occurs after the time of data capture, the original data at that location is moved out to a certain location before it is overwritten. That location, named the “Snapshot Volume”, is essentially a new volume which can be attached to a LUN provisioned to a host as a disk, like other ordinary volumes in the system.
Figure 12-1 Copy-on-Write Technology
Snapshot Rollback
Rollback restores the data back to the state of any previously captured point in time, should this be needed for any unfortunate reason. The snapshot volume is allocated within the same pool in which the snapshot is taken; we suggest reserving 20% of the pool capacity or more for snapshot space.
Writable Snapshot
Apart from the rollback function, snapshot allows direct access to the snapshot content with read or read/write permissions. There are two benefits: one is that it will not consume the free capacity of the storage pool, and the other is that it will not affect the content of the target volume. Before attaching a LUN to the snapshot, the snapshot needs to be exposed to prepare it for access. An example of these benefits: programmers or developers can easily test a previous version of their compiled code simply by mounting an older snapshot onto a LUN, instead of rolling back the snapshot and overwriting the existing source code.
Integration with Windows VSS (Volume Shadow Copy Services)
Snapshot is fully compatible with Windows VSS (Volume Shadow Copy Services). VSS is a host memory flush mechanism for creating consistent point in time copies of data known as “shadow copies”. A Windows agent utility is provided to bridge and synchronize the information between the SAN system and Windows operating system. After implementation, you can trigger a snapshot directly from the Windows operating system without any data consistency issues.
Figure 12-2 Volume Shadow Copy Services Workflow
Select Snapshots function submenu to setup snapshot space, take snapshots, list snapshots, setup snapshot schedule, and delete snapshots.
Figure 12-3 Snapshot Function Submenu
The maximum snapshot quantity per volume is 64, and the maximum volume quantity for snapshot is also 64. So a system can have 4,096 snapshots.
Table 12-1 Snapshot Parameters
Item / Value
Maximum Snapshot Quantity per Volume: 64
Maximum Volume Quantity for Snapshot: 64
Maximum Snapshot Quantity per System: 4,096 (= 64 x 64)
Maximum Snapshot Space Capacity of a Thin Provisioning Volume: 128TB
12.1.2. Configure Snapshot
Set Snapshot Space
Before taking a snapshot, SANOS must reserve some storage space for saving variant data.
Here’s an example of setting snapshot space.
1. There are two methods to set snapshot space. Select the Snapshots function submenu, click the Set Snapshot Space button. Or select the Volumes function submenu, select a volume, then click ▼ -> Set Snapshot Space.
Figure 12-4 Set Snapshot Space
2. Enter a Capacity which is reserved for the snapshot space, and then click the OK button.
We suggest setting the capacity to at least 20% of the volume. Two capacities now appear in the Snapshot Space column of the Volumes function tab: the first is the currently used snapshot space, and the second is the reserved total snapshot space.
Take a Snapshot
Here’s an example of taking a snapshot of the volume.
1. There are two methods to take a snapshot. Select the Snapshots function submenu, click
the Take Snapshot button. Or select the Volumes function submenu, select a volume, then click ▼ -> Take Snapshot.
Figure 12-5 Take Snapshot
2. Enter a Snapshot Name. The maximum length of the snapshot name is 32 characters.
Valid characters are [ A~Z | a~z | 0~9 | -_<> ].
3. Click the OK button. The snapshot is taken.
List Snapshots
Select the Snapshots function submenu; the drop-down lists at the top enable you to switch between volumes.
Figure 12-6 List Snapshots
This table shows the column descriptions.
Table 12-2 Snapshot Column Descriptions
Column Name / Description
Name : Snapshot name.
Status : The status of the snapshot:
  N/A : The snapshot is normal.
  Replicated : The snapshot is for clone or replication usage.
  Undeletable : The snapshot is undeletable.
  Aborted : The snapshot has run out of space and is aborted.
Health : The health of the snapshot:
  Good : The snapshot is good.
  Failed : The snapshot has failed.
Used : The amount of the snapshot space that has been used.
Exposure : Whether the snapshot is exposed or not.
Permission : The permission of the snapshot:
  N/A : Unknown when the snapshot is unexposed.
  Read-write : The snapshot can be read / written.
  Read-only : The snapshot is read only.
LUN : The LUN status of the snapshot:
  None : The snapshot is not mapped to any LUN.
  Mapped : The snapshot is mapped to a LUN.
Time Created : The time the snapshot was created.
Select the Volumes function submenu, select a volume, then click ▼ -> List Snapshots.
Figure 12-7 List Snapshots
Expose Snapshot
Here’s an example of exposing a snapshot.
1. Select the Snapshots function submenu, click ▼ -> Expose Snapshot to set writable snapshot capacity to expose the snapshot.
Figure 12-8 Set Writable Snapshot Capacity to Expose Snapshot
2. Enter a Capacity which is reserved for the snapshot. If the size is 0, the exposed snapshot will be read only. Otherwise, the exposed snapshot can be read / written, and the size will be the maximum capacity for writing.
3. Click the OK button.
4. Click ▼ -> Map LUN to map a LUN to the snapshot. Please refer to the chapter 8.6,
Configure LUN Mappings section for detail.
5. Done. The Snapshot can be used as a volume.
TIP:
If the capacity of the exposed snapshot is set to 0, the snapshot is read only.
If it is not 0, the snapshot is writable and becomes a writable snapshot.
Unexpose Snapshot
Here’s an example of unexposing a snapshot.
1. Select the Snapshots function submenu, click ▼ -> Unexpose Snapshot to unexpose the snapshot.
2. Click the OK button.
Configure LUN Mappings
1. Select the Snapshots function submenu, click ▼ -> Map LUN to map a LUN to the
snapshot. Please refer to the chapter 8.6, Configure LUN Mappings section for detail.
2. Click ▼ -> List LUNs to list all LUNs of the snapshot.
3. Click ▼ -> Unmap LUNs to unmap LUNs from the snapshot.
Delete One Snapshot
Select the Snapshots function submenu, click ▼ -> Delete to delete the snapshot. All snapshots earlier than the deleted one will also be deleted.
CAUTION:
If a snapshot has been deleted, the other snapshots which are earlier than it will also be deleted. The space occupied by these snapshots will be released after deleting.
Delete All Snapshots
To clean up all the snapshots, please follow these steps.
1. There are two methods to clean up snapshots. Select the Snapshots function submenu, click the Delete Snapshots button. Or select the Volumes function submenu, select a volume, and then click ▼ -> Delete Snapshots.
2. Click the OK button to apply. It will delete all snapshots of the volume and release the snapshot space.
12.1.3. Configure Rollback Snapshot
Rollback Snapshot
The data in a snapshot can be rolled back to the original volume. Please follow these procedures.
1. Select the Snapshots function submenu, select a snapshot, and then click ▼ ->
Rollback Snapshot to roll back the snapshot to the volume.
2. Click the OK button to apply.
CAUTION:
Before executing rollback, it is better that the disk is unmounted on the host computer for flushing data from cache.
When a snapshot has been rolled-back, the related snapshots which are
earlier than it will also be removed, but the remaining snapshots will be kept after rollback.
12.1.4. Configure Schedule Snapshots
Snapshots can be taken on a schedule, such as hourly or daily. Please follow these procedures.
1. There are two methods to set schedule snapshots. Select the Snapshots function submenu, click the Schedule Snapshots button. Or select the Volumes function submenu, select a volume, and then click ▼ -> Schedule Snapshots.
Figure 12-9 Schedule Snapshots
2. Check the schedules you want to set. They can be set monthly, weekly, daily, or hourly.
Check the Auto Mapping box to map a LUN automatically when the snapshot is taken. The LUN is allowed to be accessed by the hosts listed in the Allowed Hosts column.
3. Click the OK button to apply.
INFORMATION:
Daily snapshot will be taken at 00:00 daily.
Weekly snapshot will be taken every Sunday at 00:00.
Monthly snapshot will be taken every first day of the month at 00:00.
12.1.5. Snapshot Notices
The snapshot function applies the copy-on-write technique to a volume and provides a quick and efficient backup methodology. When a snapshot is taken, no data is copied at first; copying starts only when a request to modify the data comes in. The snapshot copies the original data to the snapshot space and then overwrites the original data with the new changes. With this technique, snapshot copies only the changed data instead of the whole data set, which saves a lot of disk space.
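A minimal copy-on-write sketch, assuming a block-addressed toy volume and a dictionary as the snapshot space; it shows only the idea that an original block is copied aside on its first overwrite after the snapshot, not the SANOS implementation.

class CowVolume:
    """Toy block volume with one copy-on-write snapshot."""
    def __init__(self, blocks):
        self.blocks = list(blocks)     # live data
        self.snapshot = None           # block index -> preserved original data

    def take_snapshot(self):
        self.snapshot = {}             # nothing is copied at snapshot time

    def write(self, index, data):
        if self.snapshot is not None and index not in self.snapshot:
            # First overwrite since the snapshot: preserve the original block.
            self.snapshot[index] = self.blocks[index]
        self.blocks[index] = data

    def read_snapshot(self, index):
        # Snapshot view = preserved blocks where they exist, live blocks elsewhere.
        return self.snapshot.get(index, self.blocks[index])

# Usage
v = CowVolume(["a", "b", "c"])
v.take_snapshot()
v.write(1, "B")
print(v.blocks)                                 # ['a', 'B', 'c'] current volume
print([v.read_snapshot(i) for i in range(3)])   # ['a', 'b', 'c'] point-in-time view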
Data Consistent Snapshot
Before using snapshot, users should understand why sometimes the data becomes corrupted after rollback of snapshot. Please refer to the following diagram.
When data is modified from the host computer, the data passes through the file system and memory of the host (write cache). The host then flushes the data from memory to the physical disks, no matter whether the disk is a local disk (IDE or SATA), DAS (SCSI or SAS), or SAN
(Fibre Channel or iSCSI). From the viewpoint of the storage device, it cannot control the behavior of the host side. It sometimes happens that when a snapshot is taken, some data is still in memory and has not been flushed to disk. Then the snapshot may contain an incomplete image of the original data. The problem does not belong to the storage device. To avoid this data inconsistency issue between a snapshot and the original data, the user has to make the operating system flush the data from the memory of the host (write cache) to disk before taking a snapshot.
Figure 12-10 Host data I/O Route
On Linux and UNIX platforms, a command named sync can be used to make the operating system flush data from the write cache to disk. For the Windows platform, Microsoft also provides a tool, sync, which does exactly the same thing as the sync command in
Linux/UNIX. It tells the OS to flush the data on demand. For more information about the sync tool, please refer to http://technet.microsoft.com/en-us/sysinternals/bb897438.aspx
Besides the sync tool, Microsoft developed VSS (volume shadow copy service) to prevent this issue. VSS is a mechanism for creating consistent “point-in-time” copies of data known as shadow copies. It is a coordinator between backup software, application (SQL or
Exchange…) and storage systems to make sure snapshots can occur without data irregularities. For more information about VSS, please refer to http://technet.microsoft.com/en-us/library/cc785914.aspx. QSAN storage systems fully support Microsoft VSS.
Run Out of Snapshot Space
Before using snapshot, snapshot space must be allocated from the pool capacity. After the snapshot has been working for a period of time, what happens if the snapshot size exceeds the user-defined snapshot space? There are two different situations:
1. If there are two or more snapshots, the system will try to remove the oldest snapshots (to release more space for the latest snapshot) until enough space is released.
2. If there is only one snapshot, the snapshot will fail because the snapshot space has run out.
For example, suppose there are two or more snapshots on a volume and the latest snapshot keeps growing. When the snapshot space runs out, the system will try to remove the oldest snapshot to release more space for the latest snapshot. As the latest snapshot keeps growing, the system keeps removing the old snapshots.
When the latest snapshot is the only one left in the system, there is no more snapshot space which can be released for incoming changes, and the snapshot will fail.
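The behavior above can be sketched as follows; the snapshot objects and sizes are hypothetical, and the point is only the order in which snapshots are sacrificed.

def reclaim_for_latest(snapshots, space_needed, space_free):
    """Drop oldest snapshots until there is room for the latest one's growth.

    snapshots: list ordered oldest -> latest, each a dict with 'name' and 'used'
    Returns the surviving list, or raises if only the latest snapshot remains.
    """
    snapshots = list(snapshots)
    while space_free < space_needed:
        if len(snapshots) <= 1:
            # Only the latest snapshot is left: it fails instead of growing.
            raise RuntimeError("snapshot space exhausted, latest snapshot fails")
        oldest = snapshots.pop(0)
        space_free += oldest["used"]
    return snapshots

# Usage
snaps = [{"name": "snap1", "used": 10}, {"name": "snap2", "used": 8},
         {"name": "snap3", "used": 2}]
print([s["name"] for s in reclaim_for_latest(snaps, space_needed=12, space_free=4)])
# ['snap2', 'snap3'] after removing snap1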
Maximum Snapshot Quantity per Volume
Up to 64 snapshots can be created per volume. What happens if the 65th snapshot is taken? There are two different situations:
1. If the snapshot is configured as a scheduled snapshot, the latest one (the 65th snapshot) will replace the oldest one (the first snapshot), and so on.
2. If the snapshot is taken manually, taking the 65th snapshot will fail and a warning message will be shown in the web user interface.
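A small sketch of both situations, assuming snapshots are kept in a simple list ordered oldest to latest; the function names are illustrative only.

MAX_SNAPSHOTS_PER_VOLUME = 64

def take_scheduled_snapshot(snapshots, new_snapshot):
    """Scheduled snapshots rotate: the 65th replaces the oldest one."""
    if len(snapshots) >= MAX_SNAPSHOTS_PER_VOLUME:
        snapshots.pop(0)              # drop the oldest snapshot
    snapshots.append(new_snapshot)
    return snapshots

def take_manual_snapshot(snapshots, new_snapshot):
    """Manual snapshots do not rotate: the 65th attempt is refused."""
    if len(snapshots) >= MAX_SNAPSHOTS_PER_VOLUME:
        raise RuntimeError("maximum snapshot quantity per volume reached")
    snapshots.append(new_snapshot)
    return snapshots

# Usage
snaps = [f"snap{i}" for i in range(64)]
print(take_scheduled_snapshot(snaps, "snap64")[0])   # 'snap1' - snap0 was replaced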
Rollback and Delete Snapshot
When a snapshot has been rolled back, the related snapshots which are earlier than it will also be removed, but the remaining snapshots will be kept after rollback. If a snapshot has been deleted, the other snapshots which are earlier than it will also be deleted. The space occupied by these snapshots will be released after deleting.
12.2. Managing Local Clones
Local clone is used to make a duplicate copy of a volume in the same storage pool, or in a separate storage pool within the same enclosure. When a local clone task is set up, the first clone is a full copy. From then on, the cloning is a differential copy, created using snapshot functionality. Manual and scheduled tasks are available for management flexibility.
In the event that the source volume is broken and it fails, IT managers can quickly switch to the cloned volume and resume data services.
Figure 12-11 Local Clone Diagram
12.2.1. Theory of Operation
At the beginning, all data is copied from the source volume to the target; this is called a full copy.
Afterwards, a snapshot is taken on the source volume and only the delta data is copied to perform an incremental copy.
TIP:
Please be fully aware that the incremental copy needs to use snapshot to compare the data difference. Therefore, having enough snapshot space for the volume is very important.
Figure 12-12 Local Clone Steps
In general, local clone operates as follows:
1. When a local clone task is created, two snapshots are taken on the Source Volume.
2. Data from the Source Snapshot T1 is copied to the Target Volume.
3. The Target Snapshot T1 is refreshed and becomes the common base.
4. Host keeps writing new data to the Source Volume.
5. At next synchronization, the Source Snapshot T2 is refreshed and copies only the changes since the last synchronization.
6. The Target Snapshot T2 is refreshed and becomes the common base.
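A sketch of this synchronization cycle, modeling each volume as a dictionary of blocks and treating the refreshed snapshots simply as saved block maps; the helper names are illustrative, not SANOS functions.

def full_copy(source_blocks):
    """Initial full copy; the copied state becomes the common base."""
    return dict(source_blocks), dict(source_blocks)   # target, common base

def incremental_copy(source_blocks, target_blocks, common_base):
    """Copy only blocks that changed since the last synchronization."""
    changed = {k: v for k, v in source_blocks.items() if common_base.get(k) != v}
    target_blocks.update(changed)
    new_base = dict(source_blocks)    # the refreshed snapshot becomes the new base
    return target_blocks, new_base, len(changed)

# Usage
source = {0: "a", 1: "b", 2: "c"}
target, base = full_copy(source)
source[1] = "B"                       # host keeps writing to the source volume
target, base, n = incremental_copy(source, target, base)
print(n, target[1])                   # 1 changed block copied, target sees 'B'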
The local clone operations are located under the Volumes function tab, which provides options to set up local clone tasks, start or stop tasks, set up local clone schedules, and delete tasks. The maximum clone task quantity per volume is 1, and the maximum clone task quantity per system is 64.
Table 12-3 Local Clone Parameters
Item / Value
Maximum Clone Task Quantity per Volume (Maximum Clone Pairs per Source Volume): 1
Maximum Clone Task Quantity per System: 64
Rollback Local Clone
Rollback of a local clone operates as follows:
1. After completing the local clone task, the data can be rolled back.
2. Exchange the roles of the source and target volumes, then follow the local clone operations to create a local clone task. Notice that the data in the original volume will be destroyed, so consider this carefully before doing it.
3. Or create a new target volume to back up to.
Figure 12-13 Rollback Local Clone
12.2.2. Configure Local Clone
Create a Local Clone Task
Here’s an example of creating a local clone from source volume to target one. Assume that we define the name in the example.
Source Volume Name: Source-Vol-1
Target Volume: Name: Target-Vol-2
1. Before cloning, a backup target volume must exist. Select the Volumes function submenu, click the Create Volume button, and then set the Volume Type to Backup Volume. Please refer to the chapter 8.5.1, Create a Volume section for detail.
Figure 12-14 Create Target Volume
Figure 12-15 List Source and Target Volumes for Local Clone
2. Select the source volume, and then click ▼ -> Create Local Clone.
3. Select a target volume, and then click the OK button.
Figure 12-16 Create Local Clone
TIP:
The volume type of the target volume should be set as Backup Volume.
Please refer to the chapter 8.5.1, Create a Volume in the Storage
Management chapter.
4. At this time, if the source volume has no snapshot space, snapshot space will be allocated automatically for clone usage. The capacity depends on the parameters of the
Cloning Options.
5. Done, now ready to start cloning volumes.
Figure 12-17 List Source and Target Volumes
Start Local Clone Task
To start cloning, please follow the procedures.
1. Select the source volume, and then click ▼ -> Start Local Clone.
2. Click the OK button. The source volume will take a snapshot, and then start cloning.
Stop Local Clone Task
To stop cloning, please follow the procedures.
1. Select the source volume, and then click ▼ -> Stop Local Clone.
2. Click the OK button to stop cloning.
Delete Local Clone Task
To clear the clone task, please follow the procedures.
1. Select the source volume, and then click ▼ -> Delete Local Clone.
2. Click the OK button to clear clone task.
12.2.3. Configure Schedule Local Clone Tasks
The clone task can be set on a schedule, such as hourly or daily. Please follow these procedures.
1. Select the source volume, and then click ▼ -> Schedule Local Clone.
Figure 12-18 Schedule Local Clone
2. Check the schedules which you want. They can be set monthly, weekly, daily, or hourly. Click the OK button to apply.
INFORMATION:
Daily snapshots will be taken at 00:00 daily.
Weekly snapshot will be taken every Sunday at 00:00.
Monthly snapshot will be taken every first day of the month at 00:00.
12.2.4. Local Cloning Options
Select the Volumes function submenu, click the Local Clone Options button. There are three options, described in the following.
Figure 12-19 Local Cloning Options
Automatic Snapshot Space Allocation Ratio: This setting is the ratio of the source volume and snapshot space. If the ratio is set to 2, when there is no snapshot space assigned for the volume, the system will automatically reserve a free pool space to set as the snapshot space with twice capacity of the volume. The options are 0.5 ~ 3.
Automatic Snapshot Checkpoint Threshold: The setting will be effective after enabling schedule clone. The threshold will monitor the usage amount of the snapshot space.
When the used snapshot space achieves the threshold, system will take a snapshot and start clone process automatically. The purpose of threshold could prevent the incremental copy failure immediately when running out of the snapshot space. For example, the default threshold is 50%. The system will check the snapshot space every hour. When the snapshot space is used over 50%, the system will start clone task automatically. And then continue monitoring the snapshot space. When the rest snapshot space has been used 50%, in other words, the total snapshot space has been used 75%, the system will start clone task again.
Restart the task an hour later if failed: The setting will be effective after enabling a scheduled clone. When the snapshot space runs out, the volume clone process will be stopped because there is no more available snapshot space. If this option is checked, the system will automatically clear the snapshots used by the clone in order to release snapshot space, and the clone task will be restarted after an hour. This restarted task will start with a full copy.
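The successive trigger points implied by the checkpoint threshold can be computed as in the sketch below (the first two values match the 50% / 75% example above); the function name is illustrative.

def checkpoint_trigger_points(threshold=0.5, levels=4):
    """Successive snapshot-space usage levels at which a clone is triggered.

    With the default 50% threshold the clone starts at 50% of the snapshot
    space, then at 50% of the remainder (75% in total), and so on.
    """
    used, points = 0.0, []
    for _ in range(levels):
        used += (1.0 - used) * threshold
        points.append(round(used * 100, 2))
    return points

print(checkpoint_trigger_points())    # [50.0, 75.0, 87.5, 93.75]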
CAUTION:
The default snapshot space allocated by the system is two times the capacity of the source volume. This is our suggested value. If the user sets the snapshot space manually to less than the default value, be aware that the clone task will fail if the snapshot space is not enough.
12.2.5. Local Clone Notices
If, while a clone is processed manually, the incremental data of the volume exceeds the snapshot space, the clone will complete the task but the clone snapshot will fail. The next time you try to start the clone, you will get a warning message “There is not enough snapshot space for the operation”. The user needs to clean up the snapshot space in order to use the clone feature. Each time the clone snapshot fails, the system loses the reference base of the incremental data, so a full copy will be needed the next time a clone task starts.
12.3. Managing Remote Replications
Remote replication is a block-level, asynchronous, differential remote volume backup function through LAN or WAN. It has many powerful capabilities such as unlimited bandwidth, traffic shaping, and multiple connections per replication task.
Figure 12-20 Remote Replication Diagram
Remote replication uses the iSCSI function to set up a replication connection. It can use the full bandwidth of the assigned network port to allow the best backup speed. However, in order to balance replication traffic and non-replication traffic, the traffic shaping function can help to reserve necessary bandwidth for non-replication I/O.
If the replication task requires more bandwidth, Remote replication allows multiple connections per task by intelligently balancing the backup task across multiple connections to enhance the bandwidth.
Both manual and scheduled replication tasks are supported for flexible management. To handle remote replication of huge amounts of data, remote replication allows transforming a local clone task into a remote replication task. You can perform the local clone first for the full copy.
Then use the disk roaming function to physically transport the disk drives that contain the
cloned volume to the remote site. Lastly, transform the local clone task into a remote replication task.
Figure 12-21 Local Clone Transfers to Remote Replication
12.3.1. Remote Replication Topologies
Remote replication supports multiple topologies to suit various disaster recovery configurations: one-directional, bi-directional, one-to-many, many-to-one, and many-to-many. Both the source volume and destination volume in a replication connection are exclusive to the pair; neither can serve as the source or destination volume of a different replication connection. Each SAN storage system can support up to 32 replication tasks concurrently. Below are the supported topologies.
One-Directional
Figure 12-22 One-Directional Remote Replication
A Source Volume (S) in Site A is replicating to a Target Volume (T) in Site B. This is the most basic remote replication topology.
Bi-Directional
Figure 12-23 Bi-Directional Remote Replication
Each system in a two system topology acts as a replication target for the other’s production data. A Source Volume (S1) in Site A is replicating to a Target Volume (T1) in Site B. And a
Source Volume (S2) in Site B is replicating to a Target Volume (T2) in Site A.
One-to-Many
Figure 12-24 One-to-Many Remote Replication
A single source system replicates different storage resources to multiple target systems. A
Source Volume (S1) in Site A is replicating to a Target Volume (T1) in Site B. At the same time, a Source Volume (S2) in Site A is replicating to a Target Volume (T2) in Site C. The same applies to S3 in Site A replicating to T3 in Site D.
Many-to-One
Figure 12-25 Many-to One Remote Replication
Multiple source systems replicate to a single target system. A Source Volume (S1) in Site B is replicating to a Target Volume (T1) in Site A. At the same time, a Source Volume (S2) in
Site C is replicating to a Target Volume (T2) in Site A. The same applies to S3 in Site D replicating to T3 in Site A.
Many-to-Many
Figure 12-26 Many-to Many Remote Replication
Combining bi-directional, one-to-many, and many-to-one, remote replication also supports a many-to-many topology. Multiple source systems replicate to multiple target systems. A Source Volume (S1) in Site A is replicating to a Target Volume (T1) in Site B. At the same time, a Source Volume (S2) in Site B is replicating to a Target Volume (T2) in Site
A. The same applies to S3 to T3, S4 to T4, …, S8 to T8.
TIP:
Note that all supported topologies have a 1-to-1 configuration for each individual replication session in the topology.
The maximum replication task quantity per system is 32. It means that 32 systems are the maximum quantity of any many-to-one or one-to-many replication configuration.
12.3.2. Theory of Operation
At the beginning, replication will copy all data from the source volume to the target. It is also called a full copy. Afterwards, use snapshot technology to perform the incremental copy.
TIP:
Please be fully aware that the incremental copy needs to use snapshot to compare the data difference. Therefore, sufficient snapshot space for the volume is very important.
Figure 12-27 Remote Replication Steps
In general, remote replication operates in the following way.
1. When a remote replication task is created, two snapshots are taken on the
Source Volume as time goes by.
2. Data from the Source Snapshot T1 is copied to the Target Volume.
3. The Target Snapshot T1 is refreshed and becomes the common base.
4. Host keeps writing new data to the Source Volume.
5. At next synchronization, the Source Snapshot T2 is refreshed and copies only the changes since last synchronization.
6. The Target Snapshot T2 is refreshed and becomes the common base.
Select the Remote Replications function submenu to set up remote replication tasks, start or stop tasks, set up the remote replication schedule, delete tasks, and also set up traffic shaping.
Figure 12-28 Remote Replications Function Submenu
The maximum replication task quantity per volume is 1, the maximum replication task quantity per system is 32, and the maximum traffic shaping quantity per system is 8.
Table 12-4 Remote Replication Parameters
Item / Value
Maximum Replication Task Quantity per Volume (Maximum Replication Pairs per Source Volume): 1
Maximum Replication Task Quantity per System: 32
Maximum Traffic Shaping Quantity per System: 8
Maximum iSCSI Multi-path Quantity in a Replication Task: 2
Maximum iSCSI Multiple Connection Quantity per Replication Task Path: 4
12.3.3. Configure Remote Replication
Create a Remote Replication Task
Here’s an example of creating a remote replication task from source volume to target one.
Assume that we define the name and the IP addresses in the example.
Figure 12-29 Example of Creating a Remote Replication Task
Site A Source Unit Configuration:
Controller 1, Onboard LAN 1 IP Address: 10.10.1.1
Controller 1, Onboard LAN 2 IP Address: 10.10.1.2
Controller 2, Onboard LAN 1 IP Address: 10.10.1.3
Controller 2, Onboard LAN 2 IP Address: 10.10.1.4
Source Volume Name: Source-Vol-1
Site B Target Unit Configuration:
Controller 1, Onboard LAN 1 IP Address: 10.10.1.101
Controller 1, Onboard LAN 2 IP Address: 10.10.1.102
Controller 2, Onboard LAN 1 IP Address: 10.10.1.103
Controller 2, Onboard LAN 2 IP Address: 10.10.1.104
Target Volume Name: Target-Vol-2
Operate SANOS web UI of Site B target unit:
1. Before replication, a backup target volume must exist. Select the Volumes function submenu, click the Create Volume button, and then set the Volume Type to Backup Volume. Please refer to the chapter 8.5.1, Create a Volume section in the Storage
Management chapter.
Figure 12-30 Create Target Volume in Site B
2. After creating a target volume, please also set up snapshot space, so that the snapshot of the source volume can be replicated to the target volume. Please refer to the chapter 12.1.2, Configure Snapshot section.
Figure 12-31 List Target Volume in Site B
3. Map a LUN of the target volume. Please refer to the chapter 8.6, Configure LUN
Mappings section in the Storage Management chapter.
Operate web UI of Site A source unit:
4. Select the Remote Replications function submenu, click the Create button.
Figure 12-32 Create a Remote Replication Task Step 1
5. Select a source volume, and then click the Next button.
Figure 12-33 Create a Remote Replication Task Step 2
6. Select the Source Port and input the Target IP Address, and then click the Next button.
TIP:
Leave the setting of the Source Port to Auto if you don’t want to assign a fixed one. The system will try to connect to the target IP address automatically.
Figure 12-34 Create a Remote Replication Task Step 3
7. Select an Authentication Method and input the CHAP Username and CHAP Password if needed. Select a Target Name, and then click the Next button.
TIP:
This CHAP account is the iSCSI authentication of the Site B target unit.
For more information about CHAP, please refer to the chapter 7.3.4,
Configure iSCSI CHAP Accounts section in the Host Configuration
chapter.
Select No Authentication Method if CHAP is not enabled on the Site B target unit.
Figure 12-35 Create a Remote Replication Task Step 4
8. Select a Target LUN in Site B target unit. Finally, click the Finish button.
Figure 12-36 Remote Replication Task is Created
9. The replication task is created. At this time, if the source volume has no snapshot space, snapshot space will be allocated automatically for replication usage. The size depends on the parameters of the Replication Options.
Start Remote Replication Task
To start the remote replication task, please follow the procedures.
Launch the SANOS web UI of Site A source unit:
1. Select the Remote Replications function submenu, select the task, and then click ▼ ->
Start.
2. Click the OK button. The source volume will take a snapshot, and then start remote replication.
Stop Remote Replication Task
To stop remote replication task, please follow the procedures.
Launch the SANOS web UI of Site A source unit:
1. Select the Remote Replications function submenu, select the task, and then click ▼ ->
Stop.
2. Click the OK button to stop remote replication.
Delete Remote Replication Task
To delete the remote replication task, please follow the procedures.
Launch the SANOS web UI of Site A source unit:
1. Select the Remote Replications function submenu, select the task, and then click ▼ ->
Delete.
2. Click the OK button to delete the remote replication task.
12.3.4. Configure Traffic Shaping
The traffic shaping function can help reserve necessary bandwidth for non-replication I/O operations. There are eight shaping groups which can be set. In each shaping group, peak and off-peak time slots are provided for different bandwidths. Here’s an example of setting up a shaping group.
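A sketch of what one shaping group amounts to, assuming a simple peak/off-peak bandwidth lookup by hour of day; the class and field names are hypothetical, not the SANOS configuration schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ShapingGroup:
    """One of the eight shaping groups: peak bandwidth plus an optional off-peak slot."""
    peak_mb: int
    off_peak_mb: Optional[int] = None
    off_peak_hours: range = range(0, 0)    # e.g. range(0, 6) for 00:00-06:00

    def bandwidth_at(self, hour):
        # Use the off-peak bandwidth only inside the defined off-peak hours.
        if self.off_peak_mb is not None and hour in self.off_peak_hours:
            return self.off_peak_mb
        return self.peak_mb

# Usage: cap replication at 100 MB during the day, allow 400 MB overnight.
group1 = ShapingGroup(peak_mb=100, off_peak_mb=400, off_peak_hours=range(0, 6))
print(group1.bandwidth_at(14), group1.bandwidth_at(3))   # 100 400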
Configure Traffic Shaping
To configure traffic shaping groups, please follow the procedures.
Launch the SANOS web UI of Site A source unit:
1. Select the Remote Replications function submenu, click the Traffic Shaping
Configuration button.
Figure 12-37 Shaping Setting Configuration
2. Select a Shaping Group to setup.
3. Input the bandwidth (MB) at the Peak time.
4. If needed, check the Enable Off-Peak box, input the bandwidth (MB) at Off-Peak time, and define the off-peak hours.
5. Click the OK button.
Set Traffic Shaping on Remote Replication Task
To set a shaping group on the remote replication task, please follow the procedures.
Operate web UI of Site A source unit:
1. Select the Remote Replications function submenu, select the task, and then click ▼ ->
Set Traffic Shaping.
Figure 12-38 Set Traffic Shaping
2. Select a Shaping Group from the drop-down list, and then click the OK button.
3. The shaping group is applied to the remote replication task.
12.3.5. Configure Schedule Remote Replication Tasks
The replication task can be set on a schedule, such as hourly or daily. Please follow these procedures.
Operate web UI of Site A source unit:
1. Select the Remote Replications function submenu, select the task, and then click ▼ ->
Schedule.
2. Check the schedules which you want. They can be set monthly, weekly, daily, or hourly. Click the OK button to apply.
INFORMATION:
Daily snapshot will be taken at 00:00 daily.
Weekly snapshot will be taken every Sunday at 00:00.
Monthly snapshot will be taken every first day of the month at 00:00.
Figure 12-39 Schedule Remote Replication
12.3.6. Replication Options
Select the Remote Replications function submenu, click the Replications Options button.
There are three options, described in the following.
Figure 12-40 Replication Options
Automatic Snapshot Space Allocation Ratio: This setting is the ratio of the source volume and snapshot space. If the ratio is set to 2, when there is no snapshot space assigned for the volume, the system will automatically reserve a free pool space to set as the snapshot space with twice capacity of the volume. The options are 0.5 ~ 3.
Automatic Snapshot Checkpoint Threshold: The setting will be effective after enabling schedule replication. The threshold will monitor the usage amount of the snapshot space. When the used snapshot space achieves the threshold, system will take a snapshot and start replication process automatically. The purpose of threshold could prevent the incremental copy failure immediately when running out of the snapshot space. For example, the default threshold is 50%. The system will check the snapshot space every hour. When the snapshot space is used over 50%, the system will start replication task automatically. And then continue monitoring the snapshot space. When the rest snapshot space has been used 50%, in other words, the total snapshot space has been used 75%, the system will start replication task again.
Restart the task an hour later if failed: This setting takes effect after schedule replication is enabled. When the snapshot space runs out, the volume replication process stops because there is no more available snapshot space. If this option is checked, the system automatically clears the replication snapshots to release snapshot space, and the replication task is restarted after an hour. The restarted task performs a full copy.
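As a worked illustration of the arithmetic behind the options above, the following Python snippet computes the successive checkpoint trigger points and the space reserved by the allocation ratio. This is a sketch of the behaviour described in this section, not SANOS code, and the volume size is a made-up example.

```python
def checkpoint_trigger_points(threshold: float = 0.5, count: int = 4) -> list:
    """Cumulative snapshot-space usage (fraction of the total) at which the
    automatic checkpoint fires. With the default 50% threshold, replication
    is triggered at 50%, then 75%, then 87.5%, ... of the total space,
    because each check applies the threshold to the space still free."""
    points, used = [], 0.0
    for _ in range(count):
        used += (1.0 - used) * threshold   # threshold applies to the remaining free space
        points.append(used)
    return points

print(checkpoint_trigger_points())   # [0.5, 0.75, 0.875, 0.9375]

# Space reserved by the Automatic Snapshot Space Allocation Ratio (illustrative values):
ratio, volume_gb = 2, 100
print(ratio * volume_gb)             # 200 GB of snapshot space when the ratio is 2
```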
CAUTION:
The default snapshot space allocated by the system is twice the capacity of the source volume, which is the recommended value. If the user sets the snapshot space manually to less than the default value, the user assumes the risk that the snapshot space may run out and the replication task will fail.
12.3.7. Configure MPIO in Remote Replication Task
In a remote replication scenario, MPIO (Multi-Path I/O) is supported for redundancy. Normally, the remote replication task runs on the master controller (usually controller 1) of the source unit. Data is replicated from the master controller of the source unit to the controller of the target unit whose target IP address was set when the remote replication task was created (usually also controller 1). A second path from the source unit can be added to controller 2 of the target unit. The maximum number of MPIO paths in a remote replication task is 2. The following is the remote replication MPIO diagram.
Figure 12-41 Remote Replication MPIO Diagram
How Redundancy Works with Remote Replication
If controller 1 fails on the source unit, the replication connection will be taken over by controller 2 and the replication task will continue running.
Figure 12-42 Remote Replication Source Controller Fail Diagram
In another scenario, when controller 1 fails on the target unit, the replication connection fails over to the second path, from controller 1 of the source unit to controller 2 of the target unit.
Figure 12-43 Remote Replication Target Controller Fail Diagram
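A minimal sketch of the two-path failover behaviour described above follows. It is conceptual only, not SANOS code; the controller numbering follows the diagrams.

```python
# Two paths of a remote replication task: the original path created with the
# task, and the second path added via Add Multipath.
paths = [
    {"source_ctrl": 1, "target_ctrl": 1},
    {"source_ctrl": 1, "target_ctrl": 2},
]

def pick_path(failed_source_ctrl=None, failed_target_ctrl=None):
    """Return the first usable path; replication continues while one remains."""
    for path in paths:
        # If the preferred source controller failed, the surviving controller takes over.
        src = path["source_ctrl"] if path["source_ctrl"] != failed_source_ctrl else 2
        if path["target_ctrl"] == failed_target_ctrl:
            continue                              # that target controller is unreachable
        return {"source_ctrl": src, "target_ctrl": path["target_ctrl"]}
    return None                                   # no usable path, replication stops

print(pick_path())                                # C1 -> C1, normal operation
print(pick_path(failed_source_ctrl=1))            # source C2 takes over the connection
print(pick_path(failed_target_ctrl=1))            # second path to target controller 2
```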
Add Multipath in Remote Replication Task
Here’s an example of adding a second path to a remote replication task. Assume the IP addresses are defined as in the following example.
Site A Source Unit Configuration:
Controller 1, Onboard LAN 1 IP Address: 10.10.1.1
Controller 1, Onboard LAN 2 IP Address: 10.10.1.2
Controller 2, Onboard LAN 1 IP Address: 10.10.1.3
Controller 2, Onboard LAN 2 IP Address: 10.10.1.4
Source Volume Name: Source-Vol-1
Site B Target Unit Configuration:
Controller 1, Onboard LAN 1 IP Address: 10.10.1.101
Controller 1, Onboard LAN 2 IP Address: 10.10.1.102
Controller 2, Onboard LAN 1 IP Address: 10.10.1.103
Controller 2, Onboard LAN 2 IP Address: 10.10.1.104
Target Volume Name: Target-Vol-2
Operate web UI of Site A source unit:
1. Select the Remote Replications function submenu, select the task, and then click ▼ ->
Add Multipath.
Figure 12-44 Add Multipath in Remote Replication Step 1
2. Select the Source Port and input the Target IP Address, and then click the Next button.
TIP:
Leave the setting of Source Port to Auto if you don’t want to assign a specific port. The system will try to connect to the target IP address automatically.
Figure 12-45 Add Multipath in Remote Replication Step 2
3. Select an Authentication Method and input the CHAP Username and CHAP Password if needed. Select a Target Name, and then click the Next button.
TIP:
This CHAP account is the iSCSI authentication of the Site B target unit.
For more information about CHAP, please refer to the chapter 7.3.4,
Configure iSCSI CHAP Accounts section in the Host Configuration
chapter.
Select No Authentication Method if there is no CHAP enabled on the Site
B target unit.
Figure 12-46 Add Multipath in Remote Replication Step 3
4. Select a Target LUN on the Site B target unit. Finally, click the Finish button.
Figure 12-47 List Multipath in Remote Replication Task
5. The second path in the remote replication task is added.
Delete Multipath in Remote Replication Task
To delete a multipath of the replication task, please follow the procedures below.
Operate web UI of Site A source unit:
1. Select the Remote Replications function submenu, select the task path, and then click
▼ -> Delete.
2. Click the OK button to delete the path.
12.3.8. Configure MC/S in Remote Replication Task Path
MC/S (Multiple Connections per Session) is another feature of remote replication. If more than one iSCSI port is available on the source unit and can connect to other iSCSI ports on the target unit, MC/S connections can be added to increase the replication speed. The maximum number of MC/S connections per task path is 4.
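Conceptually, MC/S lets one task path spread replication traffic over several source/target port pairs. The sketch below illustrates the idea with a simple round-robin dispatcher; the round-robin policy and the IP pairs are assumptions for illustration, not the documented SANOS scheduling.

```python
from itertools import cycle

MAX_CONNECTIONS_PER_PATH = 4              # the documented MC/S limit per task path

connections = [
    ("10.10.1.1", "10.10.1.101"),         # connection created with the path
    ("10.10.1.2", "10.10.1.102"),         # connection added via Add Connection
]
assert len(connections) <= MAX_CONNECTIONS_PER_PATH

dispatch = cycle(connections)             # assumed round-robin distribution of requests
for request in range(4):                  # pretend these are replication requests
    src_ip, dst_ip = next(dispatch)
    print(f"request {request}: {src_ip} -> {dst_ip}")
```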
Add Connections in Remote Replication Task Path
Here’s an example of adding a second connection to a remote replication task path. Assume the IP addresses are defined as in the following example.
Site A Source Unit Configuration:
Controller 1, Onboard LAN 1 IP Address: 10.10.1.1
Controller 1, Onboard LAN 2 IP Address: 10.10.1.2
Controller 2, Onboard LAN 1 IP Address: 10.10.1.3
Controller 2, Onboard LAN 2 IP Address: 10.10.1.4
Source Volume Name: Source-Vol-1
Site B Target Unit Configuration:
Controller 1, Onboard LAN 1 IP Address: 10.10.1.101
Controller 1, Onboard LAN 2 IP Address: 10.10.1.102
Controller 2, Onboard LAN 1 IP Address: 10.10.1.103
Controller 2, Onboard LAN 2 IP Address: 10.10.1.104
Target Volume Name: Target-Vol-2
Operate web UI of Site A source unit:
1. Select the Remote Replications function submenu, select the task path, and then click
▼ -> Add Connection.
Figure 12-48 Add Path Connection
2. Select the Source Port and input the Target IP Address, and then click the Next button.
TIP:
Leave the setting of Source Port to Auto if you don’t want to assign a specific port. The system will try to connect to the target IP address automatically.
Figure 12-49 List Multiple Connections in Remote Replication Task Path
3. The second connection in the remote replication task path is added. If necessary, click ▼ -> Add Connection to add another.
Delete Connections in Remote Replication Task Path
To delete connections from the replication task path, please follow the procedures below.
1. Operate Site A source unit. Select the Remote Replications function submenu, select the task path, and then click ▼ -> Delete Connection.
Figure 12-50 Delete Path Connection
2. Select the connection(s) you want to delete, and then click the OK button.
3. The selected connection(s) are deleted.
TIP:
When deleting multiple connections, note that at least one connection must remain.
12.3.9. Local Clone Transfers to Remote Replication
Performing a full copy over a LAN or WAN when a replication task runs for the first time has always been a problem; with limited network bandwidth, it may take days or weeks to replicate the data from source to target. Two methods are provided to help users shorten the time needed for the initial full copy.
1. One is to skip the full copy on a new, clean volume. The term “clean” means that the volume has never been written since it was created. For a newly created volume that has not been accessed, the system recognizes it and automatically skips the full copy when a remote replication task is first created on that volume.
TIP:
Any I/O access to the newly created volume marks it as “not clean”, even if the “Erase” function was executed when the volume was created. In that case the full copy will take place.
2. The other way is to use the local volume clone function, a local data copy function between volumes, to perform the full copy the first time. Then remove all the physical disk drives of the target volume and move them to the target unit. Finally, convert the local clone task into a remote replication task, which performs differential copies afterwards.
Figure 12-51 Local Clone Transfers to Remote Replication Diagram
To transfer a local volume clone to a remote replication task, please follow the procedures below.
Launch SANOS web UI of Site A source unit:
1. Select the Volumes function submenu, select a source volume, and click ▼ -> Create
Local Clone to create a local clone task. For more information about local clone, please
refer to the chapter 12.2.2, Configure Local Clone section.
2. Click ▼ -> Start Local Clone to execute a full copy of the data from the source volume to the target volume.
3. Click ▼ -> Convert to Remote Replication to change the local clone task into a remote replication task.
Figure 12-52 List Source and Target Volumes for Local Clone
4. The Clone column of the source volume will change from the name of the target volume to QRep.
CAUTION:
Converting a local clone task to a remote replication task is only possible after the clone task has finished. This change is irreversible.
5. Select the Pools function submenu, select the pool, and click ▼ -> Deactive.
6. Remove all the physical disk drives, carry them to Site B, and plug them into the target unit.
Operate web UI of Site B target unit:
7. Select the Pools function submenu, select the pool, and click ▼ -> Active.
8. Select the Volumes function submenu and set up the snapshot space so that snapshots of the source volume can be replicated to the target volume. Please refer to the chapter 12.1.2, Configure Snapshot section.
9. Map a LUN of the target volume. Please refer to the chapter 8.6, Configure LUN
Mappings section in the Storage Management chapter.
Launch SANOS web UI of Site A source unit:
10. Select the Remote Replications function submenu, and then click the Rebuild button to rebuild the remote replication task that was converted from the clone task.
Figure 12-53 Rebuild Clone Relationship Step 1
11. Select a source volume, and then click the Next button.
Figure 12-54 Rebuild Clone Relationship Step 2
12. Select the Source Port and input the Target IP Address, and then click the Next button.
TIP:
Leave the setting of Source Port to Auto if you don’t want to assign a fixed one. The system will try to connect to the target IP address automatically.
Figure 12-55 Rebuild Clone Relationship Step 3
13. Select an Authentication Method and input the CHAP Username and CHAP Password if needed. Select a Target Name, and then click the Next button.
TIP:
This CHAP account is the iSCSI authentication of the Site B target unit.
For more information about CHAP, please refer to the chapter 7.3.4,
Configure iSCSI CHAP Accounts section in the Host Configuration
chapter.
Select No Authentication Method if there is no CHAP enabled on the Site
B target unit.
Figure 12-56 Rebuild Clone Relationship Step 4
14. Select a Target LUN on the Site B target unit. Finally, click the Finish button.
Figure 12-57 Remote Replication Task is Created
15. The replication task converted from the local clone is created.
13. Monitoring
The MONITORING function menu provides submenus of Log Center, Enclosures, and
Performance.
13.1. Log Center
Select the Event Logs function tab in the Log Center function submenu to show event messages. Select or unselect the checkbox of Information, Warning, or Error levels to show or hide those particular events.
Figure 13-1 Log Center Function Submenu
Figure 13-2 Event Logs
The event logs are displayed in reverse order, which means the latest event log is on the first (top) page. The logs are saved on the first four disk drives of the head unit; each of these disk drives holds one copy of the event log. For one system there are four copies of the event logs, so users can still check the event log when there are failed disks. If there are no disk drives in the first four slots, the event logs are kept in memory temporarily and will disappear after a system reboot.
The event logs record all system events. Each event has a time stamp, identifies the type of event that occurred, and has one of the following severities:
Error: A failure occurred that may affect data integrity or system stability. Correct the problem as soon as possible.
Warning: A problem occurred that may affect system stability, but not data integrity.
Evaluate the problem and correct it if necessary.
Information: A recorded operation that may help with debugging.
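The Information, Warning, and Error checkboxes act as a severity filter on the displayed logs. A minimal sketch of that filtering follows; it is illustrative only, and the sample log entries are made up.

```python
# Sample event log entries as (severity, message) pairs; the content is invented.
event_logs = [
    ("Error", "Disk slot 3 removed"),
    ("Warning", "Voltage +5V below Low Warning"),
    ("Information", "Admin login from 192.168.1.10"),
]

selected_levels = {"Error", "Warning"}       # the checkboxes currently ticked

for severity, message in event_logs:
    if severity in selected_levels:          # hide levels whose checkbox is cleared
        print(f"[{severity}] {message}")
```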
13.1.1. Operations on Event Logs
The following options are available on this tab:
Mute Buzzer
Click the Mute Buzzer button to stop the alarm when the system is alerting.
Download Event Logs
Click the Download button to save the event log as a file. A filter dialog pops up as shown below. The default is “Download all event logs”.
Figure 13-3 Download Event Logs
Clear Event Logs
Click the Clear button to clear all event logs.
TIP:
Please plug in at least one of the first four hard drives; the event logs can then be saved and displayed at the next system boot. Otherwise, the event logs cannot be saved.
13.2. Monitoring Enclosures
The Enclosures function submenu provides Hardware Monitoring and SES tabs to show and monitor enclosure information.
Figure 13-4 Enclosure Function Submenu
13.2.1. Hardware Monitoring
Select the Hardware Monitoring function tab in the Enclosures function submenu to show the current voltages, temperatures, and the status of the power supplies and fan modules.
Figure 13-5 Hardware Monitoring
Monitoring Notifications
The status of a voltage or temperature is Good if its value is between Low Warning and High Warning. If the value is lower than Low Warning or higher than High Warning, the system sends a warning event log. If it is lower than Low Critical or higher than High Critical, an error event log is sent.
TIP:
For better protection, and to avoid shutting the system down because of a single short period of abnormal voltage or temperature, it is recommended to enable the Auto Shutdown setting, which can trigger an automatic shutdown. This is done using several sensors placed on key components; the system checks them every 30 seconds for the present voltages and temperatures. The trigger conditions are the following.
The value of a voltage or temperature is lower than Low Critical or higher than High Critical.
When one of these sensors reports a value beyond a critical threshold for three contiguous minutes, the system will be shut down automatically.
For more information about auto shutdown, please refer to the chapter
6.3.1, Boot Management Settings section in the System Settings chapter.
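The trigger logic in the tip above (a sample every 30 seconds, three contiguous minutes beyond a critical limit) can be sketched as follows. This is an illustration of the stated conditions, not the actual SANOS implementation, and the temperature values are made up.

```python
SAMPLE_INTERVAL_S = 30                                  # sensors are polled every 30 seconds
CONTIGUOUS_MINUTES = 3                                  # critical condition must persist this long
SAMPLES_NEEDED = CONTIGUOUS_MINUTES * 60 // SAMPLE_INTERVAL_S   # 6 consecutive samples

def should_auto_shutdown(readings, low_critical, high_critical):
    """Return True once a sensor stays outside the critical window for
    three contiguous minutes, per the trigger conditions above."""
    streak = 0
    for value in readings:
        if value < low_critical or value > high_critical:
            streak += 1
            if streak >= SAMPLES_NEEDED:
                return True
        else:
            streak = 0                                  # a normal sample resets the streak
    return False

# Illustrative values: a 70 degC High Critical limit.
print(should_auto_shutdown([72] * 7, low_critical=0, high_critical=70))   # True (>= 3 minutes hot)
print(should_auto_shutdown([72, 65, 72, 72], 0, 70))                      # False (streak was broken)
```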
Fan Module Mechanism
Fan speed adjusts automatically according to the system temperature and the status of the power and fan modules. For more information, please refer to the chapter 2.6, Fan Module and the chapter 6.3, Removing / Installing the Fan Module in the XCubeSAN Hardware Owner’s Manual.
13.2.2. Configuring SES
SES (SCSI Enclosure Services) is an enclosure management standard. The host can communicate with the enclosure through a LUN using a specialized set of SCSI commands to monitor hardware characteristics. Select the SES function tab in the Enclosures function submenu to enable or disable SES management. Enabling SES will map an iSCSI LUN or an FC LUN.
Enable SES
Here’s an example of enabling SES.
1. Select the SES function tab in the Enclosures function submenu, and then click the Enable SES button.
Figure 13-6 Enable SES
2. Select the Protocol as iSCSI.
3. Enter the Allowed Hosts, separating entries with semicolons (;), or enter the wildcard (*) to allow access by all hosts (see the matching sketch after Figure 13-7).
4. Select a Target from the drop-down list.
5. Click the OK button to enable SES.
Figure 13-7 SES Status
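The Allowed Hosts field in step 3 is a semicolon-separated list, with (*) standing for all hosts. A minimal sketch of how such a field might be matched against an initiator name follows; treating (*) as a general glob pattern and the example IQNs are assumptions for illustration only, not the documented SANOS matching rules.

```python
from fnmatch import fnmatch

def host_allowed(initiator_iqn: str, allowed_hosts: str) -> bool:
    """Return True if the initiator matches any semicolon-separated entry."""
    entries = [entry.strip() for entry in allowed_hosts.split(";") if entry.strip()]
    return any(fnmatch(initiator_iqn, entry) for entry in entries)

print(host_allowed("iqn.1991-05.com.microsoft:host1", "*"))                              # True
print(host_allowed("iqn.1991-05.com.microsoft:host1",
                   "iqn.1991-05.com.microsoft:host1;iqn.1991-05.com.microsoft:host2"))   # True
print(host_allowed("iqn.1991-05.com.microsoft:host3",
                   "iqn.1991-05.com.microsoft:host1;iqn.1991-05.com.microsoft:host2"))   # False
```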
Disable SES
Click the Disable button to disable SES.
SES Client Tool
The SES client software is available at the following web site:
SANtools: http://www.santools.com/
13.3. Performance Monitoring
The Performance function submenu provides Disk, iSCSI, and Fibre Channel tabs to monitor performance.
Figure 13-8 Performance Function Submenu
13.3.1. Disk Performance Monitoring
Select the Disks function tab in the Performance function submenu to display the throughput and latency of the disk drives. Check the slots which you want to monitor.
CAUTION:
Enabling performance monitoring will reduce I/O performance. Please use with caution.
Figure 13-9 Disk Performance Monitoring
13.3.2. iSCSI Port Performance Monitoring
Select the iSCSI function tab in the Performance function submenu to display TX
(Transmission) and RX (Reception) of the iSCSI ports. Check the interfaces which you want to monitor.
Figure 13-10 iSCSI Performance Monitoring
13.3.3. Fibre Channel Port Monitoring
Select the Fibre Channel function tab in the Performance function submenu to display TX (Transmission) and RX (Reception) of the Fibre Channel ports. Check the interfaces which you want to monitor.
14. Troubleshooting
This chapter describes how to troubleshoot software issues. Hardware troubleshooting is described in the chapter 6, Quick Maintenance in the XCubeSAN Hardware Owner’s Manual.
14.1. Fault Isolation Methodology
This section presents the basic methodology used to quickly locate faults within the storage system.
14.1.1. Basic Steps
The basic fault isolation steps are listed below:
Gather fault information, including using system LEDs. Please refer to the chapter 5, Description of LEDs and Buttons in the XCubeSAN Hardware Owner’s Manual for details.
Review event logs. Please refer to the chapter 13.1, Log Center section in the Monitoring
chapter.
Determine where the fault is occurring in the system. If the fault is occurring in hardware, please refer to the chapter 6, Quick Maintenance in the XCubeSAN Hardware Owner’s Manual for details.
If required, isolate the fault to a disk drive, configuration, or system. Please refer to the
chapter 14.1.3, Diagnostic Steps section.
If all troubleshooting steps are inconclusive, please contact support for help.
14.1.2. Stopping I/O
When troubleshooting disk drive and connectivity faults, it’s necessary to stop I/O to the affected volume data from all hosts and remote systems as a data protection precaution.
As an additional data protection precaution, it is helpful to conduct regularly scheduled backups of your data.
14.1.3. Diagnostic Steps
The diagnostic steps are listed below:
If disk drive errors appear in the event logs, we suggest replacing the failing disk drive with a healthy one. For more information, please refer to the chapter 14.2, Rebuild section.
If the fault is caused by the volume configuration, for example the pool or volume configuration was deleted by accident, please refer to the chapter 14.3, Volume Restoration section for disaster recovery.
If the fault is caused by other hardware components of the system, the faulty component will need to be replaced.
14.2. Rebuild
If any single disk drive of a storage pool using a protection RAID level (e.g., RAID 1, 3, 5, 6, 0+1, 10, 30, 50, and 60) fails or has been removed, the status of the pool changes to degraded mode. At the same time, the system searches for a spare disk and rebuilds the degraded pool into a complete one.
There are three types of spare disk drive which can be set in the Disks function menu:
Dedicated Spare: Sets a spare disk drive for a dedicated pool.
Local Spare: Sets a spare disk drive for the pools located in the same enclosure. The enclosure is either the head unit or one of the expansion units.
Global Spare: Sets a spare disk drive for all pools, whether located in the head unit or in an expansion unit.
The detection sequence uses the dedicated spare disk as the rebuild disk first, then the local spare disk, and finally the global spare disk, as illustrated in the sketch below.
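A minimal sketch of this detection order follows. It is illustrative only, not SANOS code; the slot numbers and pool names are made up.

```python
# Priority follows the documented order: dedicated first, then local, then global.
PRIORITY = {"dedicated": 0, "local": 1, "global": 2}

def pick_spare(spares, degraded_pool, pool_enclosure):
    """Return the spare used for the rebuild, or None if the pool must wait."""
    candidates = [
        s for s in spares
        if (s["type"] == "dedicated" and s["pool"] == degraded_pool)
        or (s["type"] == "local" and s["enclosure"] == pool_enclosure)
        or s["type"] == "global"
    ]
    return min(candidates, key=lambda s: PRIORITY[s["type"]], default=None)

spares = [
    {"slot": 9,  "type": "global",    "pool": None,     "enclosure": "head"},
    {"slot": 10, "type": "local",     "pool": None,     "enclosure": "head"},
    {"slot": 11, "type": "dedicated", "pool": "Pool-1", "enclosure": "head"},
]
print(pick_spare(spares, degraded_pool="Pool-1", pool_enclosure="head"))  # the dedicated spare in slot 11
```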
The following examples are scenarios for a RAID 6 rebuild.
1. When there is no global spare disk or dedicated spare disk in the system, the pool will be in degraded mode and wait until one disk is assigned as a spare disk, or until the failed disk is removed and replaced with a new clean disk; then Auto-Rebuild starts.
2. When there are spare disks for the degraded array, the system starts Auto-Rebuild immediately. In RAID 6, if another disk failure occurs during rebuilding, the system starts the same Auto-Rebuild process as well. The Auto-Rebuild feature only works while the status of the pool is Online, so it does not conflict with the online roaming feature.
3. In degraded mode, the health of the pool is Degraded. While rebuilding, the status of the pool and volume shows Rebuilding, and the R% column of the volume shows the rebuild progress as a percentage. After the rebuild completes, the status returns to Online.
TIP:
The dedicated spare cannot be set if there is no pool or the pool is set to
RAID level 0.
Rebuild is sometimes called recover; the two terms have the same meaning. The following table describes the relationship between RAID levels and recovery.
Table 14-1 RAID Rebuild
RAID 0: Disk striping. No protection for data. The pool fails if any disk drive fails or is unplugged.
RAID 1: Disk mirroring over 2 disks. RAID 1 allows one disk drive to fail or be unplugged. One new disk drive must be inserted into the system and rebuilt for the rebuild to be completed.
N-way mirror: Extension of RAID 1. It keeps N copies of the disk. N-way mirror allows N-1 disk drives to fail or be unplugged.
RAID 3: Striping with parity on a dedicated disk. RAID 3 allows one disk drive to fail or be unplugged.
RAID 5: Striping with interspersed parity over the member disks. RAID 5 allows one disk drive to fail or be unplugged.
RAID 6: 2-dimensional parity protection over the member disks. RAID 6 allows two disk drives to fail or be unplugged. If two disk drives need to be rebuilt at the same time, the first one is rebuilt and then the other, in sequence.
RAID 0+1: Mirroring of RAID 0 volumes. RAID 0+1 allows two disk drive failures or unpluggings, but only in the same array.
RAID 10: Striping over the member RAID 1 volumes. RAID 10 allows two disk drive failures or unpluggings, but in different arrays.
RAID 30: Striping over the member RAID 3 volumes. RAID 30 allows two disk drive failures or unpluggings, but in different arrays.
RAID 50: Striping over the member RAID 5 volumes. RAID 50 allows two disk drive failures or unpluggings, but in different arrays.
RAID 60: Striping over the member RAID 6 volumes. RAID 60 allows four disk drive failures or unpluggings, every two in different arrays.
14.3. Volume Restoration
Volume Restoration can restore the volume configuration from the volume creation history. It is used after pool corruption to try to recreate the volume. When attempting data recovery, the same volume configuration as the original must be set, and all member disks must be installed in the same sequence as the original. Otherwise, data recovery will fail.
The volume restoration does not guarantee that the lost data can be restored. Please get help from an expert before executing this function.
Figure 14-1 Volume Restoration
This table shows the column descriptions.
Table 14-2 Volume Restoration Column Descriptions
Pool Name: The original pool name.
RAID: The original RAID level.
Volume: The original volume name.
Volume Capacity: The original capacity of the volume.
Disks Used: The original quantity of physical disks in the pool.
Disk Slot: The original physical disk locations.
Time: The last action time of the volume.
Event Logs: The last event of the volume.
TIP:
When attempting data recovery, the same volume configuration as the original must be set, and all member disks must be installed in the same sequence as the original. Otherwise, data recovery will fail.
CAUTION:
Performing data recovery does not guarantee that the lost data can be restored 100%. It depends on the actual situation and the degree of physical damage to the disks. Users assume all risk when attempting data recovery procedures.
14.3.1. Configure Volume Restoration
The following option is available in this tab:
Restore the Volume
Click ▼ -> Restore to restore the deleted volume in the pool, and then click the OK button to proceed.
15. Support and Other Resources
15.1. Getting Technical Support
After installing your device, locate the serial number on the sticker on the side of the chassis or in the SANOS UI -> MAINTENANCE -> System Information, and use it to register your product at https://partner.qsan.com/ (End-User Registration). We recommend registering your product on the QSAN partner website for firmware updates, document downloads, and the latest news in eDM. To contact QSAN Support, please use the following information.
Via the Web: https://qsan.com/support
Via Telephone: +886-2-7720-2118 extension 136
(Service hours: 09:30 - 18:00, Monday - Friday, UTC+8)
Via Skype Chat, Skype ID: qsan.support
(Service hours: 09:30 - 02:00, Monday - Friday, UTC+8, Summer time: 09:30 - 01:00)
Via Email: [email protected]
Information to Collect
Product name, model or version, and serial number
Operating system name and version
Firmware version
Error messages or capture screenshots
Product-specific reports and logs
Add-on products or components installed
Third-party products or components installed
Information for Technical Support
The following system information is necessary for technical support. Please refer to the following for what information to collect for your XCubeSAN Series model and where to find it.
If technical support requests you to download the Service Package, please navigate in the SANOS UI -> SYSTEM SETTING -> Maintenance -> System Information, and then click the Download Service Package button to download it. The system will automatically generate a zip file in the default download location of your web browser.
Figure 15-1 Download Service Package in the SANOS UI
15.2. Online Customer Support
For better customer support, every XCubeSAN series model includes a console cable (two for dual controller models, one for single controller models) for online support. Please follow the procedures below to set up the online help environment for the QSAN support team.
The following procedure will help you set up the serial console via the console cable enclosed in the shipping carton. The following image shows the appearance of the console cable.
Figure 15-2 Appearance of a Console Cable
Procedures to Setup the Serial Console
1. Set up the serial cable between the controller and a server/host as shown in the image below.
Figure 15-3 Connect the Console Cable
2. You must use terminal software such as HyperTerminal or PuTTY to open the console after the connection is made.
INFORMATION:
For more information about terminal software, please refer to
HyperTerminal: http://www.hilgraeve.com/hyperterminal/
PuTTY: http://www.putty.org/
3. Here we first demonstrate HyperTerminal. The console settings are as follows.
Baud rate: 115200, 8 data bits, no parity, 1 stop bit, and no flow control
Terminal type: vt100
Figure 15-4 The Procedures of Setup Serial Console by HyperTerminal
4. If you are using PuTTY instead, please refer to the figures below.
Figure 15-5 The Procedures of Setup Serial Console by PuTTY
5. Users should be able to log in to the controller system via the console cable by following the procedures above.
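If you prefer a scripted serial client to HyperTerminal or PuTTY, the same console settings can be applied with, for example, the third-party pyserial package. This is an optional alternative not covered by this manual; the device path below is an assumption for a typical Linux host.

```python
import serial   # third-party 'pyserial' package: pip install pyserial

# Same console settings as above: 115200 baud, 8 data bits, no parity,
# 1 stop bit, no flow control. '/dev/ttyUSB0' is an assumed device path.
console = serial.Serial(
    port="/dev/ttyUSB0",
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,   # no software flow control
    rtscts=False,    # no hardware flow control
    timeout=5,
)
console.write(b"\r\n")                              # wake the login prompt
print(console.read(256).decode(errors="replace"))   # show whatever the controller sends back
console.close()
```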
Setup the Connection for Online Support
The following is the procedure to set up the connection for online support via TeamViewer:
1. Please download TeamViewer from the following hyperlink: https://www.teamviewer.com/en/download/
2. Install TeamViewer.
3. Please provide the ID and password shown in the application to the QSAN support team member to join the online support session.
15.3. Accessing Product Updates
To download product updates, please visit QSAN website: https://qsan.com/download
15.4. Documentation Feedback
QSAN is committed to providing documentation that meets and exceeds your expectations.
To help us improve the documentation, email any errors, suggestions, or comments to [email protected].
When submitting your feedback, include the document title, part number, revision, and publication date located on the front cover of the document.
Appendix
Glossary and Acronym List
Common Terminology
RAID: Redundant Array of Independent Disks. There are different RAID levels with different degrees of data protection, data availability, and performance to the host environment.
Disk: The Physical Disk belongs to the member disk of one specific RAID group.
Pool: A collection of removable media. One pool consists of a set of volumes and owns one RAID level attribute.
Volume: Each pool could be divided into several volumes. The volumes from one pool have the same RAID level, but may have different volume capacity.
LUN: Logical Unit Number. A logical unit number (LUN) is a unique identifier which enables it to differentiate among separate devices (each one is a logical unit).
WebUI: Web User Interface.
WT: Write-Through cache-write policy. A cache technique in which the completion of a write request is not signaled until data is safely stored in non-volatile media. Each data is synchronized in both data cache and accessed physical disks.
WB: Write-Back cache-write policy. A cache technique in which the completion of a write request is signaled as soon as the data is in cache, and actual writing to non-volatile media occurs at a later time. It speeds up system write performance but needs to bear the risk where data may be inconsistent between data cache and the physical disks in one short time interval.
RO: Set the volume to be Read-Only.
DS: Dedicated Spare disks. The spare disks are only used by one specific RAID group. Others could not use these dedicated spare disks for any rebuilding purpose.
LS: Local Spare disks. The spare disks are only used by the RAID groups of the local enclosure. Other enclosures could not use these local spare disks for any rebuilding purpose.
GS: Global Spare disks. It is shared for rebuilding purpose. If some RAID groups need to use the global spare disks for rebuilding, they could get the spare disks out from the common spare disks pool for such requirement.
DG: DeGraded mode. Not all of the array’s member disks are functioning, but the array is able to respond to application read and write requests to its virtual disks.
SCSI: Small Computer System Interface
SAS: Serial Attached SCSI
S.M.A.R.T.: Self-Monitoring Analysis and Reporting Technology
WWN: World Wide Name
HBA: Host Bus Adapter
SES: SCSI Enclosure Services
NIC: Network Interface Card
BBM: Battery Backup Module
SCM: Super Capacitor Module
FC / iSCSI / SAS Terminology
FC: Fibre Channel
FC-P2P: Point-to-Point
FC-AL: Arbitrated Loop
FC-SW: Switched Fabric
iSCSI: Internet Small Computer Systems Interface
LACP: Link Aggregation Control Protocol
MPIO: Multipath Input/Output
MC/S: Multiple Connections per Session
MTU: Maximum Transmission Unit
CHAP: Challenge Handshake Authentication Protocol. An optional security mechanism to control access to an iSCSI storage system over the iSCSI data ports.
iSNS: Internet Storage Name Service
SAS: Serial Attached SCSI
Dual Controller Terminology
SBB: Storage Bridge Bay. The objective of the Storage Bridge Bay Working Group (SBB) is to create a specification that defines mechanical, electrical and low-level enclosure management requirements for an enclosure controller slot that will support a variety of storage controllers from a variety of independent hardware vendors (“IHVs”) and system vendors.
6G MUX: Bridge board is for SATA II disk to support dual controller mode.
End-User License Agreement (EULA)
Please read this document carefully before you use our product or open the package containing our product.
YOU AGREE TO ACCEPT TERMS OF THIS EULA BY USING OUR PRODUCT, OPENING THE
PACKAGE CONTAINING OUR PRODUCT OR INSTALLING THE SOFTWARE INTO OUR
PRODUCT. IF YOU DO NOT AGREE TO TERMS OF THIS EULA, YOU MAY RETURN THE
PRODUCT TO THE RESELLER WHERE YOU PURCHASED IT FOR A REFUND IN
ACCORDANCE WITH THE RESELLER'S APPLICABLE RETURN POLICY.
General
QSAN Technology, Inc. ("QSAN") is willing to grant you (“User”) a license of software, firmware and/or other product sold, manufactured or offered by QSAN (“the Product”) pursuant to this EULA.
License Grant
QSAN grants to User a personal, non-exclusive, non-transferable, non-distributable, nonassignable, non-sub-licensable license to install and use the Product pursuant to the terms of this EULA. Any right beyond this EULA will not be granted.
Intellectual Property Right
Intellectual property rights relative to the Product are the property of QSAN or its licensor(s).
User will not acquire any intellectual property by this EULA.
License Limitations
User may not, and may not authorize or permit any third party to: (a) use the Product for any purpose other than in connection with the Product or in a manner inconsistent with the design or documentations of the Product; (b) license, distribute, lease, rent, lend, transfer, assign or otherwise dispose of the Product or use the Product in any commercial hosted or service bureau environment; (c) reverse engineer, decompile, disassemble or attempt to discover the source code for or any trade secrets related to the Product, except and only to the extent that such activity is expressly permitted by applicable law notwithstanding this limitation; (d) adapt, modify, alter, translate or create any derivative works of the Licensed
Software; (e) remove, alter or obscure any copyright notice or other proprietary rights notice on the Product; or (f) circumvent or attempt to circumvent any methods employed by QSAN to control access to the components, features or functions of the Product.
Disclaimer
QSAN DISCLAIMS ALL WARRANTIES OF PRODUCT, INCLUDING BUT NOT LIMITED TO ANY
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, WORKMANLIKE EFFORT,
TITLE, AND NON-INFRINGEMENT. ALL PRODUCTS ARE PROVIDED “AS IS” WITHOUT
WARRANTY OF ANY KIND. QSAN MAKES NO WARRANTY THAT THE PRODUCT WILL BE
FREE OF BUGS, ERRORS, VIRUSES OR OTHER DEFECTS.
IN NO EVENT WILL QSAN BE LIABLE FOR THE COST OF COVER OR FOR ANY DIRECT,
INDIRECT, SPECIAL, PUNITIVE, INCIDENTAL, CONSEQUENTIAL OR SIMILAR DAMAGES OR
LIABILITIES WHATSOEVER (INCLUDING, BUT NOT LIMITED TO LOSS OF DATA,
INFORMATION, REVENUE, PROFIT OR BUSINESS) ARISING OUT OF OR RELATING TO THE
USE OR INABILITY TO USE THE PRODUCT OR OTHERWISE UNDER OR IN CONNECTION
WITH THIS EULA OR THE PRODUCT, WHETHER BASED ON CONTRACT, TORT (INCLUDING
NEGLIGENCE), STRICT LIABILITY OR OTHER THEORY EVEN IF QSAN HAS BEEN ADVISED
OF THE POSSIBILITY OF SUCH DAMAGES.
Limitation of Liability
IN ANY CASE, QSAN’S LIABILITY ARISING OUT OF OR IN CONNECTION WITH THIS EULA OR
THE PRODUCT WILL BE LIMITED TO THE TOTAL AMOUNT ACTUALLY AND ORIGINALLY
PAID BY CUSTOMER FOR THE PRODUCT. The foregoing Disclaimer and Limitation of
Liability will apply to the maximum extent permitted by applicable law. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so the exclusions and limitations set forth above may not apply.
Termination
If User breaches any of its obligations under this EULA, QSAN may terminate this EULA and take remedies available to QSAN immediately.
Miscellaneous
QSAN reserves the right to modify this EULA.
QSAN reserves the right to renew the software or firmware anytime.
QSAN may assign its rights and obligations under this EULA to any third party without condition.
This EULA will be binding upon and will inure to User’s successors and permitted assigns.
This EULA shall be governed by and construed according to the laws of R.O.C. For any disputes arising from or in connection with this EULA, User agrees to submit to the jurisdiction of the Taiwan Shilin district court as the court of first instance.