Table of Contents

Chapter 1  Overview
  1.1  Features
    1.1.1  Highlights
    1.1.2  Technical specifications
  1.2  RAID concepts
    1.2.1  Terminology
    1.2.2  RAID levels
    1.2.3  Volume relationship
  1.3  Storage concepts
Chapter 2  Installation
  2.1  Package contents
  2.2  The Enclosure Description
  2.3  Make the system connected
Chapter 3  Quick setup
  3.1  Management interfaces
    3.1.1  Serial console
    3.1.2  Remote control
    3.1.3  LCM
    3.1.4  Web UI
  3.2  How to use the system quickly
    3.2.1  Quick installation
    3.2.2  Volume creation wizard
Chapter 4  Configuration
  4.1  Web UI management interface hierarchy
  4.2  System configuration
    4.2.1  System setting
    4.2.2  IP address
    4.2.3  Login setting
    4.2.4  Mail setting
    4.2.5  Notification setting
  4.3  Volume configuration
    4.3.1  Physical disk
    4.3.2  RAID group
    4.3.3  Virtual disk
    4.3.4  Snapshot
    4.3.5  Logical unit
    4.3.6  Example
  4.4  Enclosure management
    4.4.1  SES configuration
    4.4.2  Hardware monitor
    4.4.3  Hard drive S.M.A.R.T. support
    4.4.4  UPS
  4.5  System maintenance
    4.5.1  System information
    4.5.2  Upgrade
    4.5.3  Reset to factory default
    4.5.4  Import and export
    4.5.5  Event log
    4.5.6  Reboot and shutdown
  4.6  Logout
Chapter 5  Advanced operations
  5.1  Volume rebuild
  5.2  RG migration
  5.3  VD extension
  5.4  iSnap
    5.4.1  Create snapshot volume
    5.4.2  Auto snapshot
    5.4.3  Rollback
    5.4.4  iSnap constraint
  5.5  Disk roaming
  5.6  VD clone
  5.7  SAS JBOD expansion
    5.7.1  Connecting JBOD
    5.7.2  Upgrade firmware of JBOD
Chapter 6  Troubleshooting
  6.1  System buzzer
  6.2  Event notifications
  6.3  How to get support
A.  Certification list
Chapter 1 Overview
1.1 Features
The RS362 16-bay FC RAID subsystem controller is a high-performance RAID controller.

•  Backplane solution
   o  RS362: FC (x2)-to-SATA II/SAS (xN bays) RAID controller.

The RS362 16-bay FC RAID subsystem controller features:

•  Front-end 2-ported 4Gb FC ports with load-balancing and failover for high availability.
•  RAID 6, 60 ready.
•  Snapshot (iSnap) without relying on host software.
•  SATA II drive backward compatibility.
•  One logical volume can be shared by as many as 16 hosts.
•  Host access control.
•  Configurable N-way mirror for high data protection.
•  On-line volume migration with no system down-time.
•  HDD S.M.A.R.T. enabled for SATA drives.
•  SAS JBOD expansion support.
•  Microsoft VSS, VDS support.

With proper configuration, the RS362 controller can provide non-stop service with a high degree of fault tolerance by using RAID technology and advanced array management features. For more details, please contact your direct sales or email “Sale@RackmountMart.com”.

The RS362 controller connects to the host system through a Fibre Channel interface and can be configured to any RAID level. The controller provides reliable data protection for servers, including RAID 6, which tolerates two HDD failures without any impact on the existing data: lost data can be recovered from the remaining disks and parity drives.

Snapshot-on-the-box is a fully usable copy of a defined collection of data that contains an image of the data as it appeared at a single point in time; in other words, point-in-time data replication. It provides consistent and instant copies of data volumes without any system downtime. Snapshot-on-the-box can keep up to 32 snapshots per logical volume. A rollback feature is provided for easily restoring previously snapshotted data while the volume remains available for further data access; read and write access continues as usual without any impact on end users. “On-the-box” means that no proprietary agents need to be installed on the host side: the snapshot is taken on the target side, so it consumes no host CPU time and the server stays dedicated to its applications. The snapshot copies can be taken manually or on a schedule (for example, every hour or every day), depending on how frequently the data changes.

The RS362 controller is a cost-effective disk array system with completely integrated high-performance and data-protection capabilities that meet or exceed the highest industry standards, making it an excellent data solution for small and medium business (SMB) users.
1.1.1  Highlights

•  iStoragePro feature highlights
   1.  Front-end 2-ported 4Gb FC ports with load-balancing and failover for high availability
   2.  RAID 6, 60
   3.  iSnap without relying on host software
   4.  SATA II drive support
   5.  One logical volume can be shared by as many as 16 hosts
   6.  Host access control
   7.  Configurable N-way mirror for high data protection
   8.  On-line volume migration with no system down-time
   9.  HDD S.M.A.R.T. enabled for SATA drives
   10. SAS JBOD expansion support
   11. Windows VSS and MPIO enabled
   12. Disk auto spindown support
1.1.2  Technical specifications

•  Key components
   1.  CPU: Intel XScale IOP 81341
   2.  Memory: 1GB ~ 2GB DDRII 533 DIMM supported
   3.  UARTs: support for serial console management and UPS
   4.  Fast Ethernet port for web-based management use
   5.  Backend: up to 16 SATA 1.0 (1.5Gb/s) or SATA 2.0 (3Gb/s) disks supported on the controller board
   6.  Front-end: one 4Gb FC controller with 2 SFP ports
   7.  LCM supported for easy management use
   8.  Battery backup support (optional)

•  RAID and volume operation
   1.  RAID levels: 0, 1, 0+1, 3, 5, 6, 10, 30, 50, 60 and JBOD
   2.  Up to 1024 logical volumes in the system
   3.  Up to 16 PDs can be included in one volume group
   4.  Global and dedicated hot spare disks
   5.  Write-through or write-back cache policy for different application usage
   6.  Multiple RAID volumes support
   7.  Configurable RAID stripe size
   8.  Online volume expansion
   9.  Instant RAID volume availability
   10. Auto volume rebuilding
   11. Online volume migration

•  Advanced data protection
   1.  iSnap utility
       -  Writable iSnap volume support
       -  Support for iStoragePro LVM 3.0 features
   2.  Up to 16 logical volumes can be configured with iSnap ability
   3.  Up to 32 iSnaps per logical volume
   4.  iSnap rollback mechanism
   5.  Local N-way mirror
   6.  On-line disk roaming
   7.  Smart faulty sector relocation
   8.  Battery backup support (optional)

•  Enclosure monitoring
   1.  S.E.S. support for standard enclosure management
   2.  UPS management via a dedicated serial port
   3.  Fan speed monitoring (4 fans)
   4.  Redundant power supply monitoring
   5.  Hardware monitor (optional)
       -  Controllable fan speed monitoring (4 fans)
       -  Redundant power supply monitoring (2 supplies)
   6.  3.3V, 5V and 12V voltage monitoring
   7.  Thermal sensors x 3 on the controller board (for CPU, backend chip and host channel chip)
   8.  Thermal sensors x 3 (up to 24) in the enclosure
   9.  EEPROM for backplane-like HW configuration
   10. Status reporting for the managed SAS/SATA JBODs

•  Management interface
   1.  Management UI via serial console, SSH telnet, HTTP Web UI, and secured Web (HTTPS)
   2.  Online system firmware upgrade mechanism
   3.  Event notification via Email, SNMP trap, browser pop-up windows, Syslog, and Windows Messenger
   4.  Built-in LCD module to control most enclosure components

•  Host and drive connection
   1.  2 x SFP optical FC host ports supporting independent access, failover or load-balancing
   2.  4 x one-by-four connectors for hard drive cabling
   3.  32 multiple target nodes supported (multiple aliases)
   4.  Support for the Microsoft MPIO hardware provider for load-balancing and failover
   5.  SCSI-3 compliant
   6.  Multiple IO transaction processing
   7.  Tagged command queuing
   8.  Access control in LUN usage: Read-Write and Read-Only
   9.  Up to 32 host connections
   10. Up to 16 hosts clustered for one volume
   11. Hard drive S.M.A.R.T. enabled
   12. Compatible with Windows, Linux, Mac, and Solaris operating systems
   13. Up to 4 SAS JBODs can be connected to one RS362 via the SAS JBOD port
   14. Up to 16 + 4*16 = 80 SAS/SATA drives supported per controller overall

•  Chassis integration
   1.  Controller form factor
       -  Dimensions: 14 cm x 24.9 cm x 3.2 cm (W x D x H)
   2.  VHDM-HSD connector to a customized backplane, designed with all interfaces mounted on-board and exposed externally via a customized IO bracket
1.2 RAID concepts

RAID is the abbreviation of “Redundant Array of Independent Disks”. The basic idea of RAID is to combine multiple drives to form one large logical drive. This RAID drive offers better performance, capacity and reliability than a single drive, while the operating system detects it as a single storage device.
1.2.1  Terminology

The document uses the following terms:

•  Part 1: Common

RAID
Redundant Array of Independent Disks. There are different RAID levels with different degrees of data protection, data availability, and performance for the host environment.
PD
Physical Disk. A member disk belonging to one specific RAID group.

RG
RAID Group. A collection of physical disks. One RG consists of a set of VDs and owns one RAID level attribute.
VD
Virtual Disk. Each RG can be divided into several VDs. The VDs from one RG have the same RAID level, but may have different volume capacities.
LUN
Logical Unit Number. A unique identifier that differentiates among separate devices (each one a logical unit).
GUI
Graphic User Interface.
RAID cell
When creating a RAID group with a compound RAID level, such as 10, 30, 50 or 60, this field indicates the number of subgroups in the RAID group. For example, 8 disks can be grouped into a RAID 10 group with either 2 cells or 4 cells. In the 2-cell case, PD {0, 1, 2, 3} forms one RAID 1 subgroup and PD {4, 5, 6, 7} forms another. In the 4-cell case, the 4 subgroups are PD {0, 1}, PD {2, 3}, PD {4, 5} and PD {6, 7}.
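The cell layout described above can be sketched as a simple partitioning function (illustrative only; this is not part of the controller firmware):

```python
def raid_cells(disks, cells):
    """Split a list of member disks into equal-sized subgroups (cells)."""
    if len(disks) % cells != 0:
        raise ValueError("disk count must divide evenly into cells")
    size = len(disks) // cells
    return [disks[i * size:(i + 1) * size] for i in range(cells)]

# 8 disks in a RAID 10 group with 2 cells -> two 4-disk RAID 1 subgroups
print(raid_cells([0, 1, 2, 3, 4, 5, 6, 7], 2))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
# The same 8 disks with 4 cells -> four mirrored pairs
print(raid_cells([0, 1, 2, 3, 4, 5, 6, 7], 4))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```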
WT
Write-Through cache-write policy. A caching technique in which the completion of a write request is not signaled until the data is safely stored on non-volatile media. Data is kept synchronized between the data cache and the accessed physical disks.
WB
Write-Back cache-write policy. A caching technique in which the completion of a write request is signaled as soon as the data is in cache; the actual write to non-volatile media occurs later. It speeds up system write performance but bears the risk that data may be inconsistent between the data cache and the physical disks for a short time interval.
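The difference between the two policies can be sketched with a toy cache model (illustrative only; the controller's real cache logic is far more sophisticated):

```python
class ToyCache:
    """Toy model contrasting write-through (WT) and write-back (WB) policies."""

    def __init__(self, policy):
        self.policy = policy          # "WT" or "WB"
        self.cache = {}               # block -> data (volatile cache)
        self.disk = {}                # non-volatile store
        self.dirty = set()            # blocks not yet on disk (WB only)

    def write(self, block, data):
        self.cache[block] = data
        if self.policy == "WT":
            self.disk[block] = data   # WT: completion signaled only after the disk write
        else:
            self.dirty.add(block)     # WB: completion signaled immediately; flushed later

    def flush(self):
        """Write all dirty cached blocks out to non-volatile media."""
        for block in self.dirty:
            self.disk[block] = self.cache[block]
        self.dirty.clear()

wb = ToyCache("WB")
wb.write(0, "data")
print(wb.disk)   # {}  - not yet on disk; a power loss here would lose the write
wb.flush()
print(wb.disk)   # {0: 'data'}
```

This is also why the Caution in section 3.1.3 recommends executing “Shutdown” before powering off: it flushes the write-back cache to the physical disks.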
RO
Set the volume to be Read-Only.
DS
Dedicated Spare disks. Spare disks used only by one specific RG; other RGs cannot use them for rebuilding.
GS
Global Spare disks. A GS is shared for rebuilding purposes. When an RG needs a spare disk for rebuilding, it takes one from the common spare disk pool.
DG
DeGraded mode. Not all of the array’s member disks are
functioning, but the array is able to respond to application
read and write requests to its virtual disks.
SCSI
Small Computer Systems Interface.
SAS
Serial Attached SCSI.
S.M.A.R.T.
Self-Monitoring Analysis and Reporting Technology.
WWN
World Wide Name.
HBA
Host Bus Adapter.
SES
SCSI Enclosure Services.
NIC
Network Interface Card.
BBM
Battery Backup Module
Part 2: FC
FC
Fibre Channel.
MPIO
Multi-Path Input/Output.
1.2.2  RAID levels

There are different RAID levels with different degrees of data protection, data availability, and performance for the host environment. The RAID levels are described below:
RAID 0
Disk striping. RAID 0 needs at least one hard drive.
RAID 1
Disk mirroring over two disks. RAID 1 needs at least two
hard drives.
N-way mirror
Extension to RAID 1 level. It has N copies of the disk.
RAID 3
Striping with parity on the dedicated disk. RAID 3 needs at
least three hard drives.
RAID 5
Striping with interspersed parity over the member disks. RAID 5 needs at least three hard drives.
RAID 6
2-dimensional parity protection over the member disks.
RAID 6 needs at least four hard drives.
RAID 0+1
Mirroring of the member RAID 0 volumes. RAID 0+1 needs
at least four hard drives.
RAID 10
Striping over the member RAID 1 volumes. RAID 10 needs
at least four hard drives.
RAID 30
Striping over the member RAID 3 volumes. RAID 30 needs
at least six hard drives.
RAID 50
Striping over the member RAID 5 volumes. RAID 50 needs
at least six hard drives.
RAID 60
Striping over the member RAID 6 volumes. RAID 60 needs
at least eight hard drives.
JBOD
The abbreviation of “Just a Bunch Of Disks”. JBOD needs
at least one hard drive.
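As a rough guide, the usable capacity and minimum drive counts of the simple levels above can be sketched as follows (illustrative only, assuming n identical disks; the controller's own space accounting may differ slightly, and the compound levels 30/50/60 also depend on the cell count):

```python
# Minimum drive counts taken from the RAID level table above.
MIN_DISKS = {"0": 1, "1": 2, "3": 3, "5": 3, "6": 4,
             "0+1": 4, "10": 4, "30": 6, "50": 6, "60": 8}

def usable_gb(level, n, size_gb):
    """Rough usable capacity for n identical disks of size_gb each."""
    if n < MIN_DISKS[level]:
        raise ValueError("RAID %s needs at least %d drives" % (level, MIN_DISKS[level]))
    if level == "0":
        return n * size_gb            # striping, no redundancy
    if level == "1":
        return size_gb                # every disk holds the same copy
    if level in ("3", "5"):
        return (n - 1) * size_gb      # one disk's worth of parity
    if level == "6":
        return (n - 2) * size_gb      # two disks' worth of parity
    if level in ("0+1", "10"):
        return (n // 2) * size_gb     # half the disks hold mirror copies
    raise ValueError("levels 30/50/60 also depend on the number of cells")

print(usable_gb("5", 8, 200))   # 1400
print(usable_gb("6", 8, 200))   # 1200
```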
1.2.3  Volume relationship

The graphic below shows the volume structure and describes the relationship between the RAID components. One RG (RAID group) consists of a set of VDs (virtual disks) and owns one RAID level attribute. Each RG can be divided into several VDs. The VDs in one RG share the same RAID level but may have different volume capacities. All VDs share the CV (cache volume) to execute data transactions. A LUN (logical unit number) is a unique identifier through which users can access a VD with SCSI commands.

Figure 1.2.3.1: Volume relationship. LUN 1, LUN 2 and LUN 3 map to VD 1, VD 2 and an iSnap VD; the VDs belong to one RG built from PD 1, PD 2 and PD 3 with a dedicated spare (DS); all VDs share the cache volume in RAM.
1.3 Storage concepts

Fibre Channel first saw use primarily in the supercomputer field, but has become the standard connection type for storage area networks (SANs) in enterprise storage.

Figure 1.3.1: A simple Fibre Channel SAN. Host 1 and Host 2 (initiators), each with an FC HBA, connect through the SAN to FC device 1 and FC device 2 (targets).

The target is the storage device itself or an appliance that controls and serves volumes or virtual volumes. The target is the device that executes SCSI commands or bridges to an attached storage device.
Chapter 2 Installation
2.1 Package contents

The package contains the following items:

-  One RS362 controller

Contact your supplier if any of the above items are missing or damaged.

A RAM size of DDR2-533 1GB or above is recommended. Please refer to the certification list in Appendix A.
2.2 The Enclosure Description

Figure 2.2.1 shows the enclosure. The numbered callouts are:

Power On / Off Switch: the switch to turn the system on and off
Fan 1, 2, 3, 4: redundant, hot-swappable fan modules
RS 232: for APC UPS
CONSOLE (DB9): for Web GUI
JBOD: for cascading iR16SAEJ
PSU-Module: redundant, hot-swappable power modules
CONSOLE (RJ45): for Web GUI
FC Port: connects the transceiver and fiber cable

FC LED:
•  Constant bright white → loss of sync
•  Blinking bright white → fault, 1 blink / sec
•  Constant amber → 1G link
•  Blinking amber → 1G activity, 4 blinks / sec
•  Constant green → 2G link
•  Blinking green → 2G activity, 4 blinks / sec
•  Constant blue → 4G link
•  Blinking blue → 4G activity, 4 blinks / sec

FC access / fail LED:
•  Yellow → asserted when the FC link is established and packets are being transmitted, along with any receive activity
•  Red → asserted when the FC link cannot be established

FC link LED:
•  Yellow + Blue → asserted when a 1G link is established and maintained
•  Yellow → asserted when a 2G link is established and maintained
•  Blue → asserted when a 4G link is established and maintained
2.3 Make the system connected

Before starting, prepare the following items.

1.  Check the “Certification list” in Appendix A to confirm that the hardware setting is fully supported.
2.  Read the latest release note before upgrading. The release note accompanies its released firmware.
3.  A server with an FC HBA.
4.  FC cables.
5.  CAT 5e or CAT 6 network cables for the management port.
6.  Prepare a storage system configuration plan.
7.  Prepare the management port network information. When using a static IP, please prepare the static IP address, subnet mask, and default gateway.
8.  Set up the hardware connections before powering on the servers. Connect the console cable, management port cable, and FC cables in advance.
9.  Power on the system (and any JBODs) first, and then power on the hosts.
Chapter 3 Quick setup

3.1 Management interfaces

There are several ways to manage the RS362 controller, described below:

3.1.1  Serial console

Use a console cable (null modem cable) to connect the console port of the RS362 controller to the RS 232 port of the management PC. Please refer to figure 2.3.1. The console settings are as follows:

Baud rate: 115200, 8 data bits, no parity, 1 stop bit, and no flow control.
Terminal type: vt100
Login name: admin
Default password: 0000
3.1.2  Remote control

SSH (secure shell) software is required for remote login. SSH client software is available at the following web sites:

SSH Tectia Client: http://www.ssh.com/
PuTTY: http://www.chiark.greenend.org.uk/

Host name: 192.168.10.50 (Please check the DHCP address first on the LCM.)
Login name: admin
Default password: 0000

Tips
The iStoragePro controller only supports SSH for remote control. For SSH, the IP address and password are required for login.
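From a management host, the login above can also be scripted. A minimal sketch that builds the OpenSSH command line (assuming a management PC with an OpenSSH client installed; the address is the example DHCP address from the text, so check the LCM for the real one):

```python
CONTROLLER_IP = "192.168.10.50"   # check the DHCP-assigned address on the LCM first
LOGIN_NAME = "admin"              # default login; default password is 0000

def ssh_command(host, user=LOGIN_NAME):
    """Build the OpenSSH command line used to reach the controller CLI."""
    return ["ssh", f"{user}@{host}"]

cmd = ssh_command(CONTROLLER_IP)
print(" ".join(cmd))  # ssh admin@192.168.10.50
# To actually open the interactive session (prompts for the password):
#     subprocess.run(cmd)
```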
3.1.3  LCM

After booting up the system, the LCM shows the management port IP address and the model name:

192.168.10.50
Figure 3.1.3.1

192.168.10.50
Figure 3.1.3.2

Press the “Enter” button; the LCM functions “System Info.”, “Alarm Mute”, “Reset/Shutdown”, “Quick Install”, “Volume Wizard”, “View IP Setting”, “Change IP Config” and “Reset to Default” rotate as you press ▲ (up) and ▼ (down).

When a WARNING or ERROR event occurs (the LCM default filter), the LCM shows the event log on the front panel to give users more detail.

The following table describes the function of each item.

•  LCM operation description:

System Info.: displays system information.
Alarm Mute: mutes the alarm when an error occurs.
Reset/Shutdown: resets or shuts down the controller.
Quick Install: quick steps to create a volume. Please refer to the next chapter for the detailed operation steps in the web UI.
Volume Wizard: smart steps to create a volume. Please refer to the next chapter for the detailed operation steps in the web UI.
View IP Setting: displays the current IP address, subnet mask, and gateway.
Change IP Config: sets the IP address, subnet mask, and gateway. There are 2 options: DHCP (get an IP address from the DHCP server) or static IP.
Reset to Default: resets the password to the default (0000) and the IP address to the default (DHCP).
    Default IP address: 192.168.10.50 (DHCP)
    Default subnet mask: 255.255.255.0
    Default gateway: 192.168.10.254
•  LCM menu hierarchy:

[System Info.]
    [Firmware Version x.x.x]
    [RAM Size xxx MB]
[Alarm Mute]
    [▲Yes  No▼]
[Reset/Shutdown]
    [Reset]  [▲Yes  No▼]
    [Shutdown]  [▲Yes  No▼]
[Quick Install]
    RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1 (▲▼) → xxx GB
        [Apply The Config]  [▲Yes  No▼]
[Volume Wizard]
    [Local] / [JBOD x] (▲▼)
        RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1
            [Use default algorithm] → [Volume Size] xxx GB → [Apply The Config]  [▲Yes  No▼]
            [new x disk] (▲▼) → xxx GB → Adjust Volume Size → [Apply The Config]  [▲Yes  No▼]
[View IP Setting]
    [IP Config]  [Static IP] / [DHCP]
    [IP Address]  [192.168.010.050]
    [IP Subnet Mask]  [255.255.255.0]
    [IP Gateway]  [192.168.010.254]
[Change IP Config]
    [DHCP]  [▲Yes  No▼]
    [Static IP]
        [IP Address]  Adjust IP address
        [IP Subnet Mask]  Adjust subnet mask
        [IP Gateway]  Adjust gateway IP
        [Apply IP Setting]  [▲Yes  No▼]
[Reset to Default]
    [▲Yes  No▼]
Caution
Before powering off, it is better to execute “Shutdown” to flush the data from cache to the physical disks.
3.1.4  Web UI

The RS362 controller supports a graphic user interface (GUI) for operation. Be sure to connect the LAN cable. The default IP setting is DHCP; open the browser and enter:

http://192.168.10.50 (Please check the DHCP address first on the LCM.)

A dialog then pops up for authentication.

Figure 3.1.4.1

User name: admin
Default password: 0000

After login, choose the functions listed on the left side of the window to make any configuration.

Figure 3.1.4.2

There are six indicators at the top-right corner for backplane solutions.

Figure 3.1.4.3
•  Indicator description:

RAID light: green → RAID works well; red → RAID fails.
Temperature light: green → temperature is normal; red → temperature is abnormal.
Voltage light: green → voltage is normal; red → voltage is abnormal.
UPS light: green → UPS works well; red → UPS fails.
Fan light: green → fan works well; red → fan fails.
Power light: green → power works well; red → power fails.

Additional icons: return to the home page; log out of the management web UI; mute the alarm beeper.
Tips
If the status indicators in Internet Explorer (IE) are displayed in gray instead of blinking red, please enable “Internet Options” → “Advanced” → “Play animations in webpages” in IE. This option is enabled by default, but some applications disable it.
3.2 How to use the system quickly

The following sections describe the quick ways to use this controller.

3.2.1  Quick installation

It is easy to use “Quick install” to create a volume. It uses all physical disks to create an RG; the system calculates the maximum space available for RAID levels 0 / 1 / 3 / 5 / 6 / 0+1. “Quick install” occupies all residual RG space for one VD and leaves no space for snapshots or spares. If snapshots are needed, please create volumes manually and refer to section 5.4 for more detail. If some physical disks are used in other RGs, “Quick install” cannot run, because the operation is valid only when all physical disks in the system are free.

Step 1: Click “Quick install”, then choose the RAID level. After choosing the RAID level, click “Confirm”. It will link to another page.

Figure 3.2.1.1

Step 2: Confirm page. Click “Confirm” if all setups are correct. A VD will then be created.

Step 3: Done. You can start to use the system now.

Figure 3.2.1.2
(Figure 3.2.1.2: A virtual disk of RAID 0 is created and named by the system itself.)
3.2.2  Volume creation wizard

“Volume create wizard” has a smarter policy. When the system has HDDs inserted, “Volume create wizard” lists all possibilities and sizes for the different RAID levels and uses all available HDDs for the RAID level the user chooses. When the system has HDDs of different sizes, e.g., 8 x 200GB and 8 x 80GB, it lists all possibilities and combinations for the different RAID levels and sizes. After the user chooses a RAID level, some HDDs may remain available (free status).

The wizard gives the user:
1.  The biggest capacity of the RAID level to choose, and
2.  The fewest number of disks for the RAID level / volume size.

E.g., the user chooses RAID 5 and the system has 12 x 200GB + 4 x 80GB HDDs inserted. If all 16 HDDs were used for a RAID 5, the maximum volume size would be 1200GB (80GB x 15), since capacity is limited by the smallest member disk. The wizard makes a smarter check and finds the most efficient way of using the HDDs: it uses only the 200GB HDDs (volume size 200GB x 11 = 2200GB), so the volume size is bigger and the HDD capacity is fully used.
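The comparison above can be checked with a short calculation (illustrative only; a RAID 5 of n disks yields n - 1 stripes of the smallest member disk):

```python
def raid5_usable(disk_sizes_gb):
    """RAID 5 usable capacity: (n - 1) stripes of the smallest member disk."""
    n = len(disk_sizes_gb)
    return (n - 1) * min(disk_sizes_gb)

all_disks = [200] * 12 + [80] * 4   # all 16 mixed-size drives
big_only = [200] * 12               # the wizard's choice: 200GB drives only

print(raid5_usable(all_disks))  # 1200  (80GB x 15, limited by the 80GB disks)
print(raid5_usable(big_only))   # 2200  (200GB x 11)
```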
Step 1: Select “/ Volume configuration / Volume create wizard” and then choose the RAID level. After the RAID level is chosen, click “Next”. It will link to the next page.

Figure 3.2.2.1

Step 2: Please select the combination for the RG capacity, or “Use default algorithm” for maximum RG capacity. After the RG size is chosen, click “Next”.

Figure 3.2.2.2

Step 3: Decide the VD size. The user can enter a number less than or equal to the default number. Then click “Next”.

Figure 3.2.2.3

Step 4: Confirm page. Click “Confirm” if all setups are correct. A VD will then be created.

Step 5: Done. You can start to use the system now.

Figure 3.2.2.4
(Figure 3.2.2.4: A virtual disk of RAID 0 is created and named by the system itself.)
Chapter 4 Configuration

4.1 Web UI management interface hierarchy

The table below shows the hierarchy of the web GUI.

Quick installation → Step 1 / Step 2 / Confirm

System configuration
    System setting → System name / Date and time / System indication
    IP address → MAC address / Address / DNS / port
    Login setting → Login configuration / Admin password / User password
    Mail setting → Mail
    Notification setting → SNMP / Messenger / System log server / Event log filter

Volume configuration
    Volume create wizard → Step 1 / Step 2 / Step 3 / Step 4 / Confirm
    Physical disk → Set Free disk / Set Global spare / Set Dedicated spare / Disk Scrub / Upgrade / Turn on/off the indication LED / More information
    RAID group → Create / Migrate / Move / Activate / Deactivate / Parity check / Delete / Set disk property / More information
    Virtual disk → Create / Extend / Parity check / Delete / Set property / Attach LUN / Detach LUN / List LUN / Set clone / Clear clone / Start clone / Stop clone / Schedule clone / Set snapshot space / Cleanup snapshot / Take snapshot / Auto snapshot / List snapshot / More information
    Snapshot → Cleanup / Auto snapshot / Take snapshot / Export / Rollback / Delete
    Logical unit → Attach / Detach

Enclosure management
    SES configuration → Enable / Disable
    Hardware monitor → Auto shutdown
    S.M.A.R.T. (only for SATA disks) → S.M.A.R.T. information
    UPS → UPS Type / Shutdown battery level / Shutdown delay / Shutdown UPS

Maintenance
    System information → System information
    Upgrade → Browse the firmware to upgrade / Export configuration
    Reset to default → Sure to reset to factory default?
    Import and export → Import/Export / Import file
    Event log → Download / Mute / Clear
    Reboot and shutdown → Reboot / Shutdown

Logout → Sure to logout?
4.2 System configuration

“System configuration” is designed for setting up “System setting”, “IP address”, “Login setting”, “Mail setting” and “Notification setting”.

Figure 4.2.1

4.2.1  System setting

“System setting” can set the system name and date. The default “System name” is composed of the model name and the serial number of the system.

Figure 4.2.1.1

Check “Change date and time” to set up the current date, time, and time zone before use, or synchronize the time from an NTP (Network Time Protocol) server. Click “Confirm”.
4.2.2  IP address

Figure 4.2.2.1

“IP address” changes the IP address for remote administration. There are two options: DHCP (get an IP address from the DHCP server) and static IP. The default setting is DHCP. The user can change the HTTP, HTTPS, and SSH port numbers when the default port numbers are not allowed on the host.
4.2.3
Login setting
“Login setting” can set single admin, auto logout time and admin / user password.
The single admin is to prevent multiple users access the same system in the same
time.
1.
2.
Auto logout: The options are (1) Disabled; (2) 5 minutes; (3) 30 minutes; (4) 1
hour. The system will log out automatically when user is inactive for a period of
time.
Login lock: Disabled or Enabled. When the login lock is enabled, the system
allows only one user to login or modify system settings.
Figure 4.2.3.1
Check “Change admin password” or “Change user password” to change the admin or user password. The maximum password length is 12 characters.
4.2.4 Mail setting

“Mail setting” can enter up to 3 mail addresses for receiving event notifications. Some mail servers check the “Mail-from address” and require authentication for anti-spam purposes. Please fill in the necessary fields and click “Send test mail” to verify that email functions are available. The user can also select which levels of event logs are sent via mail. The default setting enables only the ERROR and WARNING event logs. Please also make sure the DNS server IP is set up correctly so that the event notification mails can be sent successfully.

Figure 4.2.4.1
4.2.5 Notification setting

“Notification setting” can set up SNMP traps for alerting via SNMP, pop-up messages via the Windows Messenger service (not MSN), alerts via the syslog protocol, and an event log filter for web UI and LCM notifications.
Figure 4.2.5.1
“SNMP” allows up to 3 SNMP trap addresses. The default community setting is “public”. The user can choose the event log levels; the default setting enables the ERROR and WARNING event logs in SNMP. There are many SNMP tools. The following web sites are for your reference:

SNMPc: http://www.snmpc.com/
Net-SNMP: http://net-snmp.sourceforge.net/

If necessary, click “Download” to get the MIB file and import it into the SNMP tool.
To use “Messenger”, the user must enable the “Messenger” service in Windows (Start → Control Panel → Administrative Tools → Services → Messenger); event logs can then be received. It allows up to 3 messenger addresses. The user can choose the event log levels; the default setting enables the WARNING and ERROR event logs.
Using “System log server”, the user can choose the facility and the event log level. The default syslog port is 514. The default setting enables the INFO, WARNING, and ERROR event logs.

There are some syslog server tools. The following web sites are for your reference:

WinSyslog: http://www.winsyslog.com/
Kiwi Syslog Daemon: http://www.kiwisyslog.com/

Most UNIX systems have a built-in syslog daemon.
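In the syslog protocol, the facility and event level chosen here are encoded into a single priority value (PRI = facility × 8 + severity, per RFC 3164/5424). A minimal sketch of how a receiving tool decodes that value (the function is illustrative and not part of this product):

```python
def decode_pri(pri: int) -> tuple[int, int]:
    """Split a syslog PRI value into (facility, severity).

    PRI = facility * 8 + severity, so the severity is the low
    3 bits and the facility is the remaining high bits.
    """
    return pri // 8, pri % 8

# Example: facility 16 (local0), severity 4 (warning) -> PRI 132
facility, severity = decode_pri(132)
print(facility, severity)
```

This is why a syslog server such as WinSyslog or Kiwi can sort the system's INFO / WARNING / ERROR messages by severity without any extra configuration.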
The “Event log filter” setting can enable event levels for “Pop up events” and “LCM” notifications.
4.3 Volume configuration
“Volume configuration” is designed for setting up volumes, and includes “Volume create wizard”, “Physical disk”, “RAID group”, “Virtual disk”, “Snapshot”, and “Logical unit”.
Figure 4.3.1
4.3.1 Physical disk

“Physical disk” can view the status of the hard drives in the system. The operational steps are as follows:

1. Check the gray button next to the slot number; it shows the functions which can be executed.
2. Active functions can be selected; inactive functions are shown in gray and cannot be selected.

For example, set the PD in slot 4 as a dedicated spare disk.

Step 1: Check the gray button of PD 4 and select “Set Dedicated spare”; it will link to the next page.
Figure 4.3.1.1
Step 2: If there is any RG in a protected RAID level that can be given a dedicated spare disk, select one RG, and then click “Submit”.
Figure 4.3.1.2
Step 3: Done. View “Physical disk” page.
Figure 4.3.1.3
(Figure 4.3.1.3: Physical disks in slots 1, 2, and 3 are used by a RG named “RG-R5”. Slot 4 is set as a dedicated spare disk of the RG named “RG-R5”. The others are free disks.)
Step 4: The unit of size can be changed from GB to MB. The capacity of the hard drives will then be displayed in MB.
Figure 4.3.1.4
•	PD column description:

Slot: The position of the hard drive. The button next to the slot number shows the functions which can be executed.

Size (GB) (MB): Capacity of the hard drive. The unit can be displayed in GB or MB.

RG Name: RAID group name.

Status: The status of the hard drive:
  - “Online” → the hard drive is online.
  - “Rebuilding” → the hard drive is being rebuilt.
  - “Transition” → the hard drive is being migrated, or is replaced by another disk when rebuilding occurs.
  - “Scrubbing” → the hard drive is being scrubbed.

Health: The health of the hard drive:
  - “Good” → the hard drive is good.
  - “Failed” → the hard drive has failed.
  - “Error Alert” → S.M.A.R.T. error alert.
  - “Read Errors” → the hard drive has unrecoverable read errors.

Usage: The usage of the hard drive:
  - “RAID disk” → this hard drive has been assigned to a RAID group.
  - “Free disk” → this hard drive is free for use.
  - “Dedicated spare” → this hard drive has been set as a dedicated spare of a RG.
  - “Global spare” → this hard drive has been set as a global spare of all RGs.

Vendor: Hard drive vendor.

Serial: Hard drive serial number.

Type: Hard drive type:
  - “SATA” → SATA disk.
  - “SATA2” → SATA II disk.
  - “SAS” → SAS disk.

Write cache: Hard drive write cache is enabled or disabled. Default is “Enabled”.

Standby: HDD auto spindown to save power. Default is “Disabled”.

Readahead: This feature loads data into the disk’s buffer in advance for further use. Default is “Enabled”.

Command queuing: Newer SATA and most SCSI disks can queue multiple commands and handle them one by one. Default is “Enabled”.

•	PD operation description:

Set Free disk: Make the selected hard drive free for use.

Set Global spare: Set the selected hard drive as a global spare of all RGs.

Set Dedicated spare: Set a hard drive as a dedicated spare of the selected RG.

Disk Scrub: Scrub the hard drive.

Upgrade: Upgrade the hard drive firmware.

Turn on/off the indication LED: Turn on the indication LED of the hard drive. Click again to turn it off.

More information: Show hard drive detail information.

4.3.2 RAID group
“RAID group” can view the status of each RAID group. The following is an example of creating a RG.

Step 1: Click “Create”, enter a “Name”, choose a “RAID level”, and click “Select PD” to select PDs. Then click “Next”. The “Write Cache” option enables or disables the write cache of the hard drives. The “Standby” option enables or disables the auto spindown function of the hard drives; when this option is enabled and the hard drives have no I/O access for a certain period of time, they spin down automatically. The “Readahead” option enables or disables the read-ahead function. The “Command queuing” option enables or disables the hard drives’ command queue function.
Figure 4.3.2.1
Step 2: Confirm page. Click “Confirm” if all setups are correct.
Figure 4.3.2.2
(Figure 4.3.2.2: There is a RAID 0 with 4 physical disks, named “RG-R0”. The second RAID group is a RAID 5 with 3 physical disks, named “RG-R5”.) (The IR16FC4ER does not have the “Enclosure” column.)
Step 3: Done. View “RAID group” page.
•	RG column description:

No.: RAID group number. The button next to the number includes the functions which can be executed.

Name: RAID group name.

Total (GB) (MB): Total capacity of this RAID group. The unit can be displayed in GB or MB.

Free (GB) (MB): Free capacity of this RAID group. The unit can be displayed in GB or MB.

#PD: The number of physical disks in the RAID group.

#VD: The number of virtual disks in the RAID group.

Status: The status of the RAID group:
  - “Online” → the RAID group is online.
  - “Offline” → the RAID group is offline.
  - “Rebuild” → the RAID group is being rebuilt.
  - “Migrate” → the RAID group is being migrated.
  - “Scrubbing” → the RAID group is being scrubbed.

Health: The health of the RAID group:
  - “Good” → the RAID group is good.
  - “Failed” → the RAID group has failed.
  - “Degraded” → the RAID group is not healthy and not complete. The reason could be a missing disk or a failed disk.

RAID: The RAID level of the RAID group.

Enclosure: The enclosure where the RG is located, e.g., in the local enclosure or in a JBOD enclosure.

•	RG operation description:

Create: Create a RAID group.

Migrate: Change the RAID level of a RAID group. Please refer to the next chapter for details.

Move: Move the member disks of the RAID group to completely different disks.

Activate: Activate the RAID group after disk roaming; it can be executed when the RG status is offline. This is for the online disk roaming purpose.

Deactivate: Deactivate the RAID group before disk roaming; it can be executed when the RG status is online. This is for the online disk roaming purpose.

Parity check: Regenerate parity for the RAID group. It supports RAID 3 / 5 / 6 / 30 / 50 / 60.

Delete: Delete the RAID group.

Set disk property: Change the disk properties for the write cache, standby, read ahead, and command queuing options.
  Write cache:
  - “Enabled” → Enable disk write cache. (Default)
  - “Disabled” → Disable disk write cache.
  Standby:
  - “Disabled” → Disable auto spindown. (Default)
  - “30 sec / 1 min / 5 min / 30 min” → Enable hard drive auto spindown to save power when there is no access after the set period of time.
  Read ahead:
  - “Enabled” → Enable disk read ahead. (Default)
  - “Disabled” → Disable disk read ahead.
  Command queuing:
  - “Enabled” → Enable disk command queue. (Default)
  - “Disabled” → Disable disk command queue.

More information: Show RAID group detail information.

4.3.3 Virtual disk
“Virtual disk” can view the status of each virtual disk, and create and modify virtual disks. The following is an example of creating a VD.

Step 1: Click “Create”, enter a “Name”, select a RAID group from “RG name”, enter the required “Capacity (GB)/(MB)”, change the “Stripe height (KB)”, “Block size (B)”, and “Read/Write” mode, set the virtual disk “Priority”, select the “Bg rate” (background task priority), and change the “Readahead” option if necessary. The “Erase” option wipes out old data in the VD to prevent the OS from recognizing the old partition. There are three options in “Erase”: None (default), erase the first 1 GB, or erase the full disk. Last, select the “Type” mode for normal or clone usage. Then click “Confirm”.

Figure 4.3.3.1

Caution
If the system is shut down or rebooted while a VD is being created, the erase process will stop.
Step 2: Confirm page. Click “Confirm” if all setups are correct.
Figure 4.3.3.2
(Figure 4.3.3.2: Create a VD named “VD-01” from “RG-R0”. The second VD is named “VD-02”; it is initializing.)
Step 3: Done. View “Virtual disk” page.
•	VD column description:

No.: Virtual disk number. The button includes the functions which can be executed.

Name: Virtual disk name.

Size (GB) (MB): Total capacity of the virtual disk. The unit can be displayed in GB or MB.

Write: The right of the virtual disk:
  - “WT” → Write Through.
  - “WB” → Write Back.
  - “RO” → Read Only.

Priority: The priority of the virtual disk:
  - “HI” → HIgh priority.
  - “MD” → MiDdle priority.
  - “LO” → LOw priority.

Bg rate: Background task priority:
  - “4 / 3 / 2 / 1 / 0” → Default value is 4. The higher the background priority of a VD, the more background I/O will be scheduled to execute.

Status: The status of the virtual disk:
  - “Online” → the virtual disk is online.
  - “Offline” → the virtual disk is offline.
  - “Initiating” → the virtual disk is being initialized.
  - “Rebuild” → the virtual disk is being rebuilt.
  - “Migrate” → the virtual disk is being migrated.
  - “Rollback” → the virtual disk is being rolled back.
  - “Parity checking” → the virtual disk is being parity checked.

Clone: The target name of the virtual disk.

Schedule: The clone schedule of the virtual disk.

Type: The type of the virtual disk:
  - “RAID” → the virtual disk is normal.
  - “BACKUP” → the virtual disk is for clone usage.

Health: The health of the virtual disk:
  - “Optimal” → the virtual disk is working well and there is no failed disk in the RG.
  - “Degraded” → at least one disk in the RG of the virtual disk has failed or been plugged out.
  - “Failed” → the RG of the VD has more failed disks than its RAID level can recover from without data loss.
  - “Partially optimal” → the virtual disk has experienced recoverable read errors. After passing a parity check, the health will become “Optimal”.

R%: Ratio (%) of initializing or rebuilding.

RAID: RAID level.

#LUN: Number of LUN(s) that the virtual disk is attached to.

Snapshot (GB) (MB): The virtual disk size that is used for snapshots. The number means “Used snapshot space” / “Total snapshot space”. The unit can be displayed in GB or MB.

#Snapshot: Number of snapshot(s) that have been taken.

RG name: The RG name of the virtual disk.

•	VD operation description:

Create: Create a virtual disk.

Extend: Extend the virtual disk capacity.

Parity check: Execute a parity check on the virtual disk. It supports RAID 3 / 5 / 6 / 30 / 50 / 60.
  Regenerate parity:
  - “Yes” → Regenerate RAID parity and write it.
  - “No” → Execute the parity check only and find mismatches. Checking stops when the mismatch count reaches 1 / 10 / 20 / … / 100.

Delete: Delete the virtual disk.

Set property: Change the VD name, right, priority, bg rate, read ahead, and type.
  Right:
  - “WT” → Write Through.
  - “WB” → Write Back. (Default)
  - “RO” → Read Only.
  Priority:
  - “HI” → HIgh priority. (Default)
  - “MD” → MiDdle priority.
  - “LO” → LOw priority.
  Bg rate:
  - “4 / 3 / 2 / 1 / 0” → Default value is 4. The higher the background priority of a VD, the more background I/O will be scheduled to execute.
  Read ahead:
  - “Enabled” → Enable disk read ahead. (Default)
  - “Disabled” → Disable disk read ahead.
  Type:
  - “RAID” → the virtual disk is normal. (Default)
  - “Backup” → the virtual disk is for clone usage.

Attach LUN: Attach a LUN.

Detach LUN: Detach a LUN.

List LUN: List attached LUN(s).

Set clone: Set the target virtual disk for clone.

Clear clone: Clear the clone function.

Start clone: Start the clone function.

Stop clone: Stop the clone function.

Schedule clone: Set the clone function by schedule.

Set snapshot space: Set snapshot space for taking snapshots. Please refer to the next chapter for more detail.

Cleanup snapshot: Clean all snapshots of a VD and release the snapshot space.

Take snapshot: Take a snapshot of the virtual disk.

Auto snapshot: Set auto snapshot on the virtual disk.

List snapshot: List all snapshots of the virtual disk.

More information: Show virtual disk detail information.

4.3.4 Snapshot
“Snapshot” can view the status of snapshots, and create and modify snapshots. Please refer to the next chapter for more detail about the snapshot concept. The following is an example of taking a snapshot.

Step 1: Create snapshot space. In “/ Volume configuration / Virtual disk”, check the gray button next to the VD number; click “Set snapshot space”.

Step 2: Set the snapshot space. Then click “Confirm”. The snapshot space is created.
Figure 4.3.4.1
Figure 4.3.4.2
(Figure 4.3.4.2: “VD-01” snapshot space has been created; the snapshot space is 15 GB, and 3 GB is used for saving the snapshot index.)
Step 3: Take a snapshot. In “/ Volume configuration / Snapshot”, click “Take snapshot”. It will link to the next page. Enter a snapshot name.

Figure 4.3.4.3

Step 4: Expose the snapshot VD. Check the gray button next to the snapshot VD number; click “Expose”. Enter a capacity for the snapshot VD. If the size is zero, the exposed snapshot VD will be read-only. Otherwise, the exposed snapshot VD can be read / written, and the size will be the maximum capacity to read / write. The IR16FC4ER supports read-only and writable snapshots.
Figure 4.3.4.4
Figure 4.3.4.5
(Figure 4.3.4.5: This is the snapshot list of “VD-01”. There are two snapshots. Snapshot VD “SnapVD-01” is exposed as read-only; “SnapVD-02” is exposed as read-write.)
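The read-only / read-write split shown in Figure 4.3.4.5 follows directly from the capacity entered in Step 4. A sketch of the rule (the helper function is illustrative, not part of the product):

```python
def snapshot_exposure_right(size_gb: int) -> str:
    """Return the access right of an exposed snapshot VD.

    A size of zero exposes the snapshot read-only; any positive
    size exposes it read-write, with that size as the write limit.
    """
    if size_gb < 0:
        raise ValueError("size cannot be negative")
    return "Read-only" if size_gb == 0 else "Read-write"

print(snapshot_exposure_right(0))   # SnapVD-01 in the figure
print(snapshot_exposure_right(10))  # SnapVD-02 in the figure
```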
Step 5: Attach a LUN to the snapshot VD. Please refer to the next section for attaching a LUN.

Step 6: Done. The snapshot VD can be used.
•	Snapshot column description:

No.: The number of the snapshot VD. The button next to the snapshot VD number includes the functions which can be executed.

Name: Snapshot VD name.

Used (GB) (MB): The amount of snapshot space that has been used. The unit can be displayed in GB or MB.

Status: The status of the snapshot:
  - “N/A” → the snapshot is normal.
  - “Replicated” → the snapshot is for clone.
  - “Abort” → the snapshot is over space and aborted.

Health: The health of the snapshot:
  - “Good” → the snapshot is good.
  - “Failed” → the snapshot has failed.

Exposure: Whether the snapshot VD is exposed or not.

Right: The right of the snapshot:
  - “Read-write” → the snapshot VD can be read / written.
  - “Read-only” → the snapshot VD is read-only.

#LUN: Number of LUN(s) that the snapshot VD is attached to.

Created time: Snapshot VD created time.

•	Snapshot operation description:

Expose / Unexpose: Expose / unexpose the snapshot VD.

Rollback: Roll back the snapshot VD.

Delete: Delete the snapshot VD.

Attach: Attach a LUN.

Detach: Detach a LUN.

List LUN: List attached LUN(s).

4.3.5 Logical unit
“Logical unit” can view, create, and modify the logical unit number(s) attached to each VD.

The user can attach a LUN by clicking “Attach”. “Host” must be entered with an FC node name for access control, or with the wildcard “*”, which means every host can access the volume. Choose a LUN number and permission, and then click “Confirm”.
Figure 4.3.5.1
Figure 4.3.5.2
(Figure 4.3.5.2: VD-01 is attached to LUN 0 and every host can access it. VD-02 is attached to LUN 1 and only the host with FC node name “2001001378AC00E5” can access it.)
•	LUN operation description:

Attach: Attach a logical unit number to a virtual disk.

Detach: Detach a logical unit number from a virtual disk.
The matching rules of access control follow the LUNs’ creation time: the earlier-created LUN takes priority in matching. For example, there are 2 LUN rules for the same VD: one is “*”, LUN 0; the other is “FC node name1”, LUN 1. Another host, “FC node name2”, can log in successfully because it matches rule 1 (“*”).
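The first-match-by-creation-time behavior described above can be sketched as follows (a hypothetical helper for illustration only; it is not the controller's actual code):

```python
def match_lun(rules, host_node_name):
    """Return the LUN of the first matching access-control rule.

    `rules` is a list of (host_pattern, lun) pairs ordered by LUN
    creation time (earliest first). "*" matches any host; otherwise
    the FC node name must match exactly.
    """
    for host_pattern, lun in rules:
        if host_pattern == "*" or host_pattern == host_node_name:
            return lun
    return None  # no rule matches: the host gets no access

# The example from the text: rule 1 is ("*", 0), rule 2 is a named host.
rules = [("*", 0), ("FC node name1", 1)]
print(match_lun(rules, "FC node name2"))  # matches the "*" rule, LUN 0
```

Because the wildcard rule was created first, it shadows the later named rule for every host, which is exactly why rule order (creation time) matters here.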
4.3.6 Example

The following is an example of creating volumes. This example creates two VDs in one RG; each VD shares the cache volume. The cache volume is created automatically after the system boots up. Then a global spare disk is set. Last, all of them are deleted.

Step 1: Create a RG (RAID group).

To create a RAID group, please follow the procedures:
Figure 4.3.6.1
1. Select “/ Volume configuration / RAID group”.
2. Click “Create”.
3. Input a RG name, choose a RAID level from the list, click “Select PD” to choose the RAID physical disks, then click “Next”.
4. Check the settings. Click “Confirm” if all setups are correct.
5. Done. A RG has been created.
Figure 4.3.6.2
(Figure 4.3.6.2: Creating a RAID 5 with 3 physical disks, named “RG-R5”.)
Step 2: Create a VD (virtual disk).

To create a data user volume, please follow the procedures.

Figure 4.3.6.3

1. Select “/ Volume configuration / Virtual disk”.
2. Click “Create”.
3. Input a VD name, choose a RG name, and enter a size for this VD; decide the stripe height, block size, read / write mode, bg rate, and priority; finally click “Confirm”.
4. Done. A VD has been created.
5. Follow the above steps to create another VD.
Figure 4.3.6.4
(Figure 4.3.6.4: Creating VDs named “VD-R5-1” and “VD-R5-2” from RAID group “RG-R5”. The size of “VD-R5-1” is 50GB, and the size of “VD-R5-2” is 64GB. There is no LUN attached.)
Step 3: Attach a LUN to a VD.

There are 2 methods to attach a LUN to a VD:

1. In “/ Volume configuration / Virtual disk”, check the gray button next to the VD number; click “Attach LUN”.
2. In “/ Volume configuration / Logical unit”, click “Attach”.

The procedures are as follows:

Figure 4.3.6.5

1. Select a VD.
2. Input the “Host” name, which is an FC node name for access control, or the wildcard “*”, which means every host can access this volume. Choose a LUN and permission, and then click “Confirm”.
3. Done.
Figure 4.3.6.6
(Figure 4.3.6.6: VD-R5-1 is attached to LUN 0. VD-R5-2 is attached to LUN 1.)
Tips
The matching rules of access control follow the LUNs’ creation time; the earlier-created LUN takes priority in matching.
Step 4: Set a global spare disk.

To set a global spare disk, please follow the procedures.

1. Select “/ Volume configuration / Physical disk”.
2. Check the gray button next to the PD slot; click “Set Global spare”.
3. The “Global spare” status is shown in the “Usage” column.

Figure 4.3.6.7

(Figure 4.3.6.7: Slot 4 is set as a global spare disk.)

Step 5: Done.
To delete the VDs and the RG, please follow the steps below.

Step 6: Detach a LUN from the VD.

In “/ Volume configuration / Logical unit”:

Figure 4.3.6.8

1. Check the gray button next to the LUN; click “Detach”. A confirmation page will pop up.
2. Choose “OK”.
3. Done.
Step 7: Delete a VD (virtual disk).

To delete a virtual disk, please follow the procedures:

1. Select “/ Volume configuration / Virtual disk”.
2. Check the gray button next to the VD number; click “Delete”. A confirmation page will pop up; click “OK”.
3. Done. The VD is deleted.

Tips
When deleting a VD directly, the LUN(s) attached to the VD will be detached together.
Step 8: Delete a RG (RAID group).

To delete a RAID group, please follow the procedures:

1. Select “/ Volume configuration / RAID group”.
2. Select a RG whose VDs have all been deleted; otherwise the RG cannot be deleted.
3. Check the gray button next to the RG number; click “Delete”.
4. A confirmation page will pop up; click “OK”.
5. Done. The RG has been deleted.

Tips
Deleting a RG will succeed only when all of the related VD(s) in the RG have been deleted. Otherwise, the RG cannot be deleted.
Step 9: Free a global spare disk.

To free a global spare disk, please follow the procedures.

1. Select “/ Volume configuration / Physical disk”.
2. Check the gray button next to the PD slot; click “Set Free disk”.

Step 10: Done. All volumes have been deleted.
4.4 Enclosure management
“Enclosure management” allows managing enclosure information, including “SES configuration”, “Hardware monitor”, “S.M.A.R.T.”, and “UPS”. For enclosure management, there are many sensors for different purposes, such as temperature sensors, voltage sensors, hard disk status, fan sensors, power sensors, and LED status. Due to the different hardware characteristics of these sensors, they have different polling intervals. Below are the details of the polling intervals:

1. Temperature sensors: 1 minute.
2. Voltage sensors: 1 minute.
3. Hard disk sensors: 10 minutes.
4. Fan sensors: 10 seconds. When there are 3 consecutive errors, the system sends an ERROR event log.
5. Power sensors: 10 seconds. When there are 3 consecutive errors, the system sends an ERROR event log.
6. LED status: 10 seconds.
Figure 4.4.1
4.4.1 SES configuration

SES stands for SCSI Enclosure Services, one of the enclosure management standards. “SES configuration” can enable or disable the management of SES.
Figure 4.4.1.1
(Figure 4.4.1.1: SES is enabled in LUN 0 and can be accessed from every host.)
The SES client software is available at the following web site:
SANtools: http://www.santools.com/
4.4.2 Hardware monitor

“Hardware monitor” can view the information of current voltages and temperatures.
Figure 4.4.2.1
If “Auto shutdown” is checked, the system will shut down automatically when a voltage or temperature is out of the normal range. For better data protection, please check “Auto Shutdown”.

For better protection, and to avoid a single short period of high temperature triggering auto shutdown, the system uses multiple condition judgments to trigger auto shutdown. Below are the details of when auto shutdown will be triggered.

1. There are several sensors placed in the system for temperature checking. The system checks each sensor every 30 seconds. When one of these sensors is over its high temperature threshold for 3 continuous minutes, auto shutdown will be triggered immediately.
2. The core processor temperature limit is 80°C. The onboard SAS device temperature limit is 80°C. The backplane board temperature limit is 58°C.
3. If the high temperature situation does not last for 3 minutes, the system will not trigger auto shutdown.
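The 30-second sampling and 3-minute persistence rule above can be sketched like this (a simplified illustration; the thresholds come from the text, but the code itself is not the product's implementation):

```python
SAMPLE_INTERVAL_S = 30
REQUIRED_OVER_S = 3 * 60  # must stay over threshold for 3 continuous minutes

def should_auto_shutdown(samples, threshold_c):
    """Return True if readings stay over the threshold for 3 minutes.

    `samples` are consecutive sensor readings taken every 30 seconds,
    so 6 consecutive over-threshold samples cover 3 minutes.
    """
    needed = REQUIRED_OVER_S // SAMPLE_INTERVAL_S  # 6 samples
    streak = 0
    for temp in samples:
        streak = streak + 1 if temp > threshold_c else 0
        if streak >= needed:
            return True
    return False

# A brief spike does not trigger shutdown; sustained heat does.
spike = [85, 85, 70, 85, 85, 70, 85, 85]
sustained = [85] * 6
print(should_auto_shutdown(spike, 80))      # False
print(should_auto_shutdown(sustained, 80))  # True
```

This is the "multiple condition judgment" in miniature: a single hot sample resets nothing by itself, and only an unbroken 3-minute streak trips the shutdown.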
4.4.3 Hard drive S.M.A.R.T. support

S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) is a diagnostic tool for hard drives that delivers warnings of drive failures in advance, giving users the chance to take action before a possible failure.

S.M.A.R.T. continuously measures many attributes of the hard drive and inspects those which are close to being out of tolerance. Advance notice of a possible hard drive failure allows users to back up the drive or replace it. This is much better than a hard drive crashing while it is writing data or rebuilding a failed hard drive.

“S.M.A.R.T.” can display the S.M.A.R.T. information of hard drives. The first number is the current value; the number in parentheses is the threshold value. Threshold values differ between hard drive vendors; please refer to the hard drive vendors’ specifications for details.

S.M.A.R.T. only supports SATA drives. SAS drives do not have this function at present; the web page shows N/A for SAS drives.
Figure 4.4.3.1
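In S.M.A.R.T. reporting, a normalized attribute value falling to or below its vendor threshold signals impending failure; that is what the "current value (threshold)" pairing on this page lets you check by eye. A sketch of the same check in code (the attribute names and values are illustrative):

```python
def failing_attributes(attributes):
    """Return names of attributes at or below their vendor threshold.

    `attributes` maps attribute name -> (current_value, threshold).
    Normalized S.M.A.R.T. values decrease as a drive degrades, so
    current <= threshold means the attribute has tripped.
    """
    return [name for name, (current, threshold) in attributes.items()
            if current <= threshold]

disk = {
    "Reallocated Sector Count": (100, 36),
    "Spin Up Time": (21, 25),   # tripped: current fell below threshold
}
print(failing_attributes(disk))  # ['Spin Up Time']
```

A drive with any tripped attribute is a candidate for backup and replacement before it fails outright.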
4.4.4 UPS

“UPS” can set up a UPS (Uninterruptible Power Supply).

Figure 4.4.4.1

(Figure 4.4.4.1: Without UPS.)

Currently, the system only supports and communicates with the Smart-UPS of APC (American Power Conversion Corp.). Please review the details on the website: http://www.apc.com/.

First, connect the system and the APC UPS via RS-232 for communication. Then set up the shutdown values (shutdown battery level %) for when power fails. UPSs from other companies can work, but they have no such communication feature with the system.
Figure 4.4.4.2
(Figure 4.4.4.2: With Smart-UPS.)
•	UPS column description:

UPS Type: Select the UPS type. Choose Smart-UPS for APC; choose None for other vendors or no UPS.

Shutdown Battery Level (%): When the battery falls below this level, the system will shut down. Setting the level to “0” disables the UPS function.

Shutdown Delay (s): If a power failure occurs and system power does not recover within this delay, the system will shut down. Setting the delay to “0” disables the function.

Shutdown UPS: Select ON so that when power is gone, the UPS will shut itself down after the system has shut down successfully. After power comes back, the UPS will start working and notify the system to boot up. OFF will not.

Status: The status of the UPS:
  - “Detecting…”
  - “Running”
  - “Unable to detect UPS”
  - “Communication lost”
  - “UPS reboot in progress”
  - “UPS shutdown in progress”
  - “Batteries failed. Please change them NOW!”

Battery Level (%): Current power percentage of the battery level.
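The shutdown-battery-level setting above combines with the zero-disables rule as follows (a hedged sketch; the behavior is taken from the table, the function itself is illustrative):

```python
def ups_should_shutdown(battery_pct, shutdown_level_pct):
    """Decide whether the system should shut down on UPS battery.

    Shutdown triggers when the battery falls below the configured
    level; a level of 0 disables the UPS-driven shutdown entirely.
    """
    if shutdown_level_pct == 0:
        return False
    return battery_pct < shutdown_level_pct

print(ups_should_shutdown(15, 20))  # True: battery below the set level
print(ups_should_shutdown(50, 20))  # False: plenty of battery left
print(ups_should_shutdown(5, 0))    # False: level 0 disables the function
```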
4.5 System maintenance
“Maintenance” allows the operation of system functions, including “System information” to show the system version and details, “Upgrade” to upgrade to the latest firmware, “Reset to factory default” to reset all controller configuration values to factory settings, “Import and export” to import and export all controller configuration to and from a file, “Event log” to view the system event log, which records critical events, and “Reboot and shutdown” to reboot or shut down the system.
Figure 4.5.1
4.5.1 System information

“System information” can display system information, including firmware version, CPU type, installed system memory, serial number, and backplane ID.
Figure 4.5.1.1
4.5.2 Upgrade

“Upgrade” can upgrade the firmware. Please prepare the new firmware file named “xxxx.bin” on the local hard drive, then click “Browse” to select the file. Click “Confirm”; a message will pop up: “Upgrade system now? If you want to downgrade to the previous FW later (not recommended), please export your system configuration in advance”. Click “Cancel” to export the system configuration first, or click “OK” to start upgrading the firmware.
Figure 4.5.2.1
Figure 4.5.2.2
When upgrading, a progress bar is shown. After the upgrade finishes, the system must be rebooted manually for the new firmware to take effect.

Tips
Please contact info@iStoragePro.com for the latest firmware.
4.5.3 Reset to factory default

“Reset to factory default” allows the user to reset the controller to the factory default settings.

Figure 4.5.3.1

After resetting, the password is 0000 and the IP address returns to the default DHCP setting.

Default IP address: 192.168.10.50 (DHCP)
Default subnet mask: 255.255.255.0
Default gateway: 192.168.10.254
4.5.4 Import and export

“Import and export” allows the user to save the system configuration values (export) and to apply a saved configuration (import). The volume configuration settings are included in export but excluded from import, which avoids conflicts and data deletion between two controllers: if one system already has valuable volumes on its disks, a user might forget this and overwrite them. Import can therefore restore the original system configuration; if the volume settings were also imported, the user’s current volumes would be overwritten with a different configuration.
Figure 4.5.4.1
1. Import: Import all system configurations excluding the volume configuration.
2. Export: Export all configurations to a file.

Caution
“Import” will import all system configurations excluding the volume configuration; the current configurations will be replaced.
4.5.5 Event log

“Event log” can view the event messages. Check the INFO, WARNING, and ERROR checkboxes to choose the levels of event logs to display. Click the “Download” button to save the whole event log as a text file with the file name “log-ModelName-SerialNumber-Date-Time.txt”. Click the “Clear” button to clear all event logs. Click the “Mute” button to stop the alarm if the system is alerting.
Figure 4.5.5.1
The event log is displayed in reverse order, which means the latest event log is on the first / top page. The event logs are actually saved in the first four hard drives; each hard drive has one copy of the event log. For one system there are four copies of the event log, to make sure users can check the event log at any time even when there are failed disks.

Tips
Please keep any of the first four hard drives plugged in; the event logs can then be saved and displayed at the next system boot. Otherwise, the event logs cannot be saved and will disappear.
4.5.6 Reboot and shutdown

“Reboot and shutdown” can “Reboot” and “Shutdown” the system. Before powering off, it is better to execute “Shutdown” to flush the data from the cache to the physical disks. This step is necessary for data protection.
Figure 4.5.6.1
4.6 Logout
For security reasons, “Logout” allows users to log out when no one is operating the system. To log in again, please enter the username and password.
Chapter 5 Advanced operations
5.1 Volume rebuild
If one physical disk of a RG set to a protected RAID level (e.g. RAID 3, RAID 5, or RAID 6) fails or has been unplugged / removed, the status of the RG changes to degraded mode, and the system searches for a spare disk to rebuild the degraded RG into a complete one. It uses a dedicated spare disk as the rebuild disk first, then a global spare disk.
iStoragePro controllers support Auto-Rebuild. The following scenario takes RAID 6 as an example:
1. When there is no global spare disk or dedicated spare disk in the system, the RG will be in degraded mode and wait until (1) one disk is assigned as a spare disk, or (2) the failed disk is removed and replaced with a new clean disk; then Auto-Rebuild starts. The new disk automatically becomes a spare disk of the original RG. If the newly added disk is not clean (it carries other RG information), it will be marked as RS (reserved) and the system will not start auto-rebuild. If the disk does not belong to any existing RG, it will be an FR (Free) disk and the system will start Auto-Rebuild. If the user only removes the failed disk and plugs the same failed disk into the same slot again, auto-rebuild will start running; however, rebuilding onto the same failed disk may impact customer data if the disk is unstable. iStoragePro suggests that customers not rebuild onto a failed disk, for better data protection.
2. When there are enough global spare disk(s) or dedicated spare disk(s) for the degraded array, the system starts Auto-Rebuild immediately. In RAID 6, if another disk failure occurs during rebuilding, the system will start the above Auto-Rebuild process as well. The Auto-Rebuild feature only works when the status of the RG is “Online”; it will not work at “Offline”. Thus, it will not conflict with the “Online roaming” feature.
3. In degraded mode, the status of the RG is “Degraded”. When rebuilding, the status of the RG / VD will be “Rebuild”, and the “R%” column of the VD will display the ratio in percentage. After rebuilding completes, the status will become “Online”, and the RG will be complete again.
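The spare-selection order described above (a dedicated spare of the degraded RG first, then a global spare; RS disks are never used) can be sketched like this (an illustrative model, not the controller's actual code):

```python
def pick_rebuild_disk(disks, rg_name):
    """Pick a spare disk for rebuilding the degraded `rg_name`.

    Preference order from the text: a dedicated spare of this RG
    first, then a global spare. Disks marked RS (reserved, carrying
    another RG's information) are never candidates for auto-rebuild.
    """
    dedicated = [d for d in disks
                 if d["usage"] == "Dedicated spare" and d["rg"] == rg_name]
    if dedicated:
        return dedicated[0]
    global_spares = [d for d in disks if d["usage"] == "Global spare"]
    return global_spares[0] if global_spares else None

disks = [
    {"slot": 4, "usage": "Global spare", "rg": None},
    {"slot": 5, "usage": "Dedicated spare", "rg": "RG-R5"},
    {"slot": 6, "usage": "RS", "rg": "RG-OLD"},
]
chosen = pick_rebuild_disk(disks, "RG-R5")
print(chosen["slot"])  # 5: the dedicated spare wins over the global spare
```

If no spare of either kind exists, the function returns None, which corresponds to the RG waiting in degraded mode as described in point 1.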
Tips
“Set dedicated spare” is not available if there is no RG, or if the only RGs are RAID 0 or JBOD, because a dedicated spare disk cannot be set for RAID 0 or JBOD.
Rebuild is sometimes called recover; the two terms mean the same thing. The
following table shows the relationship between RAID levels and rebuild.
• Rebuild operation description:
RAID 0 | Disk striping. No data protection. The RG fails if any hard drive fails or is unplugged.
RAID 1 | Disk mirroring over 2 disks. RAID 1 tolerates one hard drive failing or being unplugged. One new hard drive must be inserted into the system and rebuilt for the RG to be complete again.
N-way mirror | Extension of RAID 1. It keeps N copies of the disk. An N-way mirror tolerates N-1 hard drives failing or being unplugged.
RAID 3 | Striping with parity on a dedicated disk. RAID 3 tolerates one hard drive failing or being unplugged.
RAID 5 | Striping with interspersed parity over the member disks. RAID 5 tolerates one hard drive failing or being unplugged.
RAID 6 | 2-dimensional parity protection over the member disks. RAID 6 tolerates two hard drives failing or being unplugged. If two hard drives need to be rebuilt at the same time, the first one is rebuilt, then the other, in sequence.
RAID 0+1 | Mirroring of RAID 0 volumes. RAID 0+1 tolerates two hard drives failing or being unplugged, provided they are in the same array.
RAID 10 | Striping over the members of RAID 1 volumes. RAID 10 tolerates two hard drives failing or being unplugged, provided they are in different arrays.
RAID 30 | Striping over the members of RAID 3 volumes. RAID 30 tolerates two hard drives failing or being unplugged, provided they are in different arrays.
RAID 50 | Striping over the members of RAID 5 volumes. RAID 50 tolerates two hard drives failing or being unplugged, provided they are in different arrays.
RAID 60 | Striping over the members of RAID 6 volumes. RAID 60 tolerates four hard drives failing or being unplugged, every two in different arrays.
JBOD | The abbreviation of “Just a Bunch Of Disks”. No data protection. The RG fails if any hard drive fails or is unplugged.
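The tolerances in the table above can be summarized in a small lookup. This is a plain illustration of the table, not controller firmware; the names are ours.

```python
# Failure tolerance per RAID level, summarizing the table above
# (a plain lookup for illustration; names are not a controller API).

TOLERATED_FAILURES = {
    "RAID 0": 0,     # striping only, no protection
    "RAID 1": 1,     # 2-disk mirror
    "RAID 3": 1,
    "RAID 5": 1,
    "RAID 6": 2,     # two rebuilds run in sequence, not in parallel
    "RAID 0+1": 2,   # both failures must be in the same array
    "RAID 10": 2,    # failures must be in different arrays
    "RAID 30": 2,
    "RAID 50": 2,
    "RAID 60": 4,    # every two failures in different arrays
    "JBOD": 0,
}

def n_way_mirror_tolerance(n):
    """An N-way mirror keeps N copies, so it survives N-1 failures."""
    return n - 1

print(TOLERATED_FAILURES["RAID 6"], n_way_mirror_tolerance(3))  # 2 2
```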
5.2 RG migration
To migrate the RAID level, please follow the procedure below.
1. Select “/ Volume configuration / RAID group”.
2. Check the gray button next to the RG number; click “Migrate”.
3. Change the RAID level by clicking the down arrow, for example to “RAID 5”.
A pop-up may indicate that there are not enough hard drives to support the new
RAID level; click “Select PD” to add hard drives, then click “Confirm” to go
back to the setup page. When migrating to a lower RAID level, for example from
RAID 6 to RAID 0, the system evaluates whether the operation is safe and shows
the warning message “Sure to migrate to a lower protection array?”.
Figure 5.2.1
4. Double-check the RAID level and RAID PD slot settings. If there is no
problem, click “Next”.
5. Finally, a confirmation page shows the detailed RAID information. If there
is no problem, click “Confirm” to start the migration. The system also pops up
the message “Warning: power lost during migration may cause damage of data!”
to warn the user: if power is lost abnormally during the migration, the data
is at high risk.
6. Migration starts, and the “status” of the RG shows “Migrating”. In
“/ Volume configuration / Virtual disk”, the “Status” column shows “Migrating”
and the “R%” column shows the completed percentage of the migration.
Figure 5.2.2
(Figure 5.2.2: A RAID 0 with 3 physical disks migrates to RAID 5 with 4 physical disks.)
(IR16FC4ER does not have “Enclosure” column.)
Figure 5.2.3
To perform a migration, the total size of the new RG must be larger than or
equal to that of the original RG. Expanding to the same RAID level with the
same hard disks as the original RG is not allowed.
The following operations are not allowed while an RG is being migrated; the
system rejects them:
1. Add a dedicated spare.
2. Remove a dedicated spare.
3. Create a new VD.
4. Delete a VD.
5. Extend a VD.
6. Scrub a VD.
7. Perform another migration operation.
8. Scrub the entire RG.
9. Take a snapshot.
10. Delete a snapshot.
11. Expose a snapshot.
12. Rollback to a snapshot.
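The rule above amounts to a simple guard on the RG status. A minimal sketch, assuming a hypothetical helper (the real Web UI enforces this internally):

```python
# Sketch of the rule above: while an RG is migrating, the twelve listed
# volume operations are rejected (hypothetical helper, not Web UI code).

BLOCKED_DURING_MIGRATION = {
    "add dedicated spare", "remove dedicated spare",
    "create vd", "delete vd", "extend vd", "scrub vd",
    "migrate rg", "scrub rg",
    "take snapshot", "delete snapshot", "expose snapshot",
    "rollback snapshot",
}

def allowed(operation, rg_status):
    """Reject the operations above whenever the RG status is 'Migrating'."""
    if rg_status == "Migrating" and operation in BLOCKED_DURING_MIGRATION:
        return False
    return True

print(allowed("take snapshot", "Migrating"))  # False
print(allowed("take snapshot", "Online"))     # True
```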
Caution
RG Migration cannot be executed during rebuilding or VD extension.
5.3 VD extension
To extend the VD size, please follow the procedure below.
1. Select “/ Volume configuration / Virtual disk”.
2. Check the gray button next to the VD number; click “Extend”.
3. Change the size. The size must be larger than the original; then click
“Confirm” to start the extension.
Figure 5.3.1
4. The extension starts. If the VD needs initialization, the “Status” column
shows “Initiating” and the “R%” column shows the completed percentage of the
initialization.
Figure 5.3.2
Tips
The size of a VD extension must be larger than the original size.
Caution
VD Extension cannot be executed during rebuilding or migration.
5.4 iSnap
Snapshot-on-the-box (iSnap) captures the instant state of data in the target
volume in a logical sense. The underlying logic is copy-on-write: when a write
occurs somewhere after the time of capture, the data about to be overwritten
at that location is moved out first. The destination, named the “Snap VD”, is
essentially a new VD which can be attached to a LUN and provisioned to a host
as a disk, like any other VD in the system. Rollback restores the data back to
the state of any previously captured point in time, in case of any unfortunate
event (e.g. virus attack, data corruption, human error, and so on). The Snap
VD is allocated within the same RG in which the snapshot is taken, so we
suggest reserving 20% of the RG size or more for snapshot space. Please refer
to the following figure for the snapshot concept.
Figure 5.4.1
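The copy-on-write behavior described above can be illustrated with a toy volume model. This is purely illustrative; the block-level details of iSnap are not documented in this manual.

```python
# Minimal copy-on-write sketch of the iSnap idea: on the first write to a
# block after a snapshot, the original block is preserved in the snapshot
# area before being overwritten (toy model, not the iSnap implementation).

class CowVolume:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # live data
        self.snap = None             # block index -> original content

    def take_snapshot(self):
        self.snap = {}               # nothing is copied until a write occurs

    def write(self, index, data):
        if self.snap is not None and index not in self.snap:
            self.snap[index] = self.blocks[index]  # preserve old data once
        self.blocks[index] = data

    def snapshot_view(self):
        # The snapshot = current blocks with the preserved originals restored.
        return [self.snap.get(i, b) for i, b in enumerate(self.blocks)]

vol = CowVolume(["a", "b", "c"])
vol.take_snapshot()
vol.write(1, "B")
print(vol.blocks)           # ['a', 'B', 'c']
print(vol.snapshot_view())  # ['a', 'b', 'c']
```

Only the changed block is stored in the snapshot area, which is why iSnap saves disk space compared with a full copy.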
5.4.1 Create snapshot volume
To take a snapshot of the data, please follow the procedure below.
1. Select “/ Volume configuration / Virtual disk”.
2. Check the gray button next to the VD number; click “Set snapshot space”.
3. Set up the snapshot size. The suggested minimum size is 20% of the VD
size; then click “OK”. The page goes back to the VD page and the size shows
in the snapshot column. It may not be the same as the number entered, because
some space is reserved for internal snapshot usage. There are two numbers in
the “Snapshot” column: “Used snapshot space” and “Total snapshot space”.
4. There are two ways to take a snapshot. In “/ Volume configuration /
Virtual disk”, check the gray button next to the VD number and click “Take
snapshot”. Or, in “/ Volume configuration / Snapshot”, click “Take snapshot”.
5. Enter a snapshot name, then click “OK”. A snapshot VD is created.
6. Select “/ Volume configuration / Snapshot” to display all snapshot VDs
taken from the VD.
Figure 5.4.1.1
7. Check the gray button next to the snapshot VD number; click “Expose”.
Enter a capacity for the snapshot VD. If the size is zero, the exposed
snapshot VD is read-only. Otherwise, the exposed snapshot VD can be read and
written, and the size is the maximum capacity for reading and writing.
IR16FC4ER supports both read-only and writable snapshots.
8. Attach a LUN to the snapshot VD. Please refer to the previous chapter for
attaching a LUN.
9. Done. It can be used as a disk.
Figure 5.4.1.2
(Figure 5.4.1.2: This is the snapshot list of “VD-01”. There are two
snapshots: snapshot VD “SnapVD-01” is exposed as read-only, and “SnapVD-02”
is exposed as read-write. IR16FC4ER supports both read-only and read-write
snapshots.)
1. There are two ways to clean up all snapshots. In “/ Volume configuration /
Virtual disk”, check the gray button next to the VD number and click “Cleanup
snapshot”. Or, in “/ Volume configuration / Snapshot”, click “Cleanup”.
2. “Cleanup” deletes all snapshots of the VD and releases the snapshot space.
5.4.2 Auto snapshot
Snapshot copies can be taken manually or on a schedule, such as hourly or
daily. Please follow the procedure below.
1. There are two ways to set auto snapshot. In “/ Volume configuration /
Virtual disk”, check the gray button next to the VD number and click “Auto
snapshot”. Or, in “/ Volume configuration / Snapshot”, click “Auto snapshot”.
2. Auto snapshot can be set monthly, weekly, daily, or hourly.
3. Done. Snapshots will be taken automatically.
Figure 5.4.2.1
(Figure 5.4.2.1: It will take snapshots every month, and keep the last 32 snapshot copies.)
Tips
Daily snapshots are taken at 00:00. Weekly snapshots are taken every Sunday
at 00:00. Monthly snapshots are taken on the first day of each month at 00:00.
5.4.3 Rollback
The data in a snapshot VD can be rolled back to the original VD. Please
follow the procedure below.
1. Select “/ Volume configuration / Snapshot”.
2. Check the gray button next to the snapshot VD number whose data should be
rolled back; click “Rollback”.
3. Done. The data in the snapshot VD is rolled back to the original VD.
Caution
Before executing rollback, it is better to unmount the file system first so
the OS flushes data from cache to disk. The system shows a pop-up message
when the user executes the rollback function.
5.4.4 iSnap constraint
The iStoragePro snapshot function applies the copy-on-write technique to the
UDV/VD and provides a quick and efficient backup methodology. When a snapshot
is taken, no data is copied at first; copying starts only when a request to
modify data comes in. The snapshot copies the original data to the snapshot
space and then overwrites the original data with the new changes. With this
technique, the snapshot copies only the changed data instead of all of the
data, which saves a lot of disk space.
• Create a data-consistent snapshot
Before using snapshots, the user should understand why data is sometimes
corrupted after a snapshot rollback. Please refer to the following diagram.
When the user modifies data from the host, the data passes through the file
system and the memory of the host (write caching). The host then flushes the
data from memory to the physical disks, whether the disk is a local disk (IDE
or SATA), DAS (SCSI or SAS), or SAN (Fibre or iSCSI). From the viewpoint of
the storage device, it cannot control the behavior of the host side. If the
user takes a snapshot while some data is still in memory and not yet flushed
to disk, the snapshot may contain an incomplete image of the original data.
This problem does not belong to the storage device. To avoid this
inconsistency between the snapshot and the original data, the user has to
make the operating system flush the data from host memory (write caching) to
disk before taking a snapshot.
Figure 5.4.4.1
On Linux and UNIX platforms, the sync command can be used to make the
operating system flush data from write caching to disk. For the Windows
platform, Microsoft provides a tool, also named sync, which does exactly the
same thing as the sync command on Linux/UNIX: it tells the OS to flush the
data on demand. For more detail about the sync tool, please refer to:
http://technet.microsoft.com/en-us/sysinternals/bb897438.aspx
Besides the sync tool, Microsoft developed VSS (Volume Shadow Copy Service)
to prevent this issue. VSS is a mechanism for creating consistent
point-in-time copies of data, known as shadow copies. It coordinates backup
software, applications (SQL, Exchange, etc.) and storage to make sure
snapshots are free of the data-inconsistency problem. For more detail about
VSS, please refer to http://technet.microsoft.com/en-us/library/cc785914.aspx.
iStoragePro IR16FC4ER supports Microsoft VSS.
• What if the snapshot space runs out?
Before using snapshots, snapshot space must be allocated from the RG
capacity. After snapshots have been working for a while, what happens if the
snapshot data outgrows the user-defined snapshot space? There are two
different situations:
1. If two or more snapshots exist, the system tries to remove the oldest
snapshots (to release more space for the latest snapshot) until enough space
is released.
2. If only one snapshot exists, the snapshot fails, because the snapshot
space has run out.
For example, suppose two or more snapshots exist on a VD and the latest
snapshot keeps growing. When the snapshot space runs out, the system tries to
remove the oldest snapshot to release more space for the latest snapshot. As
the latest snapshot keeps growing, the system keeps removing old snapshots.
When the latest snapshot becomes the only one in the system, no more snapshot
space can be released for incoming changes, and the snapshot fails.
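The reclaim behavior in both situations can be sketched as follows. This is an illustration of the documented policy, not firmware code; the function and its arguments are invented for the example.

```python
# Sketch of the space-reclaim policy described above: oldest snapshots
# are removed until enough space is free; with only one snapshot left,
# the snapshot fails instead (hypothetical helper).

def reclaim(snapshots, needed, free):
    """snapshots: oldest-first list of (name, size). Returns the surviving
    list, or None when reclaiming cannot satisfy the request (fail)."""
    snaps = list(snapshots)
    while free < needed:
        if len(snaps) <= 1:
            return None            # only the latest snapshot remains: fail
        name, size = snaps.pop(0)  # delete the oldest snapshot
        free += size
    return snaps

print(reclaim([("s1", 10), ("s2", 10), ("s3", 5)], needed=15, free=0))
```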
• How many snapshots can be created on a VD?
Up to 32 snapshots can be created on a UDV/VD. What happens when the 33rd
snapshot is taken? There are two different situations:
1. If the snapshot is configured as an auto snapshot, the latest one (the
33rd snapshot) replaces the oldest one (the first snapshot), and so on.
2. If the snapshot is taken manually, taking the 33rd snapshot fails and a
warning message is shown on the Web UI.
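The 32-snapshot limit behaves like a ring buffer for auto snapshots and a hard cap for manual ones. A minimal sketch of that rule (illustrative only; names are ours):

```python
# Sketch of the 32-snapshot limit described above: auto snapshot rotates
# (drops the oldest), while a manual 33rd snapshot is refused.

MAX_SNAPSHOTS = 32

def take_snapshot(snapshots, name, auto):
    """snapshots: oldest-first list of names. Returns True when taken."""
    if len(snapshots) >= MAX_SNAPSHOTS:
        if not auto:
            return False           # manual snapshot fails with a warning
        snapshots.pop(0)           # auto snapshot replaces the oldest
    snapshots.append(name)
    return True

snaps = [f"snap{i}" for i in range(32)]
print(take_snapshot(snaps, "snap32", auto=True), snaps[0])  # True snap1
print(take_snapshot(list(snaps), "again", auto=False))      # False
```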
• Rollback / Delete snapshot
When a snapshot has been rolled back, the snapshots that are earlier than it
are also removed, but the remaining snapshots are kept after the rollback. If
a snapshot has been deleted, the snapshots that are earlier than it are also
deleted. The space occupied by these snapshots is released after they are
deleted.
5.5 Disk roaming
Physical disks can be re-sequenced in the same system, or all physical disks
of the same RAID group can be moved from system-1 to system-2. This is called
disk roaming. The system can execute disk roaming online. Please follow the
procedure below.
1. Select “/ Volume configuration / RAID group”.
2. Check the gray button next to the RG number; click “Deactivate”.
3. Move all PDs of the RG to the other system.
4. Check the gray button next to the RG number; click “Activate”.
5. Done.
Disk roaming has the following constraints:
1. Check the firmware versions of the two systems first. Both systems should
have the same firmware version, or system-2 should have a newer firmware
version.
2. All physical disks of the RG should be moved from system-1 to system-2
together. The configuration of both the RG and the VD is kept, but the LUN
configuration is cleared, in order to avoid conflict with system-2's original
settings.
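The firmware constraint above is a simple "destination same or newer" comparison. A sketch, assuming dotted version strings (a hypothetical check; the real systems compare versions internally):

```python
# Sketch of the roaming constraint above: the destination system should
# run the same or a newer firmware version (hypothetical version check,
# assuming dotted numeric version strings like "1.2.0").

def roaming_ok(src_fw, dst_fw):
    """True when the destination firmware is the same or newer."""
    parse = lambda v: tuple(int(x) for x in v.split("."))
    return parse(dst_fw) >= parse(src_fw)

print(roaming_ok("1.2.0", "1.3.1"))  # True: destination is newer
print(roaming_ok("2.0.0", "1.9.9"))  # False: destination is older
```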
5.6 VD clone
The user can use the VD clone function to back up data from a source VD to a
target VD, set up a backup schedule, and deploy the clone rules.
The VD clone procedure is as follows:
1. Copy all data from the source VD to the target VD at the beginning (full copy).
2. Use iSnap technology to perform incremental copies afterwards. Please be
aware that the incremental copy uses snapshots to compare the data
differences; therefore, sufficient snapshot space for the VD clone is very
important.
The following example clones a RAID 5 virtual disk (SourceVD_R5) to a RAID 6
virtual disk (TargetVD_R6).
• Start VD clone
1. Create a RAID group (RG) in advance.
Figure 5.6.1
2. Create two virtual disks (VD), “SourceVD_R5” and “TargetVD_R6”. The RAID
type of the backup target needs to be set as “BACKUP”.
Figure 5.6.2
3. Here are the objects: a source VD and a target VD. Before starting the
clone process, the VD clone rule needs to be deployed first. Click
“Configuration”.
Figure 5.6.3
4. There are three clone configurations, described below.
Figure 5.6.4
• Snapshot space:
This setting is the ratio of the source VD to the snapshot space. The default
ratio is 2 to 1: when the clone process starts, the system automatically uses
free RG space to create a snapshot space whose capacity is double that of the
source VD.
• Threshold: (the setting takes effect after schedule clone is enabled)
Figure 5.6.5
The threshold setting monitors the usage of the snapshot space. When the used
snapshot space reaches the threshold, the system automatically takes a clone
snapshot and starts the VD clone process. The purpose of the threshold is to
prevent the incremental copy from failing immediately upon running out of
snapshot space.
For example, with the default threshold of 50%, the system checks the
snapshot space every hour. When more than 50% of the snapshot space has been
used, the system synchronizes the source VD and target VD automatically. The
next time 50% of the remaining snapshot space has been used, in other words
when 75% of the total snapshot space has been used, the system synchronizes
the source VD and target VD again.
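The arithmetic of the example above: each synchronization fires when half of the remaining snapshot space is consumed, so the sync points fall at 50%, 75%, 87.5%, ... of the total space. A small illustration:

```python
# The 50% threshold example above: each sync happens when half of the
# remaining snapshot space is used, so the sync points form a geometric
# sequence toward 100% (illustrative arithmetic only).

def sync_points(threshold=0.5, count=3):
    points, used = [], 0.0
    for _ in range(count):
        used += (1.0 - used) * threshold  # half of the remaining space
        points.append(used)
    return points

print([f"{p:.1%}" for p in sync_points()])  # ['50.0%', '75.0%', '87.5%']
```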
• Restart the task an hour later if failed: (the setting takes effect after
schedule clone is enabled)
Figure 5.6.6
When the snapshot space runs out, the VD clone process stops, because no more
snapshot space is available. If this option is checked, the system
automatically clears the clone snapshots to release snapshot space, and the
VD clone restarts the task an hour later. The restarted task performs a full
copy.
5. After deploying the VD clone rule, the VD clone process can be started.
First, click “Set clone” on the VD named “SourceVD_R5” to set the target VD.
Figure 5.6.7
6. Select the target VD, then click “Confirm”.
Figure 5.6.8
7. Now the clone target “TargetVD_R6” has been set.
Figure 5.6.9
8. Click “Start clone”; the clone process starts.
Figure 5.6.10
9. By default, the system automatically creates a snapshot space whose
capacity is double the size of the VD space. Before starting the clone, the
system initializes the snapshot space.
Figure 5.6.11
10. After initializing the snapshot space, cloning starts.
Figure 5.6.12
11. Click “Schedule clone” to set up a clone schedule.
Figure 5.6.13
12. This page provides “Set Clone schedule” and “Clear Clone schedule”.
Please remember that the “Threshold” and “Restart the task an hour later if
failed” options in the VD configuration take effect only after a clone
schedule has been set.
Figure 5.6.14
• Running out of snapshot space while cloning
If the incremental data of the VD exceeds the snapshot space while a clone is
in progress, the clone completes, but the clone snapshot fails. The next time
a clone is started, a warning message appears: “This is not enough of
snapshot space for the operation”. The user then needs to clean up the
snapshot space in order to run the clone process. Each time the clone
snapshot fails, the system loses the reference point for the incremental
data, so the next clone process starts with a full copy.
When the snapshot space runs out, the flow of the VD clone procedure is as
follows.
Figure 5.6.15
5.7 SAS JBOD expansion
5.7.1 Connecting JBOD
IR16FC4ER has a SAS JBOD expansion port to connect extra SAS JBOD
controllers. When a SAS JBOD is connected and detected, tabs are displayed at
the top of “/ Volume configuration / Physical disk”, for example: Local,
JBOD 1 (vendor model), JBOD 2 (vendor model), etc. Local means the disks in
the local controller, and so on. The disks in a JBOD can be used as local
disks.
Figure 5.7.1
(Figure 5.7.1: Display all PDs in JBOD 1.)
“/ Enclosure management / S.M.A.R.T.” can display S.M.A.R.T. information of all
PDs, including Local and all SAS JBODs.
Figure 5.7.2
(Figure 5.7.2: Disk S.M.A.R.T. information of Local and JBOD 1; note that
S.M.A.R.T. is supported for SATA disks only.)
SAS JBOD expansion has the following constraints:
1. Up to 4 SAS JBODs can be cascaded.
2. An RG cannot use PDs located in different systems; an RG must be composed
of PDs that are all in Local or all in one SAS JBOD.
3. A global spare disk only supports RGs located in the same system.
5.7.2 Upgrade firmware of JBOD
To upgrade the firmware of a JBOD, please follow the procedure below.
1. There is a hidden web page for JBOD firmware upgrade. Please log in to the
Web UI as username admin first, and then enter this URL in the browser
(http://Management IP/jbod_upg.php), for example:
http://192.168.10.50/jbod_upg.php
Figure 5.7.2.1
2. Choose the JBOD to upgrade.
3. Prepare the new firmware file on a local hard drive, then click “Browse”
to select the file. Click “Confirm”.
4. After the upgrade finishes, the system must be rebooted manually for the
new firmware to take effect.
Chapter 6 Troubleshooting
6.1 System buzzer
The system buzzer features are listed below:
1. The system buzzer alarms for 1 second when the system boots up
successfully.
2. The system buzzer alarms continuously when an error occurs. The alarm
stops after the error is resolved or the buzzer is muted.
3. The alarm is muted automatically when the error is resolved. For example,
when a RAID 5 array is degraded, the alarm rings immediately; the user
changes or adds one physical disk for rebuilding. When the rebuilding is
done, the alarm is muted automatically.
6.2 Event notifications
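The tables below tag each event with a level (INFO, WARNING, ERROR). A monitoring script reading an exported event log might keep only the serious entries, along these lines (the log format here is hypothetical; only the level names come from the manual):

```python
# Filter exported events by severity, using the INFO/WARNING/ERROR levels
# from the tables below (hypothetical log format, illustrative helper).

SEVERITY = {"INFO": 0, "WARNING": 1, "ERROR": 2}

def filter_events(events, minimum="WARNING"):
    """Keep events at or above the given level."""
    floor = SEVERITY[minimum]
    return [e for e in events if SEVERITY[e["level"]] >= floor]

log = [
    {"level": "INFO", "type": "PD inserted"},
    {"level": "ERROR", "type": "HDD error"},
    {"level": "WARNING", "type": "RG degraded"},
]
print([e["type"] for e in filter_events(log)])  # ['HDD error', 'RG degraded']
```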
• PD events
Level | Type | Description
INFO | PD inserted | Disk <slot> is inserted into system
WARNING | PD removed | Disk <slot> is removed from system
ERROR | HDD read error | Disk <slot> read block error
ERROR | HDD write error | Disk <slot> write block error
ERROR | HDD error | Disk <slot> is disabled
ERROR | HDD IO timeout | Disk <slot> gets no response
INFO | PD upgrade started | PD [<string>] starts upgrading firmware process.
INFO | PD upgrade finished | PD [<string>] finished upgrading firmware process.
WARNING | PD upgrade failed | PD [<string>] upgrade firmware failed.
• HW events
Level | Type | Description
WARNING | ECC single | Single-bit ECC error is detected at <address>
ERROR | ECC multiple | Multi-bit ECC error is detected at <address>
INFO | ECC dimm | ECC memory is installed
INFO | ECC none | Non-ECC memory is installed
INFO | SCSI bus reset | Received SCSI Bus Reset event at the SCSI Bus <number>
ERROR | SCSI host error | SCSI Host allocation failed
ERROR | SATA enable device fail | Failed to enable the SATA pci device
ERROR | SATA EDMA mem fail | Failed to allocate memory for SATA EDMA
ERROR | SATA remap mem fail | Failed to remap SATA memory io space
ERROR | SATA PRD mem fail | Failed to init SATA PRD memory manager
ERROR | SATA revision id fail | Failed to get SATA revision id
ERROR | SATA set reg fail | Failed to set SATA register
ERROR | SATA init fail | Core failed to initialize the SATA adapter
ERROR | SATA diag fail | SATA Adapter diagnostics failed
ERROR | Mode ID fail | SATA Mode ID failed
ERROR | SATA chip count error | SATA Chip count error
INFO | SAS port reply error | SAS HBA port <number> reply terminated abnormally
INFO | SAS unknown port reply error | SAS frontend reply terminated abnormally
INFO | FC port reply error | FC HBA port <number> reply terminated abnormally
INFO | FC unknown port reply error | FC frontend reply terminated abnormally
• EMS events
Level | Type | Description
INFO | Power install | Power(<string>) is installed
ERROR | Power absent | Power(<string>) is absent
INFO | Power restore | Power(<string>) is restored to work.
ERROR | Power fail | Power(<string>) is not functioning
WARNING | Power detect | PSU signal detection(<string>)
INFO | Fan restore | Fan(<string>) is restored to work.
ERROR | Fan fail | Fan(<string>) is not functioning
INFO | Fan install | Fan(<string>) is installed
ERROR | Fan not present | Fan(<string>) is not present
ERROR | Fan over speed | Fan(<string>) is over speed
WARNING | Thermal level 1 | System temperature(<string>) is higher.
ERROR | Thermal level 2 | System Overheated(<string>)!!!
ERROR | Thermal level 2 shutdown | System Overheated(<string>)!!! The system will auto-shutdown immediately.
ERROR | Thermal level 2 CTR shutdown | The controller will auto shutdown immediately, reason [ Overheated(<string>) ].
WARNING | Thermal ignore value | Unable to update thermal value on <string>
WARNING | Voltage level 1 | System voltage(<string>) is higher/lower.
ERROR | Voltage level 2 | System voltages(<string>) failed!!!
ERROR | Voltage level 2 shutdown | System voltages(<string>) failed!!! The system will auto-shutdown immediately.
ERROR | Voltage level 2 CTR shutdown | The controller will auto shutdown immediately, reason [ Voltage abnormal(<string>) ].
INFO | UPS OK | Successfully detect UPS
WARNING | UPS fail | Failed to detect UPS
ERROR | UPS AC loss | AC loss for system is detected
ERROR | UPS power low | UPS Power Low!!! The system will auto-shutdown immediately.
WARNING | SMART T.E.C. | Disk <slot> S.M.A.R.T. Threshold Exceed Condition occurred for attribute <string>
WARNING | SMART fail | Disk <slot>: Failure to get S.M.A.R.T information
WARNING | RedBoot failover | RedBoot failover event occurred
WARNING | Watchdog shutdown | Watchdog timeout shutdown occurred
WARNING | Watchdog reset | Watchdog timeout reset occurred
• RMS events
Level | Type | Description
INFO | Console Login | <username> login from <IP or serial console> via Console UI
INFO | Console Logout | <username> logout from <IP or serial console> via Console UI
INFO | Web Login | <username> login from <IP> via Web UI
INFO | Web Logout | <username> logout from <IP> via Web UI
INFO | Log clear | All event logs are cleared
WARNING | Send mail fail | Failed to send event to <email>.
• LVM events
Level | Type | Description
INFO | RG create OK | RG <name> has been created.
INFO | RG create fail | Failed to create RG <name>.
INFO | RG delete | RG <name> has been deleted.
INFO | RG rename | RG <name> has been renamed as <name>.
INFO | VD create OK | VD <name> has been created.
INFO | VD create fail | Failed to create VD <name>.
INFO | VD delete | VD <name> has been deleted.
INFO | VD rename | Name of VD <name> has been renamed to <name>.
INFO | VD read only | Cache policy of VD <name> has been set as read only.
INFO | VD write back | Cache policy of VD <name> has been set as write-back.
INFO | VD write through | Cache policy of VD <name> has been set as write-through.
INFO | VD extend | Size of VD <name> extends.
INFO | VD attach LUN OK | VD <name> has been LUN-attached.
INFO | VD attach LUN fail | Failed to attach LUN to VD <name>.
INFO | VD detach LUN OK | VD <name> has been detached.
INFO | VD detach LUN fail | Failed to detach LUN from bus <number>, SCSI ID <number>, lun <number>.
INFO | VD init started | VD <name> starts initialization.
INFO | VD init finished | VD <name> completes initialization.
WARNING | VD init failed | Failed to complete initialization of VD <name>.
INFO | VD rebuild started | VD <name> starts rebuilding.
INFO | VD rebuild finished | VD <name> completes rebuilding.
WARNING | VD rebuild failed | Failed to complete rebuild of VD <name>.
INFO | VD migrate started | VD <name> starts migration.
INFO | VD migrate finished | VD <name> completes migration.
ERROR | VD migrate failed | Failed to complete migration of VD <name>.
INFO | VD scrub started | Parity checking on VD <name> starts.
INFO | VD scrub finished | Parity checking on VD <name> completes with <address> parity/data inconsistency found.
INFO | VD scrub aborted | Parity checking on VD <name> stops with <address> parity/data inconsistency found.
INFO | RG migrate started | RG <name> starts migration.
INFO | RG migrate finished | RG <name> completes migration.
INFO | RG move started | RG <name> starts move.
INFO | RG move finished | RG <name> completes move.
INFO | VD move started | VD <name> starts move.
INFO | VD move finished | VD <name> completes move.
ERROR | VD move failed | Failed to complete move of VD <name>.
INFO | RG activated | RG <name> has been manually activated.
INFO | RG deactivated | RG <name> has been manually deactivated.
INFO | VD rewrite started | Rewrite at LBA <address> of VD <name> starts.
INFO | VD rewrite finished | Rewrite at LBA <address> of VD <name> completes.
WARNING | VD rewrite failed | Rewrite at LBA <address> of VD <name> failed.
WARNING | RG degraded | RG <name> is in degraded mode.
WARNING | VD degraded | VD <name> is in degraded mode.
ERROR | RG failed | RG <name> is failed.
ERROR | VD failed | VD <name> is failed.
ERROR | VD IO fault | I/O failure for stripe number <address> in VD <name>.
WARNING | Recoverable read error | Recoverable read error occurred at LBA <address>-<address> of VD <name>.
WARNING | Recoverable write error | Recoverable write error occurred at LBA <address>-<address> of VD <name>.
ERROR | Unrecoverable read error | Unrecoverable read error occurred at LBA <address>-<address> of VD <name>.
ERROR | Unrecoverable write error | Unrecoverable write error occurred at LBA <address>-<address> of VD <name>.
ERROR | Config read fail | Config read failed at LBA <address>-<address> of PD <slot>.
ERROR | Config write fail | Config write failed at LBA <address>-<address> of PD <slot>.
ERROR | CV boot error adjust global | Failed to change size of the global cache.
INFO | CV boot global | The global cache is ok.
ERROR | CV boot error create global | Failed to create the global cache.
INFO | PD dedicated spare | Assign PD <slot> to be the dedicated spare disk of RG <name>.
INFO | PD global spare | Assign PD <slot> to Global Spare Disks.
WARNING | PD read error | Read error occurred at LBA <address>-<address> of PD <slot>.
WARNING | PD write error | Write error occurred at LBA <address>-<address> of PD <slot>.
WARNING | Scrub wrong parity | The parity/data inconsistency is found at LBA <address>-<address> when checking parity on VD <name>.
WARNING | Scrub data recovered | The data at LBA <address>-<address> is recovered when checking parity on VD <name>.
WARNING | Scrub recovered data | A recoverable read error occurred at LBA <address>-<address> when checking parity on VD <name>.
WARNING | Scrub parity recovered | The parity at LBA <address>-<address> is regenerated when checking parity on VD <name>.
INFO | PD freed | PD <slot> has been freed from RG <name>.
INFO | RG imported | Configuration of RG <name> has been imported.
INFO | RG restored | Configuration of RG <name> has been restored.
INFO | VD restored | Configuration of VD <name> has been restored.
INFO | PD scrub started | PD <slot> starts disk scrubbing process.
INFO | Disk scrub finished | PD <slot> completed disk scrubbing process.
INFO | Large RG created | A large RG <name> with <number> disks included is created
INFO | Weak RG created | A RG <name> made up disks across <number> chassis is created
INFO | RG size shrunk | The total size of RG <name> shrunk
INFO | VD erase finished | VD <name> finished erasing process.
WARNING | VD erase failed | The erasing process of VD <name> failed.
INFO | VD erase started | VD <name> starts erasing process.
• Snapshot events
Level | Type | Description
WARNING | Snap mem | Failed to allocate snapshot memory for VD <name>.
WARNING | Snap space overflow | Failed to allocate snapshot space for VD <name>.
WARNING | Snap threshold | The snapshot space threshold of VD <name> has been reached.
INFO | Snap delete | The snapshot VD <name> has been deleted.
INFO | Snap auto delete | The oldest snapshot VD <name> has been deleted to obtain extra snapshot space.
INFO | Snap take | A snapshot on VD <name> has been taken.
INFO | Snap set space | Set the snapshot space of VD <name> to <number> MB.
INFO | Snap rollback started | Snapshot rollback of VD <name> has been started.
INFO | Snap rollback finished | Snapshot rollback of VD <name> has been finished.
WARNING | Snap quota reached | The quota assigned to snapshot <name> is reached.
INFO | Snap clear space | The snapshot space of VD <name> is cleared
• iSCSI events
Level | Type | Description
INFO | iSCSI login accepted | iSCSI login from <IP> succeeds.
INFO | iSCSI login rejected | iSCSI login from <IP> was rejected, reason [<string>]
INFO | iSCSI logout recvd | iSCSI logout from <IP> was received, reason [<string>].
• Battery backup events
Level | Type | Description
INFO | BBM start syncing | Abnormal shutdown detected, start flushing battery-backed data (<number> KB).
INFO | BBM stop syncing | Abnormal shutdown detected, flushing battery-backed data finished
INFO | BBM installed | Battery backup module is detected
INFO | BBM status good | Battery backup module is good
INFO | BBM status charging | Battery backup module is charging
WARNING | BBM status fail | Battery backup module is failed
INFO | BBM enabled | Battery backup feature is <string>.
INFO | BBM inserted | Battery backup module is inserted
INFO | BBM removed | Battery backup module is removed
•	JBOD events

Level    Type                Description
INFO     PD upgrade started   JBOD <name> PD [<string>] starts upgrading firmware process.
INFO     PD upgrade finished  JBOD <name> PD [<string>] finished upgrading firmware process.
WARNING  PD upgrade failed    JBOD <name> PD [<string>] upgrade firmware failed.
INFO     PD freed             JBOD <name> PD <slot> has been freed from RG <name>.
INFO     PD inserted          JBOD <name> disk <slot> is inserted into system.
WARNING  PD removed           JBOD <name> disk <slot> is removed from system.
ERROR    HDD read error       JBOD <name> disk <slot> read block error
ERROR    HDD write error      JBOD <name> disk <slot> write block error
ERROR    HDD error            JBOD <name> disk <slot> is disabled.
ERROR    HDD IO timeout       JBOD <name> disk <slot> gets no response
INFO     JBOD inserted        JBOD <name> is inserted into system
WARNING  JBOD removed         JBOD <name> is removed from system
WARNING  SMART T.E.C          JBOD <name> disk <slot>: S.M.A.R.T. Threshold Exceed Condition occurred for attribute <string>
WARNING  SMART fail           JBOD <name> disk <slot>: Failure to get S.M.A.R.T. information
INFO     PD dedicated spare   Assign JBOD <name> PD <slot> to be the dedicated spare disk of RG <name>.
INFO     PD global spare      Assign JBOD <name> PD <slot> to Global Spare Disks.
ERROR    Config read fail     Config read error occurred at LBA <address>-<address> of JBOD <name> PD <slot>.
ERROR    Config write fail    Config write error occurred at LBA <address>-<address> of JBOD <name> PD <slot>.
WARNING  PD read error        Read error occurred at LBA <address>-<address> of JBOD <name> PD <slot>.
WARNING  PD write error       Write error occurred at LBA <address>-<address> of JBOD <name> PD <slot>.
INFO     PD scrub started     JBOD <name> PD <slot> starts disk scrubbing process.
INFO     PD scrub completed   JBOD <name> PD <slot> completed disk scrubbing process.
WARNING  PS fail              Power Supply of <string> in JBOD <name> is FAIL
INFO     PS normal            Power Supply of <string> in JBOD <name> is NORMAL
WARNING  FAN fail             Cooling fan of <string> in JBOD <name> is FAIL
INFO     FAN normal           Cooling fan of <string> in JBOD <name> is NORMAL
WARNING  Volt warn OV         Voltage of <string> read as <string> in JBOD <name> is WARN OVER
WARNING  Volt warn UV         Voltage of <string> read as <string> in JBOD <name> is WARN UNDER
WARNING  Volt crit OV         Voltage of <string> read as <string> in JBOD <name> is CRIT OVER
WARNING  Volt crit UV         Voltage of <string> read as <string> in JBOD <name> is CRIT UNDER
INFO     Volt recovery        Voltage of <string> in JBOD <name> is NORMAL
WARNING  Therm warn OT        Temperature of <string> read as <string> in JBOD <name> is OT WARNING
WARNING  Therm warn UT        Temperature of <string> read as <string> in JBOD <name> is UT WARNING
WARNING  Therm fail OT        Temperature of <string> read as <string> in JBOD <name> is OT FAILURE
WARNING  Therm fail UT        Temperature of <string> read as <string> in JBOD <name> is UT FAILURE
INFO     Therm recovery       Temperature of <string> in JBOD <name> is NORMAL
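The descriptions in these event tables are message templates: the angle-bracket tokens (<name>, <slot>, <IP>, <string>, <address>) are filled in at run time. As a rough illustration of how a monitoring script might match raw event-log lines against them, the sketch below turns a template into a regular expression; the function name and the sample log line are illustrative, not part of the controller software (it assumes Python 3.7+ `re.escape` behavior).

```python
import re

def compile_template(template: str):
    """Turn a documented event template into a compiled regex plus the
    ordered list of placeholder names it contains."""
    names = re.findall(r"<(\w+)>", template)
    # Escape the literal text, then swap each <placeholder> for a lazy capture group.
    pattern = re.sub(r"<\w+>", "(.+?)", re.escape(template))
    return re.compile("^" + pattern + "$"), names

# Match a hypothetical raw log line against a template from the JBOD table.
pat, names = compile_template("JBOD <name> disk <slot> is inserted into system.")
m = pat.match("JBOD J2 disk 5 is inserted into system.")
print(dict(zip(names, m.groups())))  # {'name': 'J2', 'slot': '5'}
```

Unnamed capture groups are used deliberately: some templates repeat a placeholder (for example `LBA <address>-<address>`), which named groups would reject.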
•	System maintenance events

Level    Type                    Description
INFO     System shutdown          System shutdown.
INFO     System reboot            System reboot.
INFO     System console shutdown  System shutdown from <string> via Console UI
INFO     System web shutdown      System shutdown from <string> via Web UI
INFO     System button shutdown   System shutdown via power button
INFO     System LCM shutdown      System shutdown via LCM
INFO     System console reboot    System reboot from <string> via Console UI
INFO     System web reboot        System reboot from <string> via Web UI
INFO     System LCM reboot        System reboot via LCM
INFO     FW upgrade start         System firmware upgrade starts.
INFO     FW upgrade success       System firmware upgrade succeeds.
WARNING  FW upgrade failure       System firmware upgrade failed.
ERROR    IPC FW upgrade timeout   System firmware upgrade timeout on another controller
INFO     Config imported          <string> config imported
•	HAC events

Level    Type                     Description
INFO     RG owner changed         The preferred owner of RG <name> has been changed to controller <number>.
INFO     Force CTR write through  Controller <number> forced to adopt write-through mode on failover.
INFO     Restore CTR cache mode   Controller <number> restored to previous caching mode on failback.
INFO     Failover complete        All volumes in controller <number> completed failover process.
INFO     Failback complete        All volumes in controller <number> completed failback process.
INFO     CTR inserted             Controller <number> is inserted into system
ERROR    CTR removed              Controller <number> is removed from system
ERROR    CTR timeout              Controller <number> gets no response
ERROR    CTR lockdown             Controller <number> is locked down
ERROR    CTR memory NG            Memory size mismatch
ERROR    CTR firmware NG          Firmware version mismatch
ERROR    CTR lowspeed NG          Low speed inter link is down
ERROR    CTR highspeed NG         High speed inter link is down
ERROR    CTR backend NG           SAS expander is down
ERROR    CTR frontend NG          FC IO controller is down
INFO     CTR reboot FW sync       Controller reboot, reason [Firmware synchronization completed]
•	Clone events

Level    Type                Description
INFO     VD clone started    VD <name> starts cloning process.
INFO     VD clone finished   VD <name> finished cloning process.
WARNING  VD clone failed     The cloning in VD <name> failed.
INFO     VD clone aborted    The cloning in VD <name> was aborted.
INFO     VD clone set        The clone of VD <name> has been designated.
INFO     VD clone reset      The clone of VD <name> is no longer designated.
WARNING  Auto clone error    Auto clone task: <string>.
WARNING  Auto clone no snap  Auto clone task: Snapshot <name> is not found for VD <name>.
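Across all of the event tables above, the Level column takes only the values INFO, WARNING, and ERROR. When forwarding these events to a syslog server, a small lookup keeps severity filtering consistent. The mapping below to RFC 5424 numeric severities (6 = informational, 4 = warning, 3 = error) is a common convention and an assumption on our part, not something this manual specifies.

```python
# Map the manual's three event levels to RFC 5424 syslog severity codes.
# The numeric choices are conventional, not prescribed by this manual.
SEVERITY = {"INFO": 6, "WARNING": 4, "ERROR": 3}

def severity_of(level: str) -> int:
    """Return the syslog severity for an event level; unknown levels
    fall back to 5 (notice)."""
    return SEVERITY.get(level.strip().upper(), 5)

print(severity_of("Warning"))  # 4
```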
A. Certification list

•	RAM

RSF362 RAM Spec: 240-pin, DDR2-533 (PC4300), Reg. (registered) or UB (unbuffered), ECC, up to 4GB, 64-bit data bus width (and also 32-bit memory), x8 or x16 devices, 36-bit addressable, up to 14-bit row address and 10-bit column address.
Vendor    Model
ATP       AJ64K72F8BHE6S, 512MB DDR2-667 (Unbuffered, ECC) with SEC
ATP       AJ28K64E8BHE6S, 1GB DDR2-667 (Unbuffered, non-ECC) with SEC
ATP       AJ28K72G8BHE6S, 1GB DDR2-667 (Unbuffered, ECC) with SEC
ATP       AJ56K72G8BJE6S, 2GB DDR2-667 (Unbuffered, ECC) with Samsung
Kingston  KVR667D2E5/1G, 1GB DDR2-667 (Unbuffered, ECC) with Hynix
Kingston  KVR800D2E6/1G, 1GB DDR2-800 (Unbuffered, ECC) with Hynix
Kingston  KVR667D2E5/2G, 2GB DDR2-667 (Unbuffered, ECC) with Hynix
Kingston  KVR800D2E6/2G, 2GB DDR2-800 (Unbuffered, ECC) with ELPIDA
Unigen    UG12T7200L8DU-5AM, 1GB DDR2-533 (Unbuffered, ECC) with Elpida
Unigen    UG12T7200L8DR-5AC, 1GB DDR2-533 (Registered, ECC) with Elpida
Unigen    UG12T7200M8DU-5AL, 1GB DDR2-533 (Unbuffered, ECC) with Hynix
Unigen    UG12T7200L8DU-5AM, 1GB DDR2-533 (Unbuffered, ECC) with Hynix
Unigen    UG25T7200M8DU-5AM, 2GB DDR2-533 (Unbuffered, ECC) with Micron
Unigen    UG64T7200L8DU-6AL, 512MB DDR2-667 (Unbuffered, ECC) with Elpida
Unigen    UG12T7200L8DU-6AM, 1GB DDR2-667 (Unbuffered, ECC) with Hynix
Unigen    UG12T7200M8DU-6AK, 1GB DDR2-667 (Unbuffered, ECC, Low profile) with Hynix
Unigen    UG25T7200M8DU-6AMe, 2GB DDR2-667 (Unbuffered, ECC) with Hynix
Unigen    UG25T7200M8DU-6AK, 2GB DDR2-667 (Unbuffered, ECC, Low profile) with Hynix
•	FC HBA card

Vendor     Model
Brocade    410 (PCI-Express, 2.5 GHz, 4 Gb/s, 1 port, LC style pluggable SFP, multimode optics 850nm) + Finisar FTLF8524P2BNL
LSI Logic  LSI7204XP-LC (PCI-X, 4 Gb/s, 2 ports, LC style pluggable SFP, multimode optics 850nm) + Picolight PLRXPL-VE-SG4-26
LSI Logic  LSI7104EP-LC (PCI-Express, 4 Gb/s, 1 port, LC style pluggable SFP, multimode optics 850nm) + Finisar FTLF8524P2BNL
LSI Logic  LSI7204EP-LC (PCI-Express, 4 Gb/s, 2 ports, LC style pluggable SFP, multimode optics 850nm) + Finisar FTLF8524P2BNL
QLogic     QLA2462 (PCI-X 2.0, 266MHz, 4 Gb/s, 2 ports, LC style SFF, multimode optics 850nm) + Finisar FTLF8524E2KNL
QLogic     QLE2462 (PCI-Express, 2.5 GHz, 4 Gb/s, 2 ports, LC style SFF, multimode optics 850nm) + Finisar FTLF8524E2KNL
•	FC GBIC

Vendor     Model
Avago      AFBR-57R5APZ (4.25 Gb/s SFP transceiver, 850nm)
Finisar    FTLF8524P2BNL (4.25 Gb/s SFP transceiver, 850nm)
JDSU       JSH-42S4DB3 (4.25 Gb/s SFP transceiver, 850nm)
Picolight  PLRXPL-VE-SG4-26 (4.25 Gb/s SFP transceiver, 850nm)
•	FC Switch

Vendor   Model
Brocade  BR-200E
•	Hard drive

SAS 3.5”
Vendor   Model
Hitachi  Ultrastar 15K147, HUS151436VLS300, 36GB, 15000RPM, SAS 3.0Gb/s, 16M
Hitachi  Ultrastar 15K300, HUS153073VLS300, 73GB, 15000RPM, SAS 3.0Gb/s, 16M (F/W: A410)
Seagate  Cheetah 15K.4, ST336754SS, 36.7GB, 15000RPM, SAS 3.0Gb/s, 8M
Seagate  Cheetah 15K.5, ST373455SS, 73.4GB, 15000RPM, SAS 3.0Gb/s, 16M
Seagate  Cheetah 15K.5, ST3146855SS, 146.8GB, 15000RPM, SAS 3.0Gb/s, 16M
Seagate  Cheetah 15K.6, ST3450856SS, 450GB, 15000RPM, SAS 3.0Gb/s, 16M (F/W: 003)
Seagate  Cheetah NS, ST3400755SS, 400GB, 10000RPM, SAS 3.0Gb/s, 16M
Seagate  Barracuda ES.2, ST31000640SS, 1TB, 7200RPM, SAS 3.0Gb/s, 16M (F/W: 0002)
Seagate  Cheetah NS.2, ST3600002SS, 600GB, 10000RPM, SAS 2.0 6.0Gb/s, 16M (F/W: 0004)
Seagate  Cheetah 15K.7, ST3600057SS, 600GB, 15000RPM, SAS 2.0 6.0Gb/s, 16M (F/W: 0004)
Seagate  Constellation ES, ST31000424SS, 1TB, 7200RPM, SAS 2.0 6.0Gb/s, 16M (F/W: 0005)
Seagate  Constellation ES, ST32000444SS, 2TB, 7200RPM, SAS 2.0 6.0Gb/s, 16M (F/W: 0005)
SAS 2.5”

Vendor   Model
Seagate  Savvio 10K.3, ST9300603SS, 300GB, 10000RPM, SAS 2.0 6.0Gb/s, 16M (F/W: 0003)
Seagate  Savvio 15K.2, ST9146852SS, 147GB, 15000RPM, SAS 2.0 6.0Gb/s, 16M (F/W: 0002)
Seagate  Constellation, ST9500430SS, 500GB, 7200RPM, SAS 2.0 6.0Gb/s, 16M (F/W: 0001)
SATA 3.5”

Vendor           Model
Hitachi          Deskstar 7K250, HDS722580VLSA80, 80GB, 7200RPM, SATA, 8M
Hitachi          Deskstar E7K500, HDS725050KLA360, 500GB, 7200RPM, SATA II, 16M
Hitachi          Deskstar 7K80, HDS728040PLA320, 40GB, 7200RPM, SATA II, 2M
Hitachi          Deskstar T7K500, HDT725032VLA360, 320GB, 7200RPM, SATA II, 16M
Hitachi          Deskstar P7K500, HDP725050GLA360, 500GB, 7200RPM, SATA II, 16M (F/W: K2A0AD1A)
Hitachi          Deskstar E7K1000, HDE721010SLA330, 1TB, 7200RPM, SATA 3.0Gb/s, 32M, NCQ (F/W: ST60A3AA)
Hitachi          UltraStar A7K2000, HUA722020ALA330, 2TB, 7200RPM, SATA 3.0Gb/s, 32M, NCQ (F/W: JKAOA20N)
Maxtor           DiamondMax Plus 9, 6Y080M0, 80GB, 7200RPM, SATA, 8M
Maxtor           DiamondMax 11, 6H500F0, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Samsung          SpinPoint P80, HDSASP0812C, 80GB, 7200RPM, SATA, 8M
Seagate          Barracuda 7200.7, ST380013AS, 80GB, 7200RPM, SATA 1.5Gb/s, 8M
Seagate          Barracuda 7200.7, ST380817AS, 80GB, 7200RPM, SATA 1.5Gb/s, 8M, NCQ
Seagate          Barracuda 7200.8, ST3400832AS, 400GB, 7200RPM, SATA 1.5Gb/s, 8M, NCQ
Seagate          Barracuda 7200.9, ST3500641AS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M, NCQ
Seagate          Barracuda 7200.11, ST3500320AS, 500GB, 7200RPM, SATA 3.0Gb/s, 32M, NCQ
Seagate          Barracuda 7200.11, ST31000340AS, 1TB, 7200RPM, SATA 3.0Gb/s, 32M, NCQ
Seagate          Barracuda 7200.11, ST31500341AS, 1.5TB, 7200RPM, SATA 3.0Gb/s, 32M, NCQ (F/W: SD17)
Seagate          NL35.2, ST3400633NS, 400GB, 7200RPM, SATA 3.0Gb/s, 16M
Seagate          NL35.2, ST3500641NS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Seagate          Barracuda ES, ST3500630NS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Seagate          Barracuda ES, ST3750640NS, 750GB, 7200RPM, SATA 3.0Gb/s, 16M
Seagate          Barracuda ES.2, ST31000340NS, 1TB, 7200RPM, SATA 3.0Gb/s, 32M (F/W: SN06)
Seagate          SV35.5, ST3500410SV, 500GB, 7200RPM, SATA 3.0Gb/s, 16M, NCQ (F/W: CV11)
Seagate          Constellation ES, ST31000524NS, 1TB, 7200RPM, SATA 3.0Gb/s, 32M, NCQ (F/W: SN11)
Western Digital  Caviar SE, WD800JD, 80GB, 7200RPM, SATA 3.0Gb/s, 8M
Western Digital  Caviar SE, WD1600JD, 160GB, 7200RPM, SATA 1.5Gb/s, 8M
Western Digital  Caviar RE2, WD4000YR, 400GB, 7200RPM, SATA 1.5Gb/s, 16M, NCQ
Western Digital  Caviar RE16, WD5000AAKS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Western Digital  RE2, WD4000YS, 400GB, 7200RPM, SATA 3.0Gb/s, 16M
Western Digital  RE2, WD5000ABYS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M, NCQ
Western Digital  RE2-GP, WD1000FYPS, 1TB, 7200RPM, SATA 3.0Gb/s, 16M
Western Digital  RE3, WD1002FBYS, 1000GB, 7200RPM, SATA 3.0Gb/s, 32M, NCQ (F/W: 03.00C05)
Western Digital  RE4, WD2002FYPS, 2TB, IntelliPower, SATA 3.0Gb/s, 64M, NCQ (F/W: 04.05G04)
Western Digital  RE4-GP, WD2002FYPS, 2TB, IntelliPower, SATA 3.0Gb/s, 64M, NCQ (F/W: 04.01G01)
Western Digital  RE4, WD2003FYYS, 2TB, 7200RPM, SATA 3.0Gb/s, 64M, NCQ (F/W: 01.01D01)
Western Digital  RE4, WD1003FBYX, 1TB, 7200RPM, SATA 3.0Gb/s, 64M, NCQ (F/W: 01.01V01)
Western Digital  RE4, WD5003ABYX, 500GB, 7200RPM, SATA 3.0Gb/s, 64M, NCQ (F/W: 01.01S01)
Western Digital  Raptor, WD360GD, 36.7GB, 10000RPM, SATA 1.5Gb/s, 8M
Western Digital  VelociRaptor, WD3000HLFS, 300GB, 10000RPM, SATA 3.0Gb/s, 16M (F/W: 04.04V01)
SATA 2.5”

Vendor   Model
Seagate  Constellation, ST9500530NS, 500GB, 7200RPM, SATA 3.0Gb/s, 32M (F/W: SN02)
System information

SW version: 1.0.8p2