Netstor
NR760A / NR340A / NR330A
iSCSI GbE to SATA II RAID Storage
User Manual
Version 1.0 (May, 2008)
Preface
About this manual
This manual introduces Netstor’s NR760A / NR340A / NR330A
iSCSI GbE RAID storage solutions and aims to help users learn the
operations of the disk array system easily. Information contained in this
manual has been reviewed for accuracy, but not for product warranty, because
of the variety of environments, operating systems, and settings. Information and
specifications are subject to change without further notice. For updated
information, please visit www.netstor.com.tw or your contact window.
Copyright © 2008, Netstor Technology, Inc. All rights reserved.
Thank you for using Netstor Technology, Inc. products; if you have any
question, please e-mail “services@netstor.com.tw”. We will answer your
question as soon as possible.
Product Description
Panel layout
1. Power LED
2. Mute Button
   Resets the buzzer beeping
3. Temperature LED
   Normal – Green; Over 55°C – Red
4. Fan Status LED
   Normal – Green; Fail – Red (fan rpm too slow or stopped)
5. LCM
6. Back (Control button)
7. Up (Control button)
8. Enter (Control button)
9. Down (Control button)
10. HDD Power LED
11. HDD Status LED
12. Power Cord Receptacle
*CH 0, CH 1
   Gigabit Ethernet ports
*RJ 45
   Ethernet port for management
*RS 232
   Console port
Disk Installation
Install each Hard Drive into the Drive Trays and fasten using the supplied HDD screws.
System Connection
1. Connect CH0, CH1 to GbE switching ports for data transmission.
2. Connect RJ45 to the Ethernet port for management.
Table of Contents
Chapter 1  RAID introduction ................................................ 7
1.1  Features .............................................................. 7
1.2  Terminology ........................................................... 8
1.3  RAID levels .......................................................... 10
1.4  Volume relationship diagram .......................................... 12
Chapter 2  Getting started ................................................ 13
2.1  Before starting ...................................................... 13
2.2  iSCSI introduction ................................................... 13
2.3  Management methods ................................................... 15
2.3.1  Web GUI ............................................................ 16
2.3.2  Remote control – secure shell ...................................... 16
2.4  Enclosure ............................................................ 17
2.4.1  LCM ................................................................ 17
2.4.2  System buzzer ...................................................... 20
2.4.3  LED ................................................................ 20
Chapter 3  Web GUI guideline .............................................. 22
3.1  Web GUI hierarchy .................................................... 22
3.2  Login ................................................................ 23
3.3  Quick install ........................................................ 24
3.4  System configuration ................................................. 26
3.4.1  System name ........................................................ 27
3.4.2  IP address ......................................................... 27
3.4.3  Language ........................................................... 28
3.4.4  Login config ....................................................... 28
3.4.5  Password ........................................................... 29
3.4.6  Date ............................................................... 29
3.4.7  Mail ............................................................... 30
3.4.8  SNMP ............................................................... 31
3.4.9  Messenger .......................................................... 32
3.4.10  System log server ................................................. 32
3.4.11  Event log ......................................................... 33
3.5  iSCSI config ......................................................... 34
3.5.1  Entity property .................................................... 35
3.5.2  NIC ................................................................ 35
3.5.3  Node ............................................................... 36
3.5.4  Session ............................................................ 38
3.5.5  CHAP account ....................................................... 39
3.6  Volume configuration ................................................. 39
3.6.1  Physical disk ...................................................... 40
3.6.2  Volume group ....................................................... 43
3.6.3  User data volume ................................................... 45
3.6.4  Cache volume ....................................................... 47
3.6.5  Logical unit number ................................................ 48
3.6.6  Example ............................................................ 50
3.7  Enclosure management ................................................. 60
3.7.1  SES configuration .................................................. 61
3.7.2  Hardware monitor ................................................... 62
3.7.3  Hard drive S.M.A.R.T. support ...................................... 63
3.8  System maintenance ................................................... 64
3.8.1  Upgrade ............................................................ 65
3.8.2  Info ............................................................... 65
3.8.3  Reset to default ................................................... 66
3.8.4  Config import & export ............................................. 66
3.8.5  Shutdown ........................................................... 67
3.9  Logout ............................................................... 67
Chapter 4  Advanced operation ............................................. 68
4.1  Rebuild .............................................................. 68
4.2  VG migration and expansion ........................................... 70
4.3  UDV Extension ........................................................ 73
4.4  Disk roaming ......................................................... 74
4.5  Support Microsoft MPIO and MC/S ...................................... 74
Appendix .................................................................. 76
A.  Certification list .................................................... 76
B.  Event notifications ................................................... 78
C.  Known issues .......................................................... 84
D.  Microsoft iSCSI Initiator ............................................. 84
E.  Installation steps for large volume (TB) .............................. 86
F.  MPIO and MC/S setup instructions ...................................... 91
Chapter 1 RAID introduction
1.1 Features
Netstor iSCSI series is a high-performance RAID solution including the
following models:

o NR760A: Desktop 8-bay iSCSI (host) to SATA II (disk) RAID storage
o NR340A: 2U 8-bay iSCSI (host) to SATA II (disk) RAID storage
o NR330A: 3U 16-bay iSCSI (host) to SATA II (disk) RAID storage

Netstor iSCSI storage solution features:

• Two front-end GbE NIC ports with load balancing & failover for high availability.
• iSCSI jumbo frame support.
• RAID 0, 1, 0+1, 3, 5, 6, 50, 60, & JBOD ready.
• SATA II drive backward compatible.
• One logical volume can be shared by as many as 32 hosts.
• Host access control.
• Configurable N-way mirror for high data protection.
• On-line volume migration with no system downtime.
• HDD S.M.A.R.T. enabled for SATA drives.
• Global/dedicated cache configurable by volume.

With proper configuration, Netstor products can provide non-stop service with
a high degree of fault tolerance by using RAID technology and advanced
array management features. Should you have any question, please feel free
to contact your local sales representative or send email directly to
“services@netstor.com.tw”.

The iSCSI GbE enclosure connects to the host system via the iSCSI interface
and can be configured to any RAID level. The controller provides reliable
data protection for servers, including RAID 6. RAID 6 allows two HDD failures
without any impact on the existing data; data can be recovered from the
remaining member disks and parity drives.
Netstor’s iSCSI solution is the most cost-effective disk array controller with
completely integrated high-performance and data-protection capabilities that
meet or exceed the highest industry standards, and the best data solution
for small/medium business (SMB) users.
1.2 Terminology
The document uses the following terms:
RAID
RAID is the abbreviation of “Redundant Array of Independent Disks”. There
are different RAID levels with different degrees of data protection, data
availability, and performance for the host environment.

PD
Physical Disk. A member disk of one specific volume group.

VG
Volume Group. A collection of removable media. One VG consists of a set of
UDVs and owns one RAID level attribute.

UDV
User Data Volume. Each VG can be divided into several UDVs. The UDVs from
one VG share the same RAID level, but may have different volume capacities.

CV
Cache Volume. The controller uses onboard memory as cache. All RAM (except
the part occupied by the controller itself) can be used as cache.

LUN
Logical Unit Number. A logical unit number (LUN) is a unique identifier
used to differentiate among separate devices (each one is a logical unit).

GUI
Graphic User Interface.

RAID width, RAID copy, RAID row (RAID cells in one row)
RAID width, copy, and row are used to describe one VG. E.g.:
1. One 4-disk RAID 0 volume: RAID width = 4; RAID copy = 1; RAID row = 1.
2. One 3-way mirroring volume: RAID width = 1; RAID copy = 3; RAID row = 1.
3. One RAID 10 volume over three 4-disk RAID 1 volumes: RAID width = 1;
   RAID copy = 4; RAID row = 3.
WT
Write-Through cache-write policy. A caching technique in which the
completion of a write request is not signaled until the data is safely
stored on non-volatile media. Data is kept synchronized between the data
cache and the accessed physical disks.

WB
Write-Back cache-write policy. A caching technique in which the completion
of a write request is signaled as soon as the data is in cache; the actual
write to non-volatile media occurs at a later time. It speeds up system
write performance but bears the risk that data may be inconsistent between
the data cache and the physical disks for a short time interval.

RO
Set the volume to be Read-Only.

DS
Dedicated Spare disks. Spare disks used only by one specific VG. Other VGs
cannot use these dedicated spare disks for any rebuilding purpose.

GS
Global Spare disks. A GS is shared for rebuilding purposes. If some VG
needs spare disks for rebuilding, it can take them from the common spare
disk pool.

DC
Dedicated Cache.

GC
Global Cache.

DG
DeGraded mode. Not all of the array’s member disks are functioning, but the
array is able to respond to application read and write requests to its
virtual disks.

SCSI
Small Computer Systems Interface.

iSCSI
Internet Small Computer Systems Interface.

S.M.A.R.T.
Self-Monitoring Analysis and Reporting Technology.

WWN
World Wide Name.

HBA
Host Bus Adapter.

SAF-TE
SCSI Accessed Fault-Tolerant Enclosures.

NIC
Network Interface Card.

LACP
Link Aggregation Control Protocol.

MPIO
Multi-Path Input/Output.

MC/S
Multiple Connections per Session.

MTU
Maximum Transmission Unit.

CHAP
Challenge Handshake Authentication Protocol. An optional security mechanism
to control access to an iSCSI storage system over the iSCSI data ports.

iSNS
Internet Storage Name Service.
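The difference between the WT and WB policies above can be sketched in a few lines of Python (a toy model for illustration only; the `Backend` class stands in for the physical disks and is not part of the Netstor firmware):

```python
class Backend:
    """Stands in for the physical disks (non-volatile media)."""
    def __init__(self):
        self.data = {}

class CacheVolume:
    """Toy model of the WT / WB cache-write policies."""
    def __init__(self, backend, write_back=False):
        self.backend = backend
        self.write_back = write_back
        self.cache = {}
        self.dirty = set()

    def write(self, block, value):
        self.cache[block] = value
        if self.write_back:
            # WB: acknowledge now, persist later (the inconsistency window).
            self.dirty.add(block)
        else:
            # WT: persist to non-volatile media before acknowledging.
            self.backend.data[block] = value

    def flush(self):
        """Push dirty blocks to disk (what a clean shutdown does)."""
        for block in self.dirty:
            self.backend.data[block] = self.cache[block]
        self.dirty.clear()
```

With `write_back=True`, data written but not yet flushed exists only in cache, which is exactly the short inconsistency window the WB definition warns about.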
1.3 RAID levels

RAID 0
Disk striping. RAID 0 needs at least one hard drive.

RAID 1
Disk mirroring over two disks. RAID 1 needs at least two hard drives.

N-way mirror
Extension to RAID 1. It keeps N copies of the disk.

RAID 3
Striping with parity on a dedicated disk. RAID 3 needs at least three hard
drives.

RAID 5
Striping with interspersed parity over the member disks. RAID 5 needs at
least three hard drives.

RAID 6
2-dimensional parity protection over the member disks. RAID 6 needs at
least four hard drives.

RAID 0+1
Mirroring of the member RAID 0 volumes. RAID 0+1 needs at least four hard
drives.

RAID 10
Striping over the member RAID 1 volumes. RAID 10 needs at least four hard
drives.

RAID 30
Striping over the member RAID 3 volumes. RAID 30 needs at least six hard
drives.

RAID 50
Striping over the member RAID 5 volumes. RAID 50 needs at least six hard
drives.

RAID 60
Striping over the member RAID 6 volumes. RAID 60 needs at least eight hard
drives.

JBOD
The abbreviation of “Just a Bunch Of Disks”. JBOD needs at least one hard
drive.
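The minimum-drive rules above, together with the usual parity cost of each level, can be collected into a small helper (an illustrative sketch of ours; the RAID 30/50 figures assume two sub-arrays and RAID 60 assumes two RAID 6 sub-arrays, which is one common layout, not necessarily this controller's):

```python
# Minimum member disks per RAID level, as listed above.
MIN_DISKS = {
    "RAID 0": 1, "RAID 1": 2, "RAID 3": 3, "RAID 5": 3, "RAID 6": 4,
    "RAID 0+1": 4, "RAID 10": 4, "RAID 30": 6, "RAID 50": 6,
    "RAID 60": 8, "JBOD": 1,
}

def usable_disks(level: str, n: int) -> int:
    """Data-bearing disks out of n equal-size members (simplified)."""
    if n < MIN_DISKS[level]:
        raise ValueError(f"{level} needs at least {MIN_DISKS[level]} disks")
    return {
        "RAID 0": n, "JBOD": n,
        "RAID 1": 1,                        # all members are copies
        "RAID 3": n - 1, "RAID 5": n - 1,   # one disk's worth of parity
        "RAID 6": n - 2,                    # two disks' worth of parity
        "RAID 0+1": n // 2, "RAID 10": n // 2,
        "RAID 30": n - 2, "RAID 50": n - 2,  # two RAID 3/5 sub-arrays
        "RAID 60": n - 4,                    # two RAID 6 sub-arrays
    }[level]
```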
1.4 Volume relationship diagram
[Figure 1.4.1: Volume relationship diagram. LUNs (LUN 1, LUN 2, LUN 3) map
to UDVs (UDV 1, UDV 2, and a snapshot UDV); the UDVs belong to one VG
composed of PD 1, PD 2, and PD 3 plus a dedicated spare (DS); each UDV
attaches to the global CV or a dedicated CV, both allocated from RAM.]

Figure 1.4.1
This is the volume structure designed by Netstor. It describes the
relationship of the RAID components. One VG (Volume Group) consists of a
set of UDVs (User Data Volumes) and owns one RAID level attribute. Each VG
can be divided into several UDVs. The UDVs in one VG share the same RAID
level, but may have different volume capacities. Each UDV is associated
with one specific CV (Cache Volume) to execute the data transactions. Each
CV can be set to a different cache memory size by the user. A LUN (Logical
Unit Number) is a unique identifier through which users can access a UDV
via SCSI commands.
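The relationships in Figure 1.4.1 can also be modeled as a small data-structure sketch (purely illustrative; the class and field names are ours, not the controller's):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PhysicalDisk:            # PD
    slot: int
    size_gb: int

@dataclass
class UserDataVolume:          # UDV: maps to a LUN, uses a CV
    name: str
    size_gb: int
    lun: Optional[int] = None              # attached LUN, if any
    dedicated_cv_mb: Optional[int] = None  # None -> uses the global CV

@dataclass
class VolumeGroup:             # VG: one RAID level, several UDVs
    raid_level: str
    disks: List[PhysicalDisk]
    udvs: List[UserDataVolume] = field(default_factory=list)

vg = VolumeGroup("RAID 5", [PhysicalDisk(i, 200) for i in range(3)])
vg.udvs.append(UserDataVolume("UDV-1", 300, lun=0))                       # global CV
vg.udvs.append(UserDataVolume("UDV-2", 100, lun=1, dedicated_cv_mb=128))  # dedicated CV
```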
Chapter 2 Getting started
2.1 Before starting
Before starting, prepare the following items.
1. Check the “Certification list” in Appendix A to confirm the hardware
   setting is fully supported.
2. Read the latest release note before upgrading. The release note
   accompanies the released firmware.
3. A server with a NIC or iSCSI HBA.
4. CAT 5e or CAT 6 network cables for the management port and iSCSI data
   ports. CAT 6 cables are recommended for best performance.
5. Prepare a storage system configuration plan.
6. Management and iSCSI data port network information. When using static
   IP, please prepare the static IP addresses, subnet mask, and default
   gateway.
7. Gigabit LAN switches (recommended), or Gigabit LAN switches with
   VLAN/LACP/trunking functions (optional).
8. CHAP security information, including CHAP username and password
   (optional).
9. Set up the hardware connections before powering on the servers and the
   Netstor iSCSI storage. Connect the console cable, management port cable,
   and iSCSI data port cables in advance.
2.2 iSCSI introduction
iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer
System Interface) commands and data in TCP/IP packets for linking storage
devices with servers over common IP infrastructures. iSCSI provides high
performance SANs over standard IP networks like LAN, WAN or the Internet.
IP SANs (Storage Area Networks) allow multiple servers to attach to a
virtually unlimited number of storage volumes by using iSCSI over TCP/IP
networks. IP SANs can scale the storage capacity with any type and brand of
storage system. In addition, they can be used on any type of network
(Ethernet, Fast Ethernet, and Gigabit Ethernet) and with any combination of
operating systems (Microsoft Windows, Linux, Solaris, etc.) within the SAN
network. IP SANs also include mechanisms for security, data replication,
multi-pathing, and high availability.
A storage protocol such as iSCSI has “two ends” in the connection: the
initiator and the target. In iSCSI, they are called the iSCSI initiator and
the iSCSI target. The iSCSI initiator requests or initiates any iSCSI
communication; it requests all SCSI operations such as read or write. An
initiator is usually located on the host/server side (either an iSCSI HBA
or an iSCSI software initiator).
The target is the storage device itself or an appliance which controls and
serves volumes or virtual volumes. The target is the device which executes
SCSI commands or plays the role of a bridge to an attached storage device.
[Figure 2.2.1: IP SAN topology. Host 1 (initiator, with iSCSI HBA) and
Host 2 (initiator, with NIC) connect through the IP SAN to iSCSI device 1
and iSCSI device 2 (targets).]

Figure 2.2.1
The host side needs an iSCSI initiator. The initiator is a driver which handles
the SCSI traffic over iSCSI. The initiator can be software or hardware (HBA).
Please refer to the certification list of iSCSI HBAs in Appendix A.
OS-native initiators and other software initiators use the standard TCP/IP
stack and Ethernet hardware, while iSCSI HBAs use their own iSCSI and
TCP/IP stacks on board.
Hardware iSCSI HBAs provide their own initiator tools; please refer to the
vendor’s HBA user manual. Microsoft, Linux, and Mac provide iSCSI initiator
drivers. Below are the available links:

1. Link to download the Microsoft iSCSI software initiator:
   http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585-b385-befd1319f825&DisplayLang=en
   Please refer to Appendix D for the Microsoft iSCSI initiator
   installation procedure.

2. A Linux iSCSI initiator is also available. For different kernels, there
   are different iSCSI drivers. Please check Appendix A for the iSCSI
   initiator certification list. If you need the latest Linux iSCSI
   initiator, please visit the Open-iSCSI project for the most updated
   information. The Linux-iSCSI (sfnet) and Open-iSCSI projects merged on
   April 11, 2005.
   Open-iSCSI website: http://www.open-iscsi.org/
   Open-iSCSI README: http://www.open-iscsi.org/docs/README
   Google groups:
   http://groups.google.com/group/open-iscsi/threads?gvc=2
   http://groups.google.com/group/open-iscsi/topics

3. The ATTO iSCSI initiator is available for Mac.
   Website: http://www.attotech.com/xtend.html
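Before installing an initiator, it can be handy to verify that the target's iSCSI data port is reachable from the host. iSCSI targets listen on TCP port 3260 by default; the helper below is a generic sketch (the commented address is an example, not a fixed Netstor value):

```python
import socket

def iscsi_port_open(host: str, port: int = 3260, timeout: float = 2.0) -> bool:
    """Return True if the host answers TCP connections on the iSCSI port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a data port (CH 0) at an assumed address.
# print(iscsi_port_open("192.168.1.1"))
```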
2.3 Management methods
There are three management methods to manage Netstor iSCSI storage,
described in the following:
2.3.1 Web GUI

Netstor iSCSI storage supports a graphic user interface (GUI) for managing
the system. Be sure to connect the LAN cable. The default setting of the
management port IP is DHCP, and the DHCP address is displayed on the LCM;
check the LCM for the IP first, then open the browser and type the DHCP
address. (The DHCP address is dynamic, and users may need to check it again
after every reboot.) When DHCP service is not available, the controllers
use zero configuration (Zeroconf) to get an IP address.
Take this example on the LCM:
192.168.1.1
GbE iSCSI Storage
http://192.168.1.1
Clicking any function for the first time will pop up a dialog to
authenticate the current user.
Login name: admin
Default password: 1234
Or login with the read-only account, which only allows reading the
configuration but cannot change settings.
Login name: user
Default password: 1234
2.3.2 Remote control – secure shell

SSH (secure shell) is required for administrators to login from a remote
location. SSH client software is available at the following web sites:
SSHWinClient WWW: http://www.ssh.com/
Putty WWW: http://www.chiark.greenend.org.uk/
Host name: 192.168.1.1 (Please check your DHCP address for this field.)
Login name: admin
Default password: 1234
Tips
Netstor iSCSI storage only supports SSH for remote control. To use SSH,
the IP address and the password are required for login.
2.4 Enclosure
2.4.1 LCM

There are four buttons to control the Netstor LCM (LCD Control Module):
▲ (up), ▼ (down), ESC (Escape), and ENT (Enter).
After booting up the system, the following screen shows the management port
IP and the model name:
192.168.XX.XX
GbE iSCSI Storage ←
Press “ENT”; the LCM functions “Alarm Mute”, “Reset/Shutdown”, “Quick
Install”, “View IP Setting”, “Change IP Config” and “Reset to Default” will
rotate by pressing ▲ (up) and ▼ (down).
When a WARNING or ERROR event occurs (the LCM default filter), the LCM
shows the event log to give users more detail from the front panel.
The following table is the function description.
Alarm Mute
Mute the alarm when an error occurs.

Reset/Shutdown
Reset or shutdown the controller.

Quick Install
Quick steps to create a volume. Please refer to the next chapter for
operation in the web UI.

View IP Setting
Display the current IP address, subnet mask, and gateway.

Change IP Config
Set the IP address, subnet mask, and gateway. There are 2 options: DHCP
(get the IP address from a DHCP server) or static IP.

Reset to Default
Reset the password to the default (1234), and set the IP address to the
default DHCP setting.
Default IP address: 192.168.1.1 (DHCP)
Default subnet mask: 255.255.255.0
Default gateway: 192.168.1.1
The following is the LCM menu hierarchy, entered from the idle screen
(Netstor Technology ▲▼):

[Alarm Mute] → [▲Yes  No▼]

[Reset/Shutdown]
    [Reset] → [▲Yes  No▼]
    [Shutdown] → [▲Yes  No▼]

[Quick Install]
    RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1 (xxx GB)
        → [Volume Size] (xxx GB) → Adjust Volume Size
        → [Apply The Config] → [▲Yes  No▼]

[View IP Setting]
    [IP Config] ([Static IP])
    [IP Address] ([192.168.1.1])
    [IP Subnet Mask] ([255.255.255.0])
    [IP Gateway] ([192.168.010.254])

[Change IP Config]
    [DHCP] → [▲Yes  No▼]
    [Static IP]
        [IP Address] → Adjust IP address
        [IP Subnet Mask] → Adjust Submask
        [IP Gateway] → Adjust Gateway IP
        [Apply IP Setting] → [▲Yes  No▼]

[Reset to Default] → [▲Yes  No▼]
Caution
Before power off, it is better to execute “Shutdown” to flush
the data from cache to physical disks.
2.4.2 System buzzer

The system buzzer features are listed below:
1. The system buzzer alarms for 1 second when the system boots up
   successfully.
2. The system buzzer alarms continuously when an error occurs. The alarm
   stops after the error is cleared or the alarm is muted.
3. The alarm is muted automatically when the error is cleared. E.g., when a
   RAID 5 volume is degraded, the alarm rings immediately; the user then
   changes/adds one physical disk for rebuilding. When the rebuilding is
   done, the alarm is muted automatically.

2.4.3 LED
The LED features are listed below:
1. Marquee / Disk Status / Disk Rebuilding LED: the marquee, disk status,
   and disk rebuilding functions share the same LEDs. The LEDs indicate
   different functions in different stages.
   I. Marquee LEDs: while the system is booting, the marquee LEDs run;
      they stop once the system boots up successfully.
   II. Disk status LEDs: the LEDs reflect the disk status for the tray
       (on/off indication only).
   III. Disk rebuilding LEDs: the LEDs blink when the disks are being
        rebuilt.
2. Disk Access LED: hardware-activated LED when accessing disks (IO).
3. Disk Power LED: hardware-activated LED when the disks are plugged in
   and powered on.
4. System status LED: reflects the system status; when turned on, an error
   or a RAID malfunction has occurred.
5. Management LAN port LED: the GREEN LED indicates LAN transmit/receive;
   the ORANGE LED indicates LAN port 10/100 LINK.
6. BUSY LED: hardware-activated LED when the front-end channel is busy.
7. POWER LED: hardware-activated LED when the system is powered on.
Chapter 3 Web GUI guideline
3.1 Web GUI hierarchy
The table below is the hierarchy of the web GUI.

Quick Install → Step 1 / Step 2 / Step 3 / Confirm
System Config
  System name → System name
  IP address → DHCP / Static / Address / Mask / Gateway / DNS /
    HTTP port / HTTPS port / SSH port
  Language → Language
  Login config → Auto logout / Login lock
  Password → Old password / Password / Confirm
  Date → Time zone / Date / Time / NTP Server
  Mail → Mail-from address / Mail-to address / Sent events /
    SMTP relay / SMTP server / Authentication / Account /
    Password / Confirm / Send test mail
  SNMP → SNMP trap address / Community / Send events
  Messenger → Messenger IP/hostname / Send events
  System log server → Server IP/hostname / Port / Facility / Event level
  Event log → Filter / Download / Mute / Clear
iSCSI config
  Entity Property → Entity name / iSNS IP
  NIC → Aggregation / IP settings for iSCSI ports / Become
    default gateway / Set MTU
  Node → Change Authentication
  Session → Delete
  CHAP account → Create / Delete
Volume config
  Physical disk → Free disks / Global spares / Dedicated spares /
    More information / Auto Spindown
  Volume group → Create / Delete / More information / Rename / Migrate
  User data volume → Attach / Create / Delete / More information / Rename /
    Extend / Set read/write mode / Set priority
  Cache volume → Create / Delete / More information / Resize
  Logical unit → Attach / Detach
Enclosure management
  SES config → Enable / Disable
  Hardware monitor → Auto shutdown
  S.M.A.R.T. → S.M.A.R.T. information (only for SATA disks)
  UPS → UPS Type / Shutdown Battery Level / Shutdown Delay /
    Shutdown UPS
Maintenance
  Upgrade → Browse the firmware to upgrade / Export config
  Info → System information
  Reset to default → Sure to reset to factory default?
  Config import & export → Import/Export / Import file
  Shutdown → Reboot / Shutdown
Logout
  Sure to logout?
3.2 Login

Netstor iSCSI storage supports a graphic user interface (GUI) to operate
the system. Be sure to connect the LAN cable. The default IP setting is
DHCP; open the browser and enter:
http://192.168.xx.xx (Please check the DHCP address first on the LCM.)
Clicking any function for the first time will pop up a dialog for
authentication.
Login name: admin
Default password: 1234
After login, you can choose the functions listed on the left side of the
window to set up the configuration.
Figure 3.2.1
There are four indicators at the top-right corner.
Figure 3.2.2
1. RAID light: green means the RAID works well; red represents RAID
   failure.
2. Temperature light: green means normal temperature; red represents
   abnormal temperature.
3. Voltage light: green means normal voltage; red represents abnormal
   voltage.
4. UPS light: green means the UPS works well; red represents UPS failure.
3.3 Quick install

It is easy to use “Quick install” to create a volume. Depending on how many
physical disks and how much residual space are free, the system calculates
the maximum space for RAID levels 0/1/3/5/6/0+1. “Quick install” occupies
all residual VG space for one UDV.
“Quick Install” has a smart policy. When some HDDs are inserted in the
system, “Quick Install” lists all possibilities and sizes for the different
RAID levels; it will use all available HDDs for the chosen RAID level. When
the system has different sizes of HDDs, e.g., 8*200G and 8*80G, it lists
all possibilities and combinations for the different RAID levels and
different sizes. After the user sets the RAID level, the user may find that
some HDDs are still available (free status). This is the result of the
smart policy designed by Netstor. It gives the user:
1. The biggest capacity of the chosen RAID level.
2. The fewest number of disks for the RAID level / volume size.
E.g., the user chooses RAID 5 and the Netstor iSCSI storage has 12*200G +
4*80G HDDs inserted. If all 16 HDDs are used for a RAID 5, the maximum
volume size is 1200G (80G*15). With the wizard, a smarter check finds the
most efficient way of using the HDDs: the wizard uses only the 200G HDDs
(volume size 200G*11 = 2200G), so the volume size is bigger and the HDD
capacity is fully used.
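The arithmetic behind this example follows a one-line rule: a RAID 5 volume's capacity is the smallest member disk times (N - 1). A quick sketch (ours, for illustration):

```python
def raid5_usable_gb(disk_sizes_gb):
    """RAID 5 usable capacity: smallest member disk times (N - 1)."""
    n = len(disk_sizes_gb)
    if n < 3:
        raise ValueError("RAID 5 needs at least three disks")
    return min(disk_sizes_gb) * (n - 1)

mixed = [200] * 12 + [80] * 4          # 12 x 200G + 4 x 80G
print(raid5_usable_gb(mixed))          # all 16 disks: 80 * 15 = 1200
print(raid5_usable_gb([200] * 12))     # wizard's pick: 200 * 11 = 2200
```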
Step 1: Select “Quick install” and then choose the RAID level. After the
RAID level is chosen, click the next-step button. It will then link to the
next page.
Figure 3.3.1
Step 2: Please select a LUN number. Access control of the host shows as a
wildcard “*”, which means every host can access this volume. On this page,
the “Volume size” can be changed. The default value is the maximum volume
size. When adjusting the size, note that it must be less than or equal to
the maximum volume size. Then click the next-step button.
Step 3: Confirm page. Click the confirm button if all configurations are
correct. A UDV will then be created.
Done. You can start to use the system now.
Figure 3.3.2
(Figure 3.3.2: A RAID 0 user data volume with the UDV name “QUICK68809”,
named by the system itself, with a total available volume size of 609GB.)
3.4 System configuration
“System config” is designed for setting up “System name”, “IP address”,
“Language”, “Login config”, “Password”, “Date”, “Mail”, “SNMP”,
“Messenger”, and “System log server”, and for viewing the “Event log”.
Figure 3.4.1
3.4.1 System name

“System name” allows users to change the system name. The default system
name is composed of the model name and the serial number of the system,
e.g.: P200CA00001.
Figure 3.4.1.1
3.4.2 IP address

“IP address” allows users to change the IP address for remote
administration. There are 2 options: DHCP (get the IP address from a DHCP
server) or static IP. The default setting is DHCP. Users can change the
HTTP, HTTPS, and SSH port numbers when the default port number is not
allowed on the host/server.
Figure 3.4.2.1
3.4.3 Language

“Language” allows users to set the language shown in the Web UI. The option
“Auto Detect” sets the language according to the browser’s language
setting.
Figure 3.4.3.1
3.4.4 Login config

“Login config” allows users to set single-admin mode and the auto logout
time. Single-admin mode prevents multiple users from accessing the same
controller at the same time.
1. Auto logout: the options are (1) Disable; (2) 5 minutes; (3) 30
   minutes; (4) 1 hour. The system logs out automatically when the user
   has been idle for the specified period of time.
2. Login lock: Disable/Enable. When the login lock is enabled, the system
   allows only one user to login or modify system settings at a time.
Figure 3.4.4.1
3.4.5 Password

“Password” allows users to change the administrator password. The maximum
length of the admin password is 12 characters.
Figure 3.4.5.1
3.4.6 Date

“Date” allows users to set up the current date, time, and time zone before
use, or to synchronize the time from an NTP (Network Time Protocol) server.
Figure 3.4.6.1
3.4.7 Mail

“Mail” allows users to set up to 3 mail addresses to receive event
notifications. Some mail servers check the “Mail-from address” and need
authentication for anti-spam. Please fill in the necessary fields and click
“Send test mail” to test whether the email functions are available. Users
can also select which levels of event logs are to be sent via mail. The
default setting only enables ERROR and WARNING event logs.
Figure 3.4.7.1
3.4.8 SNMP

“SNMP” allows users to set up SNMP traps for alerting via SNMP. It allows
up to 3 SNMP trap addresses. The default community setting is “public”.
Users can choose the event log level; the default setting only enables the
INFO event log for SNMP.
Figure 3.4.8.1
There are many SNMP tools. The following web sites are for your reference:
SNMPc: http://www.snmpc.com/
Net-SNMP: http://net-snmp.sourceforge.net/
3.4.9 Messenger

To use “Messenger”, users must enable the “Messenger” service in Windows
(Start → Control Panel → Administrative Tools → Services → Messenger);
event logs can then be received. It allows up to 3 messenger addresses.
Users can choose the event log levels; the default setting enables the
WARNING and ERROR event logs.
Figure 3.4.9.1
3.4.10 System log server

Using “System log server”, users can choose the facility and the event log
level. The default syslog port is 514. The default setting enables the
event levels INFO, WARNING, and ERROR.
Figure 3.4.10.1
There are some syslog server tools. The following web sites are for your
reference:
WinSyslog: http://www.winsyslog.com/
Kiwi Syslog Daemon: http://www.kiwisyslog.com/
Most UNIX systems have a built-in syslog daemon.
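For reference, the priority number that prefixes each classic BSD-syslog message is computed from the facility and severity chosen on this page. A small Python sketch (illustrative only; the exact message format the controller emits is an assumption):

```python
# Severity codes from the BSD syslog convention for the three
# event levels this page can enable.
SEVERITY = {"ERROR": 3, "WARNING": 4, "INFO": 6}

def syslog_pri(facility: int, level: str) -> str:
    # PRI = facility * 8 + severity, wrapped in angle brackets
    return f"<{facility * 8 + SEVERITY[level]}>"

def format_message(facility: int, level: str, text: str) -> str:
    return syslog_pri(facility, level) + text
```

For example, facility 16 (local0) with a WARNING event yields the prefix `<132>`.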
3.4.11 Event log
“Event log” displays the event messages. Click the “Filter” button to choose
the level of event logs to display. Clicking the “Download” button saves the whole
event log as a text file named “log-ModelName-SerialNumber-DateTime.txt” (e.g., log-P200C-A00001-20070801-120000.txt). Clicking the “Clear”
button clears the event log. Clicking the “Mute” button stops the alarm when the system alerts.
Figure 3.4.11.1
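The downloaded file name encodes the model, serial number, date, and time. A Python sketch of how such a name could be split back into its fields (assuming, as in the example above, that the model and serial contain no hyphens):

```python
def parse_log_filename(name: str) -> dict:
    # "log-<Model>-<Serial>-<YYYYMMDD>-<HHMMSS>.txt"
    stem = name[: -len(".txt")]
    tag, model, serial, date, time = stem.split("-")
    assert tag == "log"
    return {"model": model, "serial": serial, "date": date, "time": time}
```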
Event log display can be customized; there are three display
methods: the Web UI/Console event log page, popup windows on the Web UI, and
the LCM. By default, WARNING and ERROR
event logs are displayed on the Web UI and LCM, and the
popup function is disabled.
Figure 3.4.11.2
The event log is displayed in reverse order, which means the latest event is
on the first page. Event logs are saved on the first four hard drives;
each hard drive holds one copy. For one Netstor iSCSI
Storage system, there are four copies of the event log, so users can check
the event log at any time even when disks have failed.
Tips
Please plug in at least one of the first four hard drives; event logs
can then be saved and displayed at the next system boot-up.
3.5 iSCSI config
“iSCSI config” is designed for setting up the “Entity Property”, “NIC”,
“Node”, “Session”, and “CHAP account”.
Figure 3.5.1
3.5.1 Entity property
“Entity property” allows users to view the entity name of the Netstor iSCSI
Storage and to set up the “iSNS IP” for iSNS (Internet Storage Name Service). The iSNS
protocol allows automated discovery, management, and configuration of iSCSI
devices on a TCP/IP network. To use iSNS, an iSNS server must be installed in the
SAN. Add the iSNS server IP address to the iSNS server list so that the iSCSI
initiator service can send queries.
Figure 3.5.1.1
3.5.2 NIC
“NIC” allows users to change the IP addresses of the iSCSI data ports. The NR760A /
NR340A / NR330A has two gigabit LAN ports for data transmission.
Figure 3.5.2.2
Figure 3.5.2.3
(Figure 3.5.2.3: NR760A/NR340A/NR330A, there are 2 iSCSI data ports. Each of them is
set to static IP.)
IP settings:
Users can change the IP address by clicking the button in the “DHCP”
column. There are 2 selections: DHCP (get the IP address from a DHCP server) or
static IP.
Figure 3.5.2.5
Default gateway:
The default gateway can be changed by clicking the button in the
“Gateway” column. There is only one default gateway.
MTU / Jumbo frame:
The MTU (Maximum Transmission Unit) size can be changed by clicking the
button in the “MTU” column.
Caution
The same MTU size must be enabled on the switching hub and on the
HBA of the host. Otherwise, the LAN connection cannot work properly.
3.5.3 Node
“Node” allows users to view the target name for the iSCSI initiator. The NR760A /
NR340A / NR330A supports a single node. The node name exists by default.
Figure 3.5.3.1
(Figure 3.5.3.1: NR760A / NR340A / NR330A, single-mode.)
CHAP:
CHAP stands for Challenge Handshake Authentication Protocol.
CHAP is a strong authentication method used in point-to-point connections for user login.
It is a type of authentication in which the authentication server sends the client
a challenge, which the client combines with the shared secret to compute a one-way hash.
CHAP allows the credentials to be verified without ever transmitting the secret in the clear.
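To illustrate why the secret never crosses the wire: per the CHAP specification (RFC 1994), the client answers a random challenge with an MD5 hash over the identifier, the shared secret, and the challenge, and the target verifies by recomputing the same hash. A Python sketch (illustrative, not the firmware's implementation):

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier byte || shared secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier: int, secret: bytes, challenge: bytes, response: bytes) -> bool:
    # The target recomputes the hash; only the challenge and the hash
    # travel over the network, never the secret itself.
    return chap_response(identifier, secret, challenge) == response
```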
To use CHAP authentication in the NR760A / NR340A / NR330A, please follow
these procedures:
1. Click the button in the “Auth” column.
2. Select “CHAP”.
Figure 3.5.3.7
3. Click the confirm button.
Figure 3.5.3.8
4. Go to the “/ iSCSI config / CHAP” page to create a CHAP account.
Please refer to the next section for more detail.
5. In “/ iSCSI config / Node / Change Authentication”, select
“None” to disable CHAP.
Tips
After setting CHAP, the initiator on the host/server should be set
with the same CHAP account and secret, or the login will fail.
3.5.4 Session
“Session” displays iSCSI session and connection information, including
the following items:
1. Host (Initiator Name)
2. Error Recovery Level
3. Error Recovery Count
4. Details of the authentication status and Source IP: port number.
Figure 3.5.4.1
(Figure 3.5.4.1: iSCSI Session.)
Clicking the button will display the connection(s).
Figure 3.5.4.2
(Figure 3.5.4.2: iSCSI Connection.)
3.5.5 CHAP account
“CHAP account” allows users to manage a CHAP account for authentication.
Only one account is allowed.
To set up a CHAP account, please follow these procedures:
1. Click the create button.
2. Enter “User” and “Secret”, and enter the secret again in “Confirm”.
Figure 3.5.5.3
3. Click the confirm button.
Figure 3.5.5.4
(Figure 3.5.5.4: Netstor iSCSI Storage, create a CHAP account named “chap1”.)
4. Click the delete button to delete the CHAP account.
3.6 Volume configuration
“Volume config” is designed for setting up the volume configurations,
including “Physical disk”, “Volume group”, “User data volume”, “Cache
volume”, and “Logical unit”.
Figure 3.6.1
3.6.1 Physical disk
“Physical disk” shows the status of the hard drives in the system. The following
are operation tips:
1. Multiple selection: select one or more checkboxes in front of the slot
numbers, or select the checkbox at the top left corner to
select all slots. Checking it again deselects all.
2. The list disappears if there is no VG, or only VGs of RAID 0 or
JBOD exist, because these RAID levels cannot have a dedicated
spare disk.
3. The three functions “Free disks”, “Global spares”, and
“Dedicated spares” all accept multiple selections.
4. The instructions on the other web pages (e.g., the volume config VG, UDV,
CV, and LUN pages) are the same as in the previous steps.
Figure 3.6.1.1
(Figure 3.6.1.1: Physical disks of slot 1,2,3,4 are created for a VG named “VG-R0”.
Physical disks of slot 6,7,8,9 are created for a VG named “VG-R6”. Slot 11 is set as
dedicated spare disk of VG named “VG-R6”. The others are free disks.)
•
PD column description:
Slot: The position of hard drives. Slot numbers begin from left to
right at the front side. The button next to the slot number is
“More Information”; it shows the details of the hard drive.
WWN: World Wide Name.
Size (GB): Capacity of the hard drive.
VG Name: Related volume group name.
Status: The status of the hard drive.
“GOOD” → the hard drive is good.
“DEFECT” → the hard drive has bad blocks.
“FAIL” → the hard drive cannot work in the respective volume.
Status 1:
“RD” → RAID Disk. This hard drive has been set to a RAID.
“FR” → FRee disk. This hard drive is free for use.
“DS” → Dedicated Spare. This hard drive has been set as the
dedicated spare of a VG.
“GS” → Global Spare. This hard drive has been set as a global
spare for all VGs.
“RS” → ReServed. The hard drive contains VG information but
cannot be used. This may be caused by an incomplete VG set,
or by hot-plugging the disk at run time. In order to protect the
data on the disk, the status changes to reserved. It can be
reused after setting it to “FR” manually.
Status 2:
“R” → Rebuild. The hard drive is rebuilding.
“M” → Migration. The hard drive is migrating.
Speed:
3.0G → From the SATA ATAPI standard: the disk supports the
ATAPI IDENTIFY PACKET DEVICE command, and its speed can
achieve Serial ATA Gen-2 signaling speed (3.0Gbps).
1.5G → From the SATA ATAPI standard: the disk supports the
ATAPI IDENTIFY PACKET DEVICE command, and its speed can
achieve Serial ATA Gen-1 signaling speed (1.5Gbps).
Unknown → The disk doesn’t support the above command, so
the speed is defined as unknown.
•
PD operations description:
Free disks: Make the selected hard drives free for use.
Global spares: Set the selected hard drive(s) as global spares for all VGs.
Dedicated spares: Set the selected hard drive(s) as dedicated spares of the selected VG.
On this page, Netstor iSCSI Storage also provides HDD auto spin-down
to save power. The default setting is disabled. Users can also set this up on the
physical disk page.
Figure 3.6.1.2
Figure 3.6.1.3
3.6.2 Volume group
“Volume group” allows users to view the status of each volume group.
•
VG column description:
Figure 3.6.2.1
(Figure 3.6.2.1: There is a RAID 0 with 4 physical disks, named “VG-R0”; the total size is
297GB, the free size is 267GB, related to 1 UDV. Another is a RAID 6 with 4 physical disks,
named “VG-R6”.)
No.: Number of the volume group. The button next to the No. is
“More Information”; it shows the details of the volume group.
Name: Volume group name. The button next to the Name is
“Rename”.
Total(GB): Total capacity of this volume group.
Free(GB): Free capacity of this volume group.
#PD: The number of physical disks in the volume group.
#UDV: The number of user data volumes in the volume group.
Status: The status of the volume group.
“Online” → the volume group is online.
“Fail” → the volume group has failed.
Status 1: “DG” → DeGraded mode. This volume group is not
complete. The reason could be a missing disk or a disk failure.
Status 2: “R” → Rebuild. This volume group is rebuilding.
Status 3: “M” → Migration. This volume group is migrating.
RAID: The RAID level of the volume group. The button next
to the RAID level is “Migrate”. Clicking “Migrate” can
add disk(s) for expansion or change the RAID level of
the volume group.
•
VG operations description:
Create: Create a volume group.
Delete: Delete a volume group.
3.6.3 User data volume
“User data volume” allows users to view the status of each user data volume.
Figure 3.6.3.1
(Figure 3.6.3.1: Create a UDV named “UDV-01”, related to “VG-R0”; the size is 30GB, the status
is online, write back, high priority, related to 1 LUN, with a 663MB cache volume. The
other UDV is named “UDV-02”, initializing at 46%.)
•
UDV column description:
No.: Number of the user data volume. The button next to the
UDV No. is “More Information”; it shows the details of the
user data volume.
Name: Name of this user data volume. The button next to the
UDV Name is “Rename”.
Size(GB): Total capacity of the user data volume. The button next
to the size is “Extend”.
Status: The status of the user data volume.
“Online” → the user data volume is online.
“Fail” → the user data volume has failed.
Status 1:
“WT” → Write Through.
“WB” → Write Back.
“RO” → Read Only.
The button next to Status 1 is “Set read/write mode”.
Status 2:
“HI” → HIgh priority.
“MD” → MiD priority.
“LO” → LOw priority.
The button next to Status 2 is “Set Priority”.
Status 3:
“I” → the user data volume is being initialized.
“R” → the user data volume is being rebuilt.
“M” → the user data volume is being migrated.
R%: Ratio of initializing or rebuilding.
RAID: The RAID level that the user data volume is using.
#LUN: Number of LUN(s) that the user data volume is attached to.
VG name: The VG name of the user data volume.
CV (MB): The cache volume of the user data volume.
•
UDV operations description:
Attach: Attach to a LUN.
Create: Create a user data volume.
Delete: Delete a user data volume.
3.6.4 Cache volume
“Cache volume” shows the status of cache volumes.
The global cache volume is a default cache volume created automatically after
power-on, and it cannot be deleted. The size of the global cache is
based on the RAM size: the total memory size minus the system usage.
Figure 3.6.4.1
•
CV column description:
No.: Number of the cache volume. The button next to the CV No.
is “More Information”; it shows the details of the cache volume.
Size(MB): Total capacity of the cache volume. The button next
to the CV size is “Resize”; the CV size can be adjusted.
UDV Name: Name of the UDV.
•
CV operations description:
Create: Create a cache volume.
Delete: Delete a cache volume.
If there is no free space for creating a new dedicated cache volume, reduce
the global cache size first. After resizing, the dedicated cache volume can
be created.
Tips
The minimum size of global cache volume is 40MB. The
minimum size of dedicated cache volume is 20MB.
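The relationship between the global cache, dedicated cache volumes, and the two minimum sizes can be sketched as a simple check (illustrative Python; the controller's own validation may differ):

```python
MIN_GLOBAL_MB = 40     # minimum size of the global cache volume
MIN_DEDICATED_MB = 20  # minimum size of a dedicated cache volume

def can_create_dedicated(total_mb, global_mb, dedicated_mbs, new_mb):
    # Free space is whatever the global cache and existing dedicated
    # cache volumes have not already claimed.
    if new_mb < MIN_DEDICATED_MB or global_mb < MIN_GLOBAL_MB:
        return False
    free = total_mb - global_mb - sum(dedicated_mbs)
    return new_mb <= free
```

With a 663MB total cache entirely assigned to the global cache, a 20MB dedicated CV only fits after the global cache is resized down to 643MB or less.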
3.6.5 Logical unit number
“Logical unit” allows users to view the status of the attached logical unit numbers of
each UDV.
Users can attach a LUN by clicking the attach button. In “Host”,
enter an initiator node name for access control, or fill in the wildcard “*”, which
means every host can access the volume. Choose the LUN number and
permission, then click the confirm button.
Figure 3.6.5.1
Figure 3.6.5.2
(Figure 3.6.5.2: UDV-01 is attached to LUN 0 and every host can access it. UDV-02 is
attached to LUN 1 and only the initiator node named “iqn.1991-05.com.microsoft:demo”
can access it.)
•
LUN operations description:
Attach: Attach a logical unit number to a user data volume.
Detach: Detach a logical unit number from a user data volume.
The matching rules of access control are inspected from top to bottom in
sequence. For example, suppose there are 2 rules for the same UDV: one is “*”, LUN 0,
and the other is “iqn.host1”, LUN 1. Another host, “iqn.host2”, can log in
successfully because it matches rule 1 (the wildcard).
Access is denied when no rule matches.
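The rule matching above can be sketched as follows (an illustrative Python model of the described behavior, assuming an initiator sees every LUN whose rule matches it):

```python
def visible_luns(rules, initiator):
    # Rules are inspected top to bottom; "*" matches any initiator.
    # An empty result means the initiator is denied access.
    return [lun for host, lun in rules if host in ("*", initiator)]

# The two rules from the example above
rules = [("*", 0), ("iqn.host1", 1)]
```

`iqn.host2` sees LUN 0 through the wildcard rule; `iqn.host1` matches both rules; with no wildcard rule, an unlisted initiator would get an empty list and be denied.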
3.6.6 Example
The following are examples of creating volumes. Example 1 creates two
UDVs sharing the same CV (global cache volume) and sets a global spare disk.
Example 2 creates two UDVs: one shares the global cache volume, and
the other uses a dedicated cache volume; a dedicated spare disk is also set.
•
Example 1
Example 1 creates two UDVs in one VG; each UDV uses the global cache
volume. The global cache volume is created automatically after the system boots up,
so no action is needed to set up the CV. Then a global spare disk is set. Eventually,
everything is deleted.
Step 1: Create VG (Volume Group).
To create the volume group, please follow these procedures:
Figure 3.6.6.1
1. Select “/ Volume config / Volume group”.
2. Click the create button.
3. Key in a VG name, choose a RAID level from the list, click the
select button to choose the RAID PD slot(s), then click the
next button.
4. Check the outcome. Click the confirm button if all setups are
correct.
5. Done. A VG has been created.
Figure 3.6.6.2
(Figure 3.6.6.2: Creating a RAID 5 with 4 physical disks, named “VG-R5”. The total size
is 114GB. Because there is no related UDV, the free size remains 114GB.)
Step 2: Create UDV (User Data Volume).
To create a user data volume, please follow these procedures:
Figure 3.6.6.3
1. Select “/ Volume config / User data volume”.
2. Click the create button.
3. Enter a UDV name, choose a VG name, and enter the size of the UDV;
decide the stripe height, block size, read/write mode, and priority,
then click the confirm button.
4. Done. A UDV has been created.
5. Repeat once more to create another UDV.
Figure 3.6.6.4
(Figure 3.6.6.4: Create UDVs named “UDV-R5-1” and “UDV-R5-2” in “VG-R5”; the size of
“UDV-R5-1” is 50GB, the size of “UDV-R5-2” is 64GB. The status of
these UDVs is online, write back, high priority, with a 120MB cache volume. “UDV-R5-1”
is initializing at about 4%. There is no LUN attached.)
Step 3: Attach LUN to UDV.
There are 2 methods to attach a LUN to a UDV:
1. In “/ Volume config / User data volume”, press the attach button.
2. In “/ Volume config / Logical unit”, press the attach button.
The procedures are as follows:
Figure 3.6.6.5
1. Select a UDV.
2. Enter the “Host” name, which is an initiator node name for access
control, or fill in the wildcard “*”, which means every host can access
this volume. Choose the LUN and permission, then click the confirm button.
3. Done.
Figure 3.6.6.6
(Figure 3.6.6.6: UDV-R5-1 is attached to LUN 0 and any host can access it. UDV-R5-2 is
attached to LUN 1 and only the initiator node named “iqn.1991-05.com.microsoft:demo”
can access it.)
Tips
The matching rules of access control are from top to bottom in
sequence.
Step 4: Set global spare disk.
To set global spare disks, please follow these procedures:
1. Select “/ Volume config / Physical disk”.
2. Select the free disk(s) by clicking the checkboxes in the rows, then click
the global spare button to set them as global spares.
3. The “GS” icon is shown in the Status 1 column.
Figure 3.6.6.7
(Figure 3.6.6.7: Slot 5 is set as global spare disk.)
Step 5: Done. They can be used as iSCSI disks.
To delete the UDVs and VG, please follow the steps listed below.
Step 6: Detach LUN from UDV.
In “/ Volume config / Logical unit”:
Figure 3.6.6.8
1. Select the LUNs by clicking the checkboxes in the rows, and then click
the detach button. A confirmation page will pop up.
2. Choose “OK”.
3. Done.
Step 7: Delete UDV (User Data Volume).
To delete a user data volume, please follow these procedures:
1. Select “/ Volume config / User data volume”.
2. Select the UDVs by clicking the checkboxes in the rows.
3. Click the delete button. A confirmation page will pop up.
4. Choose “OK”.
5. Done. The UDVs are deleted.
Tips
When deleting a UDV, the attached LUN(s) related to this UDV
will be detached automatically.
Step 8: Delete VG (Volume Group).
To delete the volume group, please follow these procedures:
1. Select “/ Volume config / Volume group”.
2. Select a VG by clicking the checkbox in the row. Make sure there is
no UDV on this VG; otherwise, the UDV(s) on this VG must be
deleted first.
3. Click the delete button. A confirmation page will pop up.
4. Choose “OK”.
5. Done. The VG has been deleted.
Tips
Deleting a VG will succeed only when all of the related UDV(s)
in this VG are deleted. Otherwise, the deletion will fail.
Step 9: Free global spare disk.
To free global spare disks, please follow these procedures:
1. Select “/ Volume config / Physical disk”.
2. Select the global spare disk by clicking the checkbox in the row,
then click the free disk button to free the disk.
Step 10: Done. All volumes have been deleted.
•
Example 2
Example 2 creates two UDVs in one VG. One UDV shares the global cache
volume; the other uses a dedicated cache volume. First, the dedicated cache
volume is created so it can be used when creating the UDV. Eventually, everything
is deleted.
Each UDV is associated with one specific CV (cache volume) to execute its
data transactions. Each CV can have a different cache memory size. If a UDV has
no special requirement, it uses the global cache volume. Otherwise, the user can create a
dedicated cache for an individual UDV manually. With a dedicated cache volume,
performance is not affected by other UDVs’ data access.
The total cache size depends on the RAM size, and all cache is initially assigned to the
global cache automatically. To create a dedicated cache volume, the first step is
to cut down the global cache size to make room for the dedicated cache volume. Please
follow these procedures.
Step 1: Create dedicated cache volume.
Figure 3.6.6.9
1. Select “/ Volume config / Cache volume”.
2. If there is no free space for creating a new dedicated cache volume,
first decrease the global cache size by clicking the resize button
in the Size column. After resizing, click the confirm button
to return to the cache volume page.
3. Click the create button to enter the setup page.
4. Fill in the size and click the confirm button.
5. Done. A new dedicated cache volume has been set.
Tips
The minimum size of the global cache volume is 40MB. The
minimum size of a dedicated cache volume is 20MB.
Step 2: Create VG (Volume Group).
Please refer to Step 1 of Example 1 to create the VG.
Step 3: Create UDV (User Data Volume).
Please refer to Step 2 of Example 1 to create the UDV. To create a UDV with a
dedicated cache volume, please follow the procedures below.
Figure 3.6.6.10
1. Select “/ Volume config / User data volume”.
2. Click the create button.
3. Enter a UDV name, choose a VG name, and select the “Dedicated”
cache created in Step 1. Enter the size of the UDV; decide the
stripe height, block size, read/write mode, and priority, then click
the confirm button.
4. Done. A UDV using a dedicated cache has been created.
Figure 3.6.6.11
(Figure 3.6.6.11: The UDV named “UDV-R5-1” uses the 40MB global cache volume, and
“UDV-R5-2” uses a 20MB dedicated cache volume. “UDV-R5-2” is initializing at about 5%.)
Figure 3.6.6.12
(Figure 3.6.6.12: In “/ Volume config / Cache volume”, UDV named “UDV-R5-2” uses
dedicated cache volume 20MB.)
Step 4: Attach LUN to UDV.
Please refer to Step 3 of Example 1 to attach LUN.
Step 5: Set dedicated spare disk.
To set dedicated spare disks, please follow these procedures:
1. Select “/ Volume config / Physical disk”.
2. Select a VG from the list, then select the free disk(s). Click
the dedicated spare button to set the dedicated spare for the VG.
3. The “DS” icon is shown in the Status 1 column.
Figure 3.6.6.13
(Figure 3.6.6.13: Slot 5 has been set as dedicated spare disk of VG named “VG-R5”.)
Step 6: Done. The PDs can be used as iSCSI disks.
To delete the UDVs and VG, please follow these steps.
Step 7: Detach LUN from UDV.
Please refer to Step 6 of Example 1 to detach the LUN.
Step 8: Delete UDV (User Data Volume).
Please refer to Step 7 of Example 1 to delete the UDV.
Step 9: Delete VG (Volume Group).
Please refer to Step 8 of Example 1 to delete the VG.
Step 10: Free dedicated spare disk.
To free dedicated spare disks, please follow these procedures:
1. Select “/ Volume config / Physical disk”.
2. Select the dedicated spare disk by clicking the checkbox in the row,
then click the free disk button to free the disk.
Step 11: Delete dedicated cache volume.
To delete the cache volume, please follow these procedures:
1. Select “/ Volume config / Cache volume”.
2. Select the CV by clicking the checkbox in the row.
3. Click the delete button. A confirmation page will pop up.
4. Choose “OK”.
5. Done. The CV has been deleted.
Caution
The global cache volume cannot be deleted.
Step 12: Done. All volumes have been deleted.
3.7 Enclosure management
“Enclosure management” allows managing enclosure information, including
“SES config”, “Hardware monitor”, “S.M.A.R.T.”, and “UPS”. For
enclosure management, there are many sensors for different purposes, such
as temperature sensors, voltage sensors, hard disks, fan sensors, power
sensors, and LED status. Due to the different hardware characteristics of
these sensors, they have different polling intervals. The detailed polling
intervals are:
1. Temperature sensors: 1 minute.
2. Voltage sensors: 1 minute.
3. Hard disk sensors: 10 minutes.
4. Fan sensors: 10 seconds. When there are 3 consecutive errors,
the controller sends an ERROR event log.
5. Power sensors: 10 seconds. When there are 3 consecutive errors,
the controller sends an ERROR event log.
6. LED status: 10 seconds.
Figure 3.7.1
3.7.1 SES configuration
SES stands for SCSI Enclosure Services, one of the enclosure management
standards. “SES config” can enable or disable SES management.
Figure 3.7.1.1
(Figure 3.7.1.1: Enable SES in LUN 0, and can be accessed from every host.)
The SES client software is available at the following web site:
SANtools: http://www.santools.com/
3.7.2 Hardware monitor
“Hardware monitor” allows users to view the current voltage and
temperature readings.
Figure 3.7.2.1
If “Auto shutdown” has been checked, the system will shut down
automatically when the voltage or temperature is out of the normal range. For
better data protection, please check “Auto Shutdown”.
For better protection and to avoid auto shutdown being triggered by a single short
period of high temperature or a spurious reading, the controller uses multiple condition
judgments for auto shutdown. The conditions under which auto shutdown
is triggered are:
1. There are 3 sensors placed on the controller for temperature checking:
on the core processor, the PCI-X bridge, and the daughter board. The
controller checks each sensor every 30 seconds. When one of
these sensors stays over its high-temperature limit for 3 continuous
minutes, auto shutdown is triggered immediately.
2. The core processor temperature limit is 85°C. The PCI-X bridge
temperature limit is 80°C. The daughter board temperature limit is
80°C.
3. If the overheat situation doesn’t last for 3 minutes, the controller will not
auto shutdown.
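The three-minute condition amounts to requiring six consecutive over-limit samples at the 30-second polling rate. A Python sketch of this debouncing logic (illustrative; the sensor names and the exact reset behavior are assumptions):

```python
POLL_SECONDS = 30
TRIGGER_SECONDS = 180  # 3 continuous minutes
LIMITS = {"core": 85, "pcix_bridge": 80, "daughter_board": 80}  # degrees C

def should_shutdown(samples, sensor):
    # One sample every 30 s; a single cool reading resets the streak,
    # so brief spikes never trigger auto shutdown.
    needed = TRIGGER_SECONDS // POLL_SECONDS  # 6 consecutive samples
    streak = 0
    for temp in samples:
        streak = streak + 1 if temp > LIMITS[sensor] else 0
        if streak >= needed:
            return True
    return False
```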
3.7.3 Hard drive S.M.A.R.T. support
S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) is a
diagnostic tool for hard drives that delivers advance warning of drive failures.
S.M.A.R.T. gives users a chance to take action before a possible drive
failure.
S.M.A.R.T. continually measures many attributes of the hard drive and
inspects properties that are approaching intolerable levels. The
advance notice of a possible hard drive failure allows users to back up the hard
drive or replace it. This is much better than a hard drive crashing while
it is writing data or rebuilding a failed hard drive.
“S.M.A.R.T.” allows users to display the S.M.A.R.T. information of hard drives. The
number is the current value; the number in parentheses is the threshold value.
Threshold values differ among hard drive vendors; please refer to the
vendors’ specifications for details.
S.M.A.R.T. only supports SATA drives. SAS drives are not supported and will show
N/A on this web page.
Figure 3.7.3.1
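The current-versus-threshold comparison works as in this sketch (illustrative Python; in the normalized S.M.A.R.T. scheme, an attribute whose current value falls to or below its threshold indicates a failing drive):

```python
def smart_health(attributes):
    # attributes maps an attribute name to (current_value, threshold),
    # matching the "value (threshold)" pairs shown on the web page.
    failing = [name for name, (value, threshold) in attributes.items()
               if value <= threshold]
    return ("FAILING", failing) if failing else ("OK", [])
```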
3.8 System maintenance
“Maintenance” allows operation of the system functions, including “Upgrade”
to the latest firmware, “Info” to show the system version, “Reset to default”
to reset all controller configuration values to factory settings, “Config import
& export” to import and export all controller configurations except the VG/UDV
settings and LUN settings, and “Shutdown” to either reboot or shut down the
system.
Figure 3.8.1
3.8.1 Upgrade
“Upgrade” allows users to upgrade the firmware. Please prepare the new firmware file,
named “xxxx.bin”, on a local hard drive, then click the browse button to select the file.
Click the confirm button; a message will pop up: “Upgrade system now?
If you want to downgrade to the previous FW later (not recommended), please
export your system configuration in advance”. Click “Cancel” to export the system
configuration first, or click “OK” to start upgrading the firmware.
Figure 3.8.1.1
Figure 3.8.1.2
When upgrading, a progress bar is shown. After the upgrade finishes, the
system must be rebooted manually for the new firmware to take effect.
Tips
Please contact service@netstor.com.tw for the latest firmware.
3.8.2 Info
“Info” can display system information (including firmware version), CPU type,
installed system memory, and controller serial number.
3.8.3 Reset to default
“Reset to default” allows users to reset the controller to the factory default settings.
Figure 3.8.3.1
After resetting to the default values, the password is 1234, and the IP address
reverts to the default DHCP setting.
Default IP address: 192.168.10.50 (DHCP)
Default subnet mask: 255.255.255.0
Default gateway: 192.168.10.254
3.8.4 Config import & export
“Config import & export” allows users to save the system configuration values
(export) and to apply a saved configuration (import). The volume configuration
settings are included in export but not in import, which
avoids conflicts and data deletion between two controllers. That is, if one
controller already holds valuable data on its disks, the user might
overwrite it by mistake. Using import returns the system to the original configuration;
if the volume settings were also imported, the user’s current data would be overwritten.
Figure 3.8.4.1
1. Import: Import all system configurations excluding the volume config.
2. Import Logical unit only: Import the LUN configurations only; no system
or volume configurations are imported.
3. Export: Export all configurations to a file.
Caution
“Import” will import all system configurations excluding the
volume configuration; the current configurations will be overwritten.
3.8.5 Shutdown
“Shutdown” displays the “Reboot” and “Shutdown” buttons. Before powering off,
it is better to execute “Shutdown” to flush the data from the cache to the physical
disks. This step is necessary for data protection.
Figure 3.8.5.1
3.9 Logout
For security reasons, “Logout” allows users to log out when no one is operating
the system. To log back in, please enter the username and password again.
Chapter 4 Advanced operation
4.1 Rebuild
If one physical disk of a VG that is set to a protected RAID level (e.g., RAID
3, RAID 5, or RAID 6) FAILS or has been unplugged/removed, the
status of the VG changes to degraded mode, and the system searches for a
spare disk to rebuild the degraded VG into a complete one. It uses a
dedicated spare disk as the rebuild disk first, then a global spare disk.
Netstor iSCSI series controllers support Auto-Rebuild. The following is the
scenario:
Take RAID 6 for example:
1. When there is no global spare disk or dedicated spare disk in the
system, the controller will be in degraded mode and wait until (A) a
disk is assigned as a spare disk, or (B) the failed disk is removed
and replaced with a new clean disk; then Auto-Rebuild starts. The
new disk becomes a spare disk for the original VG automatically.
If the newly added disk is not clean (it carries other VG information), it
will be marked as RS (reserved) and the system will not start
Auto-Rebuild.
If the disk does not belong to any existing VG, it will be an FR (free)
disk and the system will start Auto-Rebuild.
If the user only removes the failed disk and plugs the same failed disk into
the same slot again, the auto-rebuild will start running. However,
rebuilding onto the same failed disk may endanger customer data if the
disk is unstable. Netstor advises all customers not to
rebuild onto a failed disk, for better data protection.
2. When there are enough global spare disk(s) or dedicated spare disk(s)
for the degraded array, the controller starts Auto-Rebuild immediately.
Moreover, in RAID 6, if another disk failure occurs during
rebuilding, the controller will start the above Auto-Rebuild process as
well. The Auto-Rebuild feature only works when the status of the VG is
“Online”; it will not work when the VG is “Offline”. Thus, it does not conflict with
“Roaming”.
3. In degraded mode, the status of the VG is “Degraded”. When rebuilding,
the status of the VG/UDV will be “Rebuild”, and the “R%” column of the UDV will
display the percentage completed. After rebuilding completes, the status
becomes “Online” and the VG is complete again.
Tips
“Set dedicated spare” is not available if there is no VG, or
only VGs of RAID 0 or JBOD exist, because a dedicated
spare disk cannot be set for these RAID levels.
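The spare-selection order described above (dedicated spare first, then global spare, then a clean free disk) can be modeled as follows (illustrative Python, not the controller's code):

```python
def pick_rebuild_disk(disks, degraded_vg):
    # disks: list of dicts with "status" in {"DS", "GS", "FR"} and, for a
    # dedicated spare, the "vg" it belongs to. Returns the disk chosen
    # for rebuild, or None (the VG stays degraded and waits).
    for wanted in ("DS", "GS", "FR"):
        for disk in disks:
            if disk["status"] != wanted:
                continue
            if wanted == "DS" and disk.get("vg") != degraded_vg:
                continue  # dedicated spare of some other VG
            return disk
    return None
```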
Sometimes rebuild is called recover; they mean the same thing. The
following table shows the relationship between RAID levels and rebuild.
RAID 0: Disk striping. No protection for data. The VG fails if any hard drive
fails or is unplugged.
RAID 1: Disk mirroring over 2 disks. RAID 1 allows one hard drive to fail
or be unplugged. One new hard drive must be inserted into the
system and rebuilt for the VG to be complete again.
N-way mirror: Extension of RAID 1. It keeps N copies of the disk. N-way
mirror allows N-1 hard drives to fail or be unplugged.
RAID 3: Striping with parity on a dedicated disk. RAID 3 allows one
hard drive to fail or be unplugged.
RAID 5: Striping with interspersed parity over the member disks. RAID
5 allows one hard drive to fail or be unplugged.
RAID 6: 2-dimensional parity protection over the member disks. RAID
6 allows two hard drives to fail or be unplugged. If two hard drives
need to be rebuilt at the same time, it rebuilds the first one, then
the other, in sequence.
RAID 0+1: Mirroring of RAID 0 volumes. RAID 0+1 allows two hard drive
failures or unpluggings, but only in the same array.
RAID 10: Striping over the members of RAID 1 volumes. RAID 10 allows
two hard drive failures or unpluggings, but in different arrays.
RAID 30: Striping over the members of RAID 3 volumes. RAID 30 allows
two hard drive failures or unpluggings, but in different arrays.
RAID 50: Striping over the members of RAID 5 volumes. RAID 50 allows
two hard drive failures or unpluggings, but in different arrays.
RAID 60: Striping over the members of RAID 6 volumes. RAID 60 allows
four hard drive failures or unpluggings, up to two in each array.
JBOD: The abbreviation of “Just a Bunch Of Disks”. No data
protection. The VG fails if any hard drive fails or is unplugged.
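The fault tolerance in the table above can be summarized programmatically (an illustrative Python sketch; the two-sub-array assumption for the nested levels follows the table's wording):

```python
def drives_tolerated(level, n_copies=2):
    # Number of simultaneous drive failures each level survives,
    # per the table above (nested levels assume two sub-arrays;
    # for RAID 60, up to two failures per sub-array).
    table = {
        "RAID 0": 0, "JBOD": 0,
        "RAID 1": 1, "RAID 3": 1, "RAID 5": 1,
        "RAID 6": 2, "RAID 0+1": 2, "RAID 10": 2,
        "RAID 30": 2, "RAID 50": 2, "RAID 60": 4,
    }
    if level == "N-way mirror":
        return n_copies - 1  # N copies tolerate N-1 failures
    return table[level]
```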
4.2 VG migration and expansion
To migrate the RAID level, please follow the procedures below.
1. Select “/ Volume config / Volume group”.
2. Decide which VG is to be migrated, then click the “Migrate” button in
the RAID column next to the RAID level.
3. Change the RAID level by clicking the down arrow.
A pop-up will appear if the number of HDDs is not enough to
support the new RAID level setting; click the add button
to add hard drives, then click the confirm button to go back
to the setup page. When migrating to a lower RAID level, for example when
the original RAID level is RAID 6 and the user wants to migrate to RAID
0, the controller will evaluate whether this operation is safe or not,
and show the message “Sure to migrate to a lower protection
array?” to warn the user.
4. Double-check the settings of the RAID level and RAID PD slots. If there is
no problem, click the next button.
5. Finally, a confirmation page shows the details of the RAID info. If there is
no problem, click the confirm button to start the migration.
The controller also pops up the message “Warning: power lost
during migration may cause damage of data!” to warn the user. If the
power goes off abnormally during the migration, the data is at high risk.
6. Migration starts; it can be seen in “Status 3” of the VG as a
running square and an “M”. In “/ Volume config / User data
volume”, an “M” is displayed in “Status 4”, and the completed percentage
of the migration is shown in “R%”.
Figure 4.2.1
Figure 4.2.2
(Figure 4.2.2: A RAID 0 with 2 physical disks migrates to RAID 5 with 3 physical disks.)
Figure 4.2.3
(Figure 4.2.3: A RAID 0 migrates to RAID 5; the completed percentage is 12%.)
To do migration, the total size of the new VG must be larger than or equal to that of the original VG. Expanding to the same RAID level with the same hard disks as the original VG is not allowed.

If the migration is not set up correctly, the controller pops up a warning message. The messages are detailed below.

1. Invalid VG ID: Source VG is invalid.
2. Degrade VG not allowed: Source VG is degraded.
3. Initializing/rebuilding operation's going: Source VG is initializing or rebuilding.
4. Migration operation's going: Source VG is already in migration.
5. Invalid VG raidcell parameter: Invalid configuration, e.g., the new VG's capacity < the old VG's capacity, the new VG's stripe size < the old VG's stripe size, or the new VG's configuration == the old VG's configuration.
6. Invalid PD capacity: The new VG's minimum PD capacity < the old VG's minimum PD capacity.
Caution
VG Migration cannot be executed during rebuild or UDV
extension.
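The pre-migration checks that produce the warning messages above can be sketched as a validator. This is a hypothetical helper mirroring the listed messages, not the controller's actual code:

```python
# Hypothetical pre-check that mirrors the migration warning messages above.
def check_migration(src, dst):
    """src/dst are dicts describing the old and new VG settings.
    Returns the first applicable warning message, or None if OK."""
    if not src.get("valid", True):
        return "Invalid VG ID"
    if src.get("degraded"):
        return "Degrade VG not allowed"
    if src.get("busy") in ("initializing", "rebuilding"):
        return "Initializing/rebuilding operation's going"
    if src.get("busy") == "migrating":
        return "Migration operation's going"
    if (dst["capacity"] < src["capacity"]
            or dst["stripe_size"] < src["stripe_size"]
            or dst == src):
        return "Invalid VG raidcell parameter"
    if dst["min_pd_capacity"] < src["min_pd_capacity"]:
        return "Invalid PD capacity"
    return None

old = {"capacity": 100, "stripe_size": 64, "min_pd_capacity": 50}
new = {"capacity": 200, "stripe_size": 64, "min_pd_capacity": 50}
print(check_migration(old, new))  # None -> migration may proceed
```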
4.3 UDV Extension

To extend the UDV size, please follow the procedures below.

1. Select "/ Volume config / User data volume".
2. Decide which UDV to extend, then click the button in the Size column next to the number.
3. Change the size. The size must be larger than the original; then click the button to start the extension.
4. Extension starts. If the UDV needs initialization, an "I" is displayed in "Status 3" and the completed percentage of initialization in "R%".

Figure 4.3.1
Figure 4.3.2
(Figure 4.3.2: Extend UDV-R0 from 5GB to 10GB.)

Tips
The size of the UDV extension must be larger than the original.
Caution
UDV Extension cannot be executed during rebuild or migration.
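The size rule in step 3 and the caution above amount to a simple check. A minimal sketch (the `can_extend` helper is hypothetical, for illustration only):

```python
# Hypothetical check for UDV extension: the new size must be strictly
# larger than the original, and no rebuild/migration may be running.
def can_extend(original_gb, new_gb, busy=None):
    if busy in ("rebuild", "migration"):   # the caution above
        return False
    return new_gb > original_gb

print(can_extend(5, 10))                  # True  (the Figure 4.3.2 example)
print(can_extend(10, 10))                 # False (must be larger than original)
print(can_extend(5, 10, busy="rebuild"))  # False (blocked during rebuild)
```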
4.4 Disk roaming

Physical disks can be re-sequenced in the same system, or all physical disks can be moved from system-1 to system-2. This is called disk roaming. Disk roaming has the following constraints:

1. Check the firmware of the two systems first. It is better that both systems have the same firmware version, or that the destination has a newer one.
2. All physical disks of the related VG should be moved from system-1 to system-2 together. The configuration of both VG and UDV will be kept, but the LUN configuration will be cleared in order to avoid conflicts with system-2.
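Constraint 1 is essentially a version comparison. A sketch, assuming dotted-number firmware versions such as "2.3.1" (the format is an assumption here, not the vendor's documented scheme):

```python
# Assumes dotted-number firmware versions, e.g. "2.3.1" (an assumption).
def parse_version(v):
    return tuple(int(x) for x in v.split("."))

def roaming_ok(src_fw, dst_fw):
    """Constraint 1: destination firmware should be the same or newer."""
    return parse_version(dst_fw) >= parse_version(src_fw)

print(roaming_ok("2.3.1", "2.3.1"))  # True  (same version)
print(roaming_ok("2.3.1", "2.4.0"))  # True  (newer destination)
print(roaming_ok("2.4.0", "2.3.1"))  # False (older destination)
```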
4.5 Support Microsoft MPIO and MC/S

MPIO (Multi-Path Input/Output) and MC/S (Multiple Connections per Session) use multiple physical paths to create logical "paths" between the server and the storage device. If one or more of these components fails and causes a path to fail, the multi-path logic uses an alternate path for I/O, so applications can still access their data.

The Microsoft iSCSI initiator supports multi-path. Please follow the procedures below to use the MPIO feature.

1. Connect cables from a host with dual LAN ports to the controller.
2. Create a VG/UDV and attach this UDV to the host.
3. When installing the "Microsoft iSCSI initiator", install the MPIO driver at the same time.
4. Log on to the target separately on each port. When logging on to the target, check "Enable multi-path".
5. The MPIO mode can be selected under Targets → Details → Devices → Advanced in the Microsoft iSCSI initiator.
6. Rescan the disk.
7. There will be one disk running MPIO.
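The failover idea described above can be illustrated with a toy model: I/O goes over the first healthy path, and a failed path is simply skipped. This is purely illustrative; Windows MPIO implements this inside the storage stack, not in user code:

```python
# Toy illustration of multi-path failover: I/O is sent over the first
# healthy path, so applications keep working when one path fails.
class MultiPath:
    def __init__(self, paths):
        self.paths = list(paths)       # e.g. two iSCSI data-port IPs
        self.failed = set()

    def send_io(self):
        for p in self.paths:           # failover: first healthy path wins
            if p not in self.failed:
                return p
        raise IOError("all paths down")

mp = MultiPath(["10.0.0.1", "10.0.1.1"])
print(mp.send_io())        # 10.0.0.1
mp.failed.add("10.0.0.1")  # simulate a cable/NIC failure
print(mp.send_io())        # 10.0.1.1 -> I/O continues on the alternate path
```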
Appendix

A. Certification list

• RAM

NR760A / NR340A / NR330A RAM spec: 184 pins, DDR333 (PC2700), Reg. (registered) or UB (unbuffered), ECC or non-ECC, from 64MB to 1GB, 32-bit or 64-bit data bus width, x8 or x16 devices, 9 to 11 bits column address.

Vendor | Model
ATP | AG64L72T8SQC4S, 512MB DDR-400 (ECC) with Samsung
ATP | AG28L64T8SHC4S, 1GB DDR-400 with Samsung
ATP | AG28L72T8SHC4S, 1GB DDR-400 (ECC) with Samsung
ATP | AB28L72Q8SHC4S, 1GB DDR-400 (ECC, Reg.) with Samsung
Unigen | UG732D6688KN-DH, 256MB DDR-333 (Unbuffered) with Hynix
Unigen | UG732D7588KZ-DH, 256MB DDR-333 (ECC, Reg.) with Elpida
Unigen | UG764D7588KZ-DH, 512MB DDR-333 (ECC, Reg.) with Elpida
Unigen | UG7128D7588LZ-DH, 1GB DDR-333 (ECC, Reg.) with Hynix
Unigen | UG7128D7488LN-GJF, 1GB DDR-400 (ECC) with Hynix
Unigen | UG7128D7588LZ-GJF, 1GB DDR-400 (ECC, Reg.) with Hynix
Unigen | UG7128D7588LZ-GJF, 1GB DDR-400 (ECC, Reg.) with Elpida
Unigen | UG732D6688KS-DH, 256MB DDR-333 (Unbuffered, Low profile) with Hynix
Unigen | UG764D6688LS-DH, 512MB DDR-333 (Unbuffered, Low profile) with Hynix
Unigen | UG718D6688LN-GJF, 1GB DDR-400 with Hynix
Unigen | UG718D6688LN-GJF, 1GB DDR-400 with Elpida
• iSCSI Initiator (Software)

OS | Software/Release Number
Microsoft Windows | Microsoft iSCSI Software Initiator Release v2.07
  System requirements:
  1. Windows 2000 Server with SP4
  2. Windows Server 2003 with SP2
  3. Windows Server 2003 R2 with SP2
  4. Windows Server 2008
Linux | The iSCSI initiators are different for different Linux kernels.
  1. For Red Hat Enterprise Linux 3 (Kernel 2.4), install linux-iscsi3.6.3.tar
  2. For Red Hat Enterprise Linux 4 (Kernel 2.6), use the built-in iSCSI initiator iscsi-initiator-utils-4.0.3.0-4 in kernel 2.6.9
  3. For Red Hat Enterprise Linux 5 (Kernel 2.6), use the built-in iSCSI initiator iscsi-initiator-utils-6.2.0.742-0.5.el5 in kernel 2.6.18
Mac | ATTO Xtend SAN iSCSI initiator v3.10
  System requirements:
  1. Mac OS X v10.5 or later
  The ATTO Xtend SAN iSCSI initiator is not free. Please contact your local distributor.
• iSCSI HBA card

Vendor | Model
HP | NC380T (PCI-Express, Gigabit, 2 ports, TCP/IP offload, iSCSI offload)
QLogic | QLA4010C (PCI-X, Gigabit, 1 port, TCP/IP offload, iSCSI offload)
QLogic | QLA4052C (PCI-X, Gigabit, 2 ports, TCP/IP offload, iSCSI offload)

• NIC

Vendor | Model
HP | NC7170 (PCI-X, Gigabit, 2 ports)
HP | NC360T (PCI-Express, Gigabit, 2 ports, TCP/IP offload)
IBM | NetXtreme 1000 T (73P4201) (PCI-X, Gigabit, 2 ports, TCP/IP offload)
Intel | PWLA8492MT (PCI-X, Gigabit, 2 ports, TCP/IP offload)
• GbE Switch

Vendor | Model
Dell | PowerConnect 5324
Dell | PowerConnect 2724
Dell | PowerConnect 2708
HP | ProCurve 1800-24G
• Hard drive

Netstor iSCSI Series supports SATA I and SATA II disks.

Vendor | Model
Hitachi | Deskstar 7K250, HDS722580VLSA80, 80GB, 7200RPM, SATA, 8M
Hitachi | Deskstar E7K500, HDS725050KLA360, 500GB, 7200RPM, SATA II, 16M
Hitachi | Deskstar 7K80, HDS728040PLA320, 40GB, 7200RPM, SATA II, 2M
Hitachi | Deskstar T7K500, HDT725032VLA360, 320GB, 7200RPM, SATA II, 16M
Hitachi | Deskstar P7K500, HDP725050GLA360, 500GB, 7200RPM, SATA II, 16M
Maxtor | DiamondMax Plus 9, 6Y080M0, 80GB, 7200RPM, SATA, 8M
Maxtor | DiamondMax 11, 6H500F0, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Samsung | SpinPoint P80, HDSASP0812C, 80GB, 7200RPM, SATA, 8M
Seagate | Barracuda 7200.7, ST380013AS, 80GB, 7200RPM, SATA 1.5Gb/s, 8M
Seagate | Barracuda 7200.7, ST380817AS, 80GB, 7200RPM, SATA 1.5Gb/s, 8M, NCQ
Seagate | Barracuda 7200.8, ST3400832AS, 400GB, 7200RPM, SATA 1.5Gb/s, 8M, NCQ
Seagate | Barracuda 7200.9, ST3500641AS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M, NCQ
Seagate | Barracuda 7200.11, ST31000340AS, 1000GB, 7200RPM, SATA 3.0Gb/s, 32M, NCQ
Seagate | NL35, ST3400633NS, 400GB, 7200RPM, SATA 3.0Gb/s, 16M
Seagate | NL35, ST3500641NS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Seagate | Barracuda ES, ST3500630NS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Seagate | Barracuda ES, ST3750640NS, 750GB, 7200RPM, SATA 3.0Gb/s, 16M
Seagate | Barracuda ES.2, ST31000340NS, 1000GB, 7200RPM, SATA 3.0Gb/s, 32M
Western Digital | Caviar SE, WD800JD, 80GB, 7200RPM, SATA 3.0Gb/s, 8M
Western Digital | Caviar SE, WD1600JD, 160GB, 7200RPM, SATA 1.5Gb/s, 8M
Western Digital | Raptor, WD360GD, 36.7GB, 10000RPM, SATA 1.5Gb/s, 8M
Western Digital | Caviar RE2, WD4000YR, 400GB, 7200RPM, SATA 1.5Gb/s, 16M, NCQ
Western Digital | RE2, WD4000YS, 400GB, 7200RPM, SATA 3.0Gb/s, 16M
Western Digital | Caviar RE16, WD5000AAKS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Western Digital | RE2, WD5000ABYS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M, NCQ
B. Event notifications

• PD events

Level | Type | Description
INFO | Disk inserted | Disk <slot> is inserted into system.
WARNING | Disk removed | Disk <slot> is removed from system.
ERROR | HDD failure | Disk <slot> is disabled.

• HW events

Level | Type | Description
WARNING | ECC error | Single-bit ECC error is detected.
ERROR | ECC error | Multi-bit ECC error is detected.
INFO | ECC info | ECC memory is installed.
INFO | ECC info | Non-ECC memory is installed.
INFO | SCSI info | Received SCSI Bus Reset event at the SCSI Bus <number>.
• EMS events

Level | Type | Description
INFO | Power installed | Power <number> is installed.
ERROR | Power absent | Power <number> is absent.
INFO | Power work | Power <number> is restored to work.
ERROR | Power warning | Power <number> is out of work.
WARNING | Power detect | PSU signal detection <number>.
INFO | Fan work | Fan <number> is restored to work.
ERROR | Fan warning | Fan <number> is out of work.
INFO | Fan installed | Fan <number> is installed.
ERROR | Fan not present | Fan <number> is not present.
WARNING | Thermal warning | System temperature <location> is a little bit higher.
ERROR | Thermal critical | System Overheated <location>!!!
ERROR | Thermal critical shutdown | System Overheated <location>!!! The system will do the auto shutdown immediately.
WARNING | Thermal ignore value | Unable to update thermal value on <location>.
WARNING | Voltage warning | System voltage <location> is a little bit higher/lower.
ERROR | Voltage critical | System voltages <location> failed!!!
ERROR | Voltage critical shutdown | System voltages <location> failed!!! The system will do the auto shutdown immediately.
INFO | UPS info | UPS detection succeeded.
WARNING | UPS error | UPS detection failed.
ERROR | UPS error | AC loss for the system is detected.
ERROR | UPS error | UPS Power Low!!! The system will do the auto shutdown immediately.
WARNING | SMART T.E.C. | Disk <slot> S.M.A.R.T. Threshold Exceed Condition occurred for attribute <item>.
WARNING | SMART failure | Disk <slot>: Failure to get S.M.A.R.T information.
• RMS events

Level | Type | Description
INFO | Console Login | <username> login from <IP or serial console> via Console UI.
INFO | Console Logout | <username> logout from <IP or serial console> via Console UI.
INFO | Web Login | <username> login from <IP> via Web UI.
INFO | Web Logout | <username> logout from <IP> via Web UI.
• LVM2 events

Level | Type | Description
INFO | VG created | VG <name> has been created.
WARNING | VG creation failed | Failed to create VG <name>.
INFO | VG deleted | VG <name> has been deleted.
INFO | VG renamed | VG <name> has been renamed to <name>.
INFO | UDV created | UDV <name> has been created.
WARNING | UDV creation failed | Failed to create UDV <name>.
INFO | UDV deleted | UDV <name> has been deleted.
INFO | UDV renamed | Name of UDV <name> has been renamed to <name>.
INFO | Read-only caching enabled | Cache policy of UDV <name> has been set as read only.
INFO | Writeback caching enabled | Cache policy of UDV <name> has been set as write-back.
INFO | Write-through caching enabled | Cache policy of UDV <name> has been set as write-through.
INFO | UDV extended | Size of UDV <name> extends.
INFO | LUN attached | UDV <name> has been LUN-attached.
INFO | LUN attachment failed | Failed to attach LUN to UDV <name>.
INFO | LUN detached | UDV <name> has been detached.
INFO | LUN detachment failed | Failed to detach LUN from bus <number>, SCSI ID <number>, lun <number>.
INFO | UDV initialization started | UDV <name> starts initialization.
INFO | UDV initialization finished | UDV <name> completes the initialization.
WARNING | UDV initialization failed | Failed to complete initialization of UDV <name>.
INFO | UDV rebuild started | UDV <name> starts rebuilding.
INFO | UDV rebuild finished | UDV <name> completes rebuilding.
WARNING | UDV rebuild failed | Failed to complete rebuild of UDV <name>.
INFO | UDV migration started | UDV <name> starts migration.
INFO | UDV migration finished | UDV <name> completes migration.
ERROR | UDV migration failed | Failed to complete migration of UDV <name>.
INFO | VG migration started | VG <name> starts migration.
INFO | VG migration finished | VG <name> completes migration.
INFO | UDV rewrite started | Rewrite at LBA <address> of UDV <name> starts.
INFO | UDV rewrite finished | Rewrite at LBA <address> of UDV <name> completes.
WARNING | UDV rewrite failed | Rewrite at LBA <address> of UDV <name> failed.
WARNING | VG degraded | VG <name> is under degraded mode.
WARNING | UDV degraded | UDV <name> is under degraded mode.
ERROR | VG failed | VG <name> is failed.
ERROR | UDV failed | UDV <name> is failed.
ERROR | Recoverable read error occurred | Recoverable read error occurred at LBA <address>-<address> of UDV <name>.
ERROR | Recoverable write error occurred | Recoverable write error occurred at LBA <address>-<address> of UDV <name>.
ERROR | Unrecoverable read error occurred | Unrecoverable read error occurred at LBA <address>-<address> of UDV <name>.
ERROR | Unrecoverable write error occurred | Unrecoverable write error occurred at LBA <address>-<address> of UDV <name>.
ERROR | PD config read failed | Config read failed at LBA <address>-<address> of PD <slot>.
ERROR | PD config write failed | Config write failed at LBA <address>-<address> of PD <slot>.
ERROR | Global CV adjustment failed | Failed to change size of the global cache.
INFO | Global cache OK | The global cache is ok.
ERROR | Global CV creation failed | Failed to create the global cache.
INFO | Dedicated spare configured | PD <slot> has been configured to VG <name> as a dedicated spare disk.
INFO | Global spare configured | PD <slot> has been configured as a global spare disk.
ERROR | PD read error occurred | Read error occurred at LBA <address>-<address> of PD <slot>.
ERROR | PD write error occurred | Write error occurred at LBA <address>-<address> of PD <slot>.
INFO | PD freed | PD <slot> has been removed from VG <name>.
INFO | VG imported | Configuration of VG <name> has been imported.
INFO | VG restored | Configuration of VG <name> has been restored.
INFO | UDV restored | Configuration of UDV <name> has been restored.
• iSCSI events

Level | Type | Description
INFO | iSCSI login succeeds | iSCSI login from <IP> succeeds.
INFO | iSCSI login rejected | iSCSI login from <IP> was rejected, reason [<string>].
INFO | iSCSI logout | iSCSI logout from <IP> was received, reason [<string>].
• Battery backup events

Level | Type | Description
INFO | BBM sync data | Abnormal shutdown detected, start flushing battery-backed data (<number> KB).
INFO | BBM sync data | Abnormal shutdown detected, flushing battery-backed data finishes.
INFO | BBM detected | Battery backup module is detected.
INFO | BBM is good | Battery backup module is good.
INFO | BBM is charging | Battery backup module is charging.
WARNING | BBM is failed | Battery backup module is failed.
INFO | BBM | Battery backup feature is <item>.
• JBOD events

Level | Type | Description
INFO | Disk inserted | JBOD <number> disk <slot> is inserted into system.
WARNING | Disk removed | JBOD <number> disk <slot> is removed from system.
ERROR | HDD failure | JBOD <number> disk <slot> is disabled.
INFO | JBOD inserted | JBOD <number> is inserted into system.
WARNING | JBOD removed | JBOD <number> is removed from system.
WARNING | SMART T.E.C. | JBOD <number> disk <slot>: S.M.A.R.T. Threshold Exceed Condition occurred for attribute <item>.
WARNING | SMART failure | JBOD <number> disk <slot>: Failure to get S.M.A.R.T information.
INFO | Dedicated spare configured | JBOD <number> PD <slot> has been configured to RG <name> as a dedicated spare disk.
INFO | Global spare configured | JBOD <number> PD <slot> has been configured as a global spare disk.
WARNING | PD read error occurred | Read error occurred at LBA <address>-<address> of JBOD <number> PD <slot>.
WARNING | PD write error occurred | Write error occurred at LBA <address>-<address> of JBOD <number> PD <slot>.
INFO | PD freed | JBOD <number> PD <slot> has been removed from RG <name>.
• System maintenance events

Level | Type | Description
INFO | System shutdown | System shutdown.
INFO | System reboot | System reboot.
INFO | FW upgrade start | Firmware upgrade start.
INFO | FW upgrade success | Firmware upgrade success.
WARNING | FW upgrade failure | Firmware upgrade failure.
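All of the tables above share the same Level/Type/Description shape, so logged events can be filtered by severity. A small illustrative filter (the in-memory event format here is hypothetical, not the controller's actual log format):

```python
# Hypothetical severity filter over events shaped like the tables above.
SEVERITY = {"INFO": 0, "WARNING": 1, "ERROR": 2}

events = [
    ("INFO", "Disk 3 is inserted into system."),
    ("WARNING", "UPS detection failed."),
    ("ERROR", "Power 1 is absent."),
]

def at_least(events, level):
    """Keep events whose level is at least as severe as `level`."""
    floor = SEVERITY[level]
    return [e for e in events if SEVERITY[e[0]] >= floor]

print(at_least(events, "WARNING"))
# [('WARNING', 'UPS detection failed.'), ('ERROR', 'Power 1 is absent.')]
```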
C. Known issues

1. Microsoft MPIO is not supported on Windows XP or Windows 2000 Professional.
   Workaround: use Windows Server 2003 or Windows 2000 Server to run MPIO.
D. Microsoft iSCSI Initiator

Here are the step-by-step instructions to set up the Microsoft iSCSI initiator. Please visit the Microsoft website for the latest iSCSI initiator; the following setup may not use the latest version.

1. Run the Microsoft iSCSI Initiator.
2. Click "Discovery".
3. Click "Add". Input the IP address or DNS name of the iSCSI storage device.

Figure D.2

4. Click "OK".
5. Click "Targets".

Figure D.3

6. Click "Log On". Check "Enable multi-path" if running MPIO.

Figure D.4

7. Click "Advanced…" if CHAP information is needed.
8. Click "OK". The status will become "Connected".
9. Done. It can connect to an iSCSI disk.

The following procedure logs off the iSCSI device.

1. Click "Details" in "Targets".
2. Check the Identifier to be deleted.
3. Click "Log off".
4. Done. The iSCSI device logs off successfully.

Figure D.5
E. Installation steps for large volume (TB)

Introduction:

The Netstor iSCSI Series is capable of supporting large volumes (>2TB). When the Netstor iSCSI storage is connected to a host/server running a 64-bit OS, the host/server inherently supports large volumes thanks to 64-bit addressing. On the other hand, if the host/server runs a 32-bit OS, the user has to change the block size to 1KB, 2KB or 4KB to support volumes up to 4TB, 8TB or 16TB respectively, because a 32-bit host/server does not support 64-bit LBA (Logical Block Addressing). For detailed installation steps, please refer to the steps below.
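The 4TB/8TB/16TB figures follow from 32-bit LBA arithmetic: such a host can address at most 2^32 blocks, so the maximum volume is 2^32 times the block size. A quick check:

```python
# Max addressable capacity for a 32-bit LBA host = 2**32 blocks * block size.
def max_volume_tb(block_size_bytes):
    return (2**32 * block_size_bytes) / 1024**4  # result in TB (TiB)

for bs in (512, 1024, 2048, 4096):
    print(f"{bs}B blocks -> {max_volume_tb(bs):.0f}TB")
# 512B -> 2TB, 1KB -> 4TB, 2KB -> 8TB, 4KB -> 16TB, matching the text above.
```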
Step 1: Configure target

1. Prepare hard drives with a total capacity over 2TB. Follow the example in chapter 3 to create a VG/UDV, then attach a LUN.

Tips
If the OS is 64-bit, the user can set the block size to any available value. If the OS is 32-bit, the user must change the block size to a value larger than 512B. A confirmation pop-up appears (Figure E.1).

Figure E.1
(Figure E.1: choose "OK" for a 64-bit OS, choose "Cancel" for a 32-bit OS; this step changes the block size to 4K automatically.)

2. Click the button in the "No." column to see "More information". Note that the block size is 512B for the 64-bit OS setting and 4K for the 32-bit OS setting.
Step 2: Configure host/server

1. Follow the installation guide provided by the HBA vendor to install the HBA driver properly. For iSCSI models, please install the latest Microsoft iSCSI initiator from the link below.
   http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585-b385-befd1319f825&DisplayLang=en
Step 3: Initialize/Format/Mount the disk

1. Go to Start → Control Panel → Computer Management → Disk Management; it displays a new disk.

Figure E.2

2. Initialize the disk.

Figure E.3

3. Convert to a GPT disk for over-2TB capacity. For more detailed information about GPT, please visit
   http://www.microsoft.com/whdc/device/storage/GPT_FAQ.mspx

Figure E.4

4. Format the disk.

Figure E.5

5. Done.

Figure E.6

6. The new disk is ready to use; the available size = 2.72TB.

Figure E.7

Caution
If the user sets a 512B block size for the UDV and the host/server OS is 32-bit, the user will find in the last step that the OS cannot format the disk.

7. Wrong setting result: the OS cannot format disk sectors after 2048GB (2TB).

Figure E.8
F. MPIO and MC/S setup instructions

Here are the instructions to set up MPIO or MC/S. The following network diagrams are examples; please follow them to set up the environment. Note that the host must have multiple NICs, set up with different IPs.

Figure F.1

The MPIO setup instructions are as follows:

1. Create a VG/UDV, and then attach a LUN.
2. Add the first "Target Portal" on the Microsoft iSCSI initiator.
3. Add the second "Target Portal" on the Microsoft iSCSI initiator.
4. Log on.
5. Enable the "Enable multi-path" checkbox. Then click "Advanced…".
6. Select the first "Source IP" and "Target Portal" to iSCSI data port 1. Then click "OK".
7. Log on again.
8. Enable the "Enable multi-path" checkbox. Then click "Advanced…".
9. Select the second "Source IP" and "Target Portal" to iSCSI data port 2. Then click "OK".
10. The iSCSI device is connected. Click "Details".
11. Click the "Device" tab, then click "Advanced".
12. Click the "MPIO" tab and set "Load Balance Policy" to "Round Robin".
13. Click "Apply".
14. Run "Device Manager" in Windows. Make sure the MPIO device is available.
15. Done.

The MC/S setup instructions are as follows:

1. Create a VG/UDV, and then attach a LUN.
2. Add the first "Target Portal" on the Microsoft iSCSI initiator. For MC/S, there is only ONE "Target Portal" in the "Discovery" tab.
3. Log on.
4. Then click "Advanced…".
5. Select the first "Source IP" and "Target Portal" to iSCSI data port 1. Then click "OK".
6. After connecting, click "Details", then in the "Session" tab, click "Connections".
7. Choose "Round Robin" in "Load Balance Policy".
8. "Add" a Source Portal for iSCSI data port 2.
9. Select the second "Source IP" and "Target Portal" to iSCSI data port 2. Then select "OK".
10. Done.
System information
NR760A / NR340A / NR330A
SW version: 2.3.1