
Addonics iSCSI Subsystem

ISC8P2G-S

User Manual

Table of Contents

Chapter 1  RAID introduction
  1.1  Features
  1.2  Terminology
  1.3  RAID levels
  1.4  Volume relationship diagram

Chapter 2  Getting started
  2.1  Before starting
  2.2  iSCSI introduction
  2.3  Management methods
    2.3.1  Web GUI
    2.3.2  Console serial port
    2.3.3  Remote control – secure shell
  2.4  Enclosure
    2.4.1  LCM
    2.4.2  System buzzer
    2.4.3  LED

Chapter 3  Web GUI guideline
  3.1  ISC8P2G-S Web GUI hierarchy
  3.2  Login
  3.3  Quick installation
  3.4  System configuration
    3.4.1  System setting
    3.4.2  IP address
    3.4.3  Login setting
    3.4.4  Mail setting
    3.4.5  Notification setting
  3.5  iSCSI configuration
    3.5.1  Entity property
    3.5.2  NIC
    3.5.3  Node
    3.5.4  Session
    3.5.5  CHAP account
  3.6  Volume configuration
    3.6.1  Volume creation wizard
    3.6.2  Physical disk
    3.6.3  RAID group
    3.6.4  Virtual disk
    3.6.5  Snapshot
    3.6.6  Logical unit
    3.6.7  Example
  3.7  Enclosure management
    3.7.1  SES configuration
    3.7.2  Hardware monitor
    3.7.3  Hard drive S.M.A.R.T. function support
    3.7.4  UPS
  3.8  System maintenance
    3.8.1  System information
    3.8.2  Upgrade
    3.8.3  Reset to factory default
    3.8.4  Import and export
    3.8.5  Event log
    3.8.6  Reboot and shutdown
  3.9  Logout

Chapter 4  Advanced operation
  4.1  Rebuild
  4.2  VG migration and expansion
  4.3  UDV extension
  4.7  Support Microsoft MPIO and MC/S

Appendix
  A.  Certification list
  B.  Event notifications
  C.  Known issues
  D.  Microsoft iSCSI Initiator
  E.  MPIO and MC/S setup instructions
  F.  QLogic QLA4010C setup instructions
  G.  Installation steps for large volume (TB)

Chapter 1 RAID Introduction

1.1 Features

The Addonics ISC8P2G-S iSCSI subsystem is a high-performance hardware RAID controller. Its main features include:

• 2 GbE NIC ports.
• iSCSI jumbo frame support.
• RAID 6, 60 ready.
• Snapshot (QSnap) integrated on the subsystem.
• SATA II drives backward compatible.
• One logical volume can be shared by as many as 32 hosts.
• Host access control.
• Configurable N-way mirror for high data protection.
• On-line volume migration with no system downtime.
• HDD S.M.A.R.T. enabled for SATA drives.
• Header/data digest support.
• Microsoft VSS, VDS support.

With proper configuration, the ISC8P2G-S iSCSI subsystem can provide non-stop service with a high degree of fault tolerance using RAID technology and advanced array management features.

The ISC8P2G-S iSCSI subsystem connects to the host system via the iSCSI interface. It can be configured to any RAID level. ISC8P2G-S provides reliable data protection for servers using RAID 6. RAID 6 allows two HDD failures without any impact on the existing data. Data can be recovered from the remaining data and parity drives.

The ISC8P2G-S iSCSI subsystem is the most cost-effective disk array controller, with completely integrated high-performance and data-protection capabilities which meet or exceed the highest industry standards. It is the best data solution for small/medium business (SMB) users.

Snapshot-on-the-box (QSnap) is a fully usable copy of a defined collection of data that contains an image of the data as it appeared at a point in time; it is a point-in-time data replication. It provides consistent and instant copies of data volumes without any system downtime. Addonics Snapshot-on-the-box can keep up to 32 snapshots for all data volumes. A rollback feature is provided for restoring the previous snapshot data easily while the volume remains available for further data access. Data access, including reads and writes, continues in the background without any impact on end users. "On-the-box" implies that it does not require any proprietary agents installed on the host side. The snapshot is taken at the target side and done by the ISC8P2G-S, so it does not consume any host CPU. The snapshot copies can be taken manually or be scheduled every hour or every day.

1.2 Terminology

The document uses the following terms:

RAID: The abbreviation of “Redundant Array of Independent Disks”. There are different RAID levels with different degrees of data protection, data availability, and performance to the host environment.

PD: The Physical Disk is a member disk of one specific volume group.

VG: Volume Group. A collection of removable media. One VG consists of a set of UDVs and owns one RAID level attribute.

UDV: User Data Volume. Each VG can be divided into several UDVs. The UDVs from one VG share the same RAID level, but may have different volume capacities.

CV: Cache Volume. ISC8P2G-S uses the on-board memory as cache. All RAM (except for the part occupied by the controller) can be used as cache. The user can divide the cache for one UDV or share it among all UDVs. Each UDV is associated with one CV for data transactions. Each CV can be assigned a different cache memory size.

LUN: Logical Unit Number. A logical unit number (LUN) is a unique identifier used on an iSCSI connection which enables it to differentiate among separate devices (each of which is a logical unit).

GUI: Graphical User Interface.

RAID width, RAID copy, RAID row (RAID cell in one row): RAID width, copy, and row are used to describe one VG. E.g.:
1. One 4-disk RAID 0 volume: RAID width=4; RAID copy=1; RAID row=1.
2. One 3-way mirroring volume: RAID width=1; RAID copy=3; RAID row=1.
3. One RAID 10 volume over 3 4-disk RAID 1 volumes: RAID width=1; RAID copy=4; RAID row=3.

WT: Write-Through cache write policy. A caching technique in which the completion of a write request is not signaled until data is safely stored on non-volatile media. Data is synchronized in both the data cache and the accessed physical disks.

WB: Write-Back cache write policy. A caching technique in which the completion of a write request is signaled as soon as the data is in cache; actual writing to non-volatile media occurs at a later time. It speeds up system write performance but bears the risk that data may be inconsistent between the data cache and the physical disks for a short time interval.

RO: Set the volume to be Read-Only.

DS: Dedicated Spare disks. The spare disks are only used by one specific VG. Other VGs cannot use these dedicated spare disks for any rebuilding purposes.

GS: Global Spare disks. A GS is shared for rebuilding purposes. If some VGs need to use global spare disks for rebuilding, they can take spare disks from the common spare disk pool for that requirement.

DC: Dedicated Cache.

GC: Global Cache.

DG: DeGraded mode. Not all of the array’s member disks are functioning, but the array is able to respond to application read and write requests to its virtual disks.

S.M.A.R.T.: Self-Monitoring Analysis and Reporting Technology.

SCSI: Small Computer Systems Interface.

WWN: World Wide Name.

HBA: Host Bus Adapter.

MPIO: Multi-Path Input/Output.

MC/S: Multiple Connections per Session.

S.E.S.: SCSI Enclosure Services.

SAF-TE: SCSI Accessed Fault-Tolerant Enclosures.

NIC: Network Interface Card.

iSCSI: Internet Small Computer Systems Interface.

LACP: Link Aggregation Control Protocol.

MTU: Maximum Transmission Unit.

CHAP: Challenge Handshake Authentication Protocol. An optional security mechanism to control access to an iSCSI storage system over the iSCSI data ports.

iSNS: Internet Storage Name Service.

1.3 RAID levels

RAID 0: Disk striping. ISC8P2G-S RAID 0 needs at least two hard drives.

RAID 1: Disk mirroring over two disks. RAID 1 needs at least two hard drives.

N-way mirror: Extension to the RAID 1 level. It has N copies of the disk.

RAID 3: Striping with parity on the dedicated disk. RAID 3 needs at least three hard drives.

RAID 5: Striping with interspersed parity over the member disks. RAID 5 needs at least three hard drives.

RAID 6: 2-dimensional parity protection over the member disks. RAID 6 needs at least four hard drives.

RAID 0+1: Mirroring of the member RAID 0 volumes. RAID 0+1 needs at least four hard drives.

RAID 10: Striping over the member RAID 1 volumes. RAID 10 needs at least four hard drives.

RAID 30: Striping over the member RAID 3 volumes. RAID 30 needs at least six hard drives.

RAID 50: Striping over the member RAID 5 volumes. RAID 50 needs at least six hard drives.

RAID 60: Striping over the member RAID 6 volumes. RAID 60 needs at least eight hard drives.

JBOD: The abbreviation of “Just a Bunch Of Disks”. JBOD needs at least one hard drive.

1.4 Volume relationship diagram

Figure 1.4.1: Volume relationship diagram. LUN 1, LUN 2, and LUN 3 map to VD 1, VD 2, and a snapshot VD; the VDs belong to one RG built from PD 1, PD 2, and PD 3 plus a dedicated spare (DS); each VD uses either the global CV or a dedicated CV, both allocated from RAM.

This is the volume structure Addonics designed, and it describes the relationship of the RAID components. One RG (RAID group) consists of a set of VDs (virtual disks) and owns one RAID level attribute. Each RG can be divided into several VDs. The VDs in one RG share the same RAID level, but may have different volume capacities. Each VD is associated with one specific CV (cache volume) to execute its data transactions. Each CV can be given a different cache memory size by the user. A LUN (Logical Unit Number) is a unique identifier through which users access a VD with SCSI commands.


Chapter 2 Getting started

2.1 Before starting

Before starting, prepare the following items.

1. Check the “Certification list” in Appendix A to confirm that the hardware setting is fully supported.

2. A server or workstation with a NIC or iSCSI HBA.

3. CAT 5e or CAT 6 network cables for the web GUI IP port and the iSCSI data ports. We recommend CAT 6 cables for best performance.

4. Prepare a storage system configuration plan.

5. Network information for the management (web GUI IP) port and the iSCSI data ports. When using static IP, prepare the static IP addresses, subnet mask, and default gateway.

6. Gigabit LAN switches. (Recommended)

7. CHAP security information, including CHAP usernames and secrets. (Optional)

8. Set up the hardware connection before powering up the servers and the ISC8P2G-S iSCSI subsystem. Connect the web GUI IP port cable and the iSCSI data port cables first.

2.2 iSCSI introduction

iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer System Interface) commands and data in TCP/IP packets for linking storage devices with servers over common IP infrastructures. iSCSI provides high-performance SANs over standard IP networks like a LAN, WAN, or the Internet.

IP SANs are true SANs (Storage Area Networks) which allow servers to attach to an infinite number of storage volumes by using iSCSI over TCP/IP networks. IP SANs can scale the storage capacity with any type and brand of storage system. IP SANs also include mechanisms for security, data replication, multi-path, and high availability.

A storage protocol such as iSCSI has “two ends” in the connection: the initiator and the target. In iSCSI they are called the iSCSI initiator and the iSCSI target. The iSCSI initiator requests or initiates any iSCSI communication. It requests all SCSI operations such as read or write. An initiator is usually located on the host/server side (either an iSCSI HBA or an iSCSI software initiator).

The iSCSI target is the storage device itself or an appliance which controls and serves volumes or virtual volumes. The target is the device which performs SCSI commands or bridges them to an attached storage device. iSCSI targets can be disks, tapes, RAID arrays, tape libraries, and so on.

Figure 2.2.1: Two hosts, Host 1 using a NIC with a software initiator and Host 2 using an iSCSI HBA, connect through an IP SAN to two iSCSI devices (targets).

The host side needs an iSCSI initiator. The initiator is a driver which handles the SCSI traffic over iSCSI. The initiator can be software or hardware (HBA). Refer to the certification list of iSCSI HBAs in Appendix A. OS-native initiators or other software initiators use the standard TCP/IP stack and Ethernet hardware, while iSCSI HBAs use their own iSCSI and TCP/IP stacks.

Hardware iSCSI HBAs provide their own initiator tools; refer to the vendors’ HBA user manuals. Microsoft, Linux, and Mac provide software iSCSI initiator drivers.

Below are the available links:

1. Link to download the Microsoft iSCSI software initiator:

http://www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en

Refer to Appendix D for the Microsoft iSCSI initiator installation procedure.

2. A Linux iSCSI initiator is also available. For different kernels, there are different iSCSI drivers. Check Appendix A for the software iSCSI initiator certification list. If you need the latest Linux iSCSI initiator, please visit the Open-iSCSI project for the most up-to-date information. The Linux-iSCSI (sfnet) and Open-iSCSI projects merged on April 11, 2005.

Open-iSCSI website: http://www.open-iscsi.org/
Open-iSCSI README: http://www.open-iscsi.org/docs/README
Google groups: http://groups.google.com/group/open-iscsi/threads?gvc=2 and http://groups.google.com/group/open-iscsi/topics

3. globalSAN iSCSI Initiator for OS X: http://www.studionetworksolutions.com/products/product_detail.php?t=more&pi=11

The ATTO iSCSI initiator is also available for Mac.

Website: http://www.attotech.com/xtend.html
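To illustrate the initiator side described above, here is a minimal sketch of connecting a Linux host to the subsystem with the Open-iSCSI command-line tools. The iscsiadm commands are standard Open-iSCSI; the portal IP 192.168.1.50 follows the examples used elsewhere in this manual, and the IQN shown is only a placeholder for the target name reported on the “Node” page.

# Discover the targets offered by an iSCSI data port of the subsystem
$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target (replace the placeholder IQN with the real node name)
$ sudo iscsiadm -m node -T iqn.example:placeholder-target -p 192.168.1.50 --login

# The attached LUNs now appear as local SCSI disks, e.g. /dev/sdb
$ lsblk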

2.3 Management methods

There are three methods to manage the ISC8P2G-S iSCSI subsystem:

2.3.1 Web GUI

The ISC8P2G-S supports a graphical user interface (GUI) to manage the system. The default setting of the web GUI port IP is DHCP, and the DHCP IP address is displayed on the LCM. Check the LCM for the IP address first, then open a web browser and type that address. (The DHCP address is dynamic, and you may need to check it again after every reboot.)

E.g., the LCM shows:

192.168.1.50
Addonics ISC8P2G-S

Then browse to http://192.168.1.50.

Move the cursor over any of the function blocks on the left side of the web page; a dialog box opens to authenticate the current user.

Login as Administrator:

Login name: admin

Default password: supervisor

Login as Read-Only account:

Login name: user

Default password: 1234

Note: The read-only account can only view the configuration and cannot change any settings.

2.3.2 Console serial port (Optional)

Use a null modem cable to connect to the console port.

The console settings are baud rate: 115200, 8 data bits, 1 stop bit, and no parity.

Terminal type: vt100

Login name: admin

Default password: supervisor
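From a Linux management host, the serial console can be opened with a standard terminal program. This is only a hedged example: the device path /dev/ttyUSB0 depends on the serial adapter in use, and screen is just one of several suitable tools.

# 115200 baud; 8 data bits, no parity, 1 stop bit are screen's defaults
$ screen /dev/ttyUSB0 115200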

2.3.3 Remote control – secure shell

SSH (secure shell) is required for remote login to the ISC8P2G-S. SSH client software is available at the following web sites:

SSHWinClient WWW: http://www.ssh.com/

Putty WWW: http://www.chiark.greenend.org.uk/

Host name: 192.168.1.50 (Check your DHCP address for this field.)

Login name: admin

Default password: supervisor

E.g.

$ ssh admin@192.168.1.50

Tips

The ISC8P2G-S only supports SSH for remote control. When using SSH, the IP address and the password are required for login.

2.4 Enclosure

2.4.1 LCM

There are four buttons to control the ISC8P2G-S LCM (LCD Control Module): up, down, ESC (Escape), and ENT (Enter).

After booting up the system, the following screen shows the web GUI port IP and the model name:

192.168.1.50
Addonics ISC8P2G-S

Press “ENT”.

The LCM functions are: “Alarm Mute”, “Reset/Shutdown”, “Quick Install”, “View IP Setting”, “Change IP Config”, and “Reset to Default”. To move between the menus, press the up or down buttons.

When a WARNING or ERROR is detected within the device, the LCM displays the event log to provide users with more details.

The following table describes each function.

System Info.: Display system information.

Alarm Mute: Mute the alarm when an error occurs.

Reset/Shutdown: Reset or shut down the ISC8P2G-S.

Quick Install: Three quick steps to create a volume. Refer to section 3.3 for the procedure using the web GUI.

Volume Wizard: Smart steps to create a volume. Refer to the next chapter for the operation in the web UI.

View IP Setting: Display the current IP address, subnet mask, and gateway.

Change IP Config: Set the IP address, subnet mask, and gateway. There are 2 selections, DHCP (get IP address from a DHCP server) or static IP.

Reset to Default: Reset the password to the default (supervisor) and set the IP address to the default DHCP setting.

Example:

Default IP address: 192.168.1.50 (DHCP)

Default subnet mask: 255.255.255.0

Default gateway: 192.168.1.254

The following is the LCM menu hierarchy.

Addonics Technology (main screen)
  [System Info.]
    [Firmware Version x.x.x]
    [RAM Size xxx MB]
  [Alarm Mute]
    [Yes / No]
  [Reset/Shutdown]
    [Reset]  [Yes / No]
    [Shutdown]  [Yes / No]
  [Quick Install]
    RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1  (xxx GB)
      [Apply The Config]  [Yes / No]
  [Volume Wizard]
    [Local] / [JBOD x]
      RAID 0 / RAID 1 / RAID 3 / RAID 5 / RAID 6 / RAID 0+1
        [Use default algorithm]  [new x disk]  xxx GB
        [Volume Size]  xxx GB  (adjust volume size)
          [Apply The Config]  [Yes / No]
  [View IP Setting]
    [IP Config]  [Static IP] / [DHCP]
    [IP Address]  [192.168.010.050]
    [IP Subnet Mask]  [255.255.255.0]
    [IP Gateway]  [192.168.010.254]
  [Change IP Config]
    [DHCP] / [Static IP]
      [IP Address]  (adjust IP address)
      [IP Subnet Mask]  (adjust subnet mask)
      [IP Gateway]  (adjust gateway)
      [Apply IP Setting]  [Yes / No]
  [Reset to Default]
    [Yes / No]

Caution

Before powering off, it is recommended to execute “Shutdown” to flush the data from cache to the physical disks.

2.4.2 System buzzer

The system buzzer features are described below:

1. The system buzzer alarms for 1 second when the system boots up successfully.

2. The system buzzer alarms continuously when an error event happens. To stop the alarm, use the alarm mute option.

3. The alarm is muted automatically when the error situation is resolved. E.g., when a RAID 5 array is degraded, the alarm rings. After a user changes/adds one physical disk for rebuilding, and the rebuilding is done, the alarm is muted automatically.

2.4.3 LED

The LED features are described as follows:

1. POWER LED: Hardware-activated LED; lit when the system is powered on.

2. BUSY LED: Hardware-activated LED; lit when the front-end channel is busy.

3. System STATUS LED: Indicates the system status. When an error occurs or the RAID is degraded, the LED lights up.

Chapter 3 Web GUI guideline

3.1 ISC8P2G-S Web GUI Hierarchy

The table below shows the hierarchy of the ISC8P2G-S web GUI.

Quick installation → Step 1 / Step 2 / Confirm

System configuration
  System setting → System name / Date and time
  IP address → MAC address / Address / DNS / Port
  Login setting → Login configuration / Admin password / User password
  Mail setting → Mail
  Notification setting → SNMP / Messenger / System log server / Event log filter

iSCSI configuration
  Entity property → Entity name / iSNS IP
  NIC → Aggregation / IP settings for iSCSI ports / Become default gateway / Enable jumbo frame
  Node → Create / Authenticate / Rename / User / Delete
  Session → Session information / Delete
  CHAP account → Create / Delete

Volume configuration
  Volume create wizard → Step 1 / Step 2 / Step 3 / Step 4 / Confirm
  Physical disk → Set Free disk / Set Global spare / Set Dedicated spare / Set property / More information
  RAID group → Create / Migrate / Activate / Deactivate / Scrub / Delete / Set disk property / More information
  Virtual disk → Create / Extend / Scrub / Delete / Set property / Attach LUN / Detach LUN / List LUN / Set snapshot space / Cleanup snapshot / Take snapshot / Auto snapshot / List snapshot / More information
  Snapshot → Cleanup snapshot / Auto snapshot / Take snapshot / Export / Rollback / Delete
  Logical unit → Attach / Detach

Enclosure management
  SES configuration → Enable / Disable
  Hardware monitor → Auto shutdown
  S.M.A.R.T. → S.M.A.R.T. information (only for SATA disks)
  UPS → UPS Type / Shutdown battery level / Shutdown delay / Shutdown UPS

Maintenance
  System information → System information
  Upgrade → Browse the firmware to upgrade / Export configuration
  Reset to default → Sure to reset to factory default?
  Import and export → Import/Export / Import file
  Event log → Download / Mute / Clear
  Reboot and shutdown → Reboot / Shutdown

Logout → Sure to logout?

3.2 Login

On the web browser, type the IP address shown on the LCM display.

After login, you can choose the “Quick installation” function block on the left side of the window to do the configuration.

Figure 3.2.3

There are four indicators at the top-right corner of the web GUI.

Figure 3.2.4

1. RAID light: Green means the RAID array is functioning correctly. Red represents RAID failure or degradation.

2. Temperature light: Green is normal. Red represents abnormal temperature.

3. Voltage light: Green is normal. Red represents abnormal voltage status.

4. UPS light: Green is normal. Red represents abnormal UPS status.

3.3 Quick Installation

The “Quick installation” function is used to create a volume.

The ISC8P2G-S Quick Installation function has a smart policy. When the system is full, meaning all 8 HDDs are connected and all HDDs have the same size, Quick Installation lists all possible configurations and sizes among the different RAID level options and will use all available HDDs for the RAID level the user selects.

When the system contains HDDs of different sizes, e.g., 6*200GB HDDs and 2*80GB HDDs, ISC8P2G-S also lists all possible combinations of the different RAID levels and sizes, and you may observe that some HDDs are not used (Free status).

Step 1: Click “Quick installation”, then choose the RAID level. After choosing the RAID level, proceed to the next page.

Figure 3.3.1

Step 2: Confirm use of the default algorithm.

Step 3: Select the default volume size.

Step 4: Confirm page. Verify that all setups are correct and confirm; a VD will then be created.

Done. You can start to use the system now.

Figure 3.3.2

(Figure 3.3.2: A RAID 0 virtual disk with the VD name “QUICK83716”, named by the system itself, with a total available volume size of 297GB.)

3.4 System configuration

“System configuration” is used for setting up the “System setting”, “IP address”, “Login setting”, “Mail setting”, and “Notification setting”.

Figure 3.4.1

3.4.1 System setting

“System setting” can set the system name and date. The default “System name” is the model name, e.g., ISC8P2G-S. You can modify the system name.

Figure 3.4.1.1

Check “Change date and time” to set up the current date, time, and time zone before use, or to synchronize the time from an NTP (Network Time Protocol) server.
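Before entering an NTP server address, it can help to confirm that the server is reachable from your network. A minimal check from a Linux host, assuming the ntpdate utility is installed (the server name is only an example):

# Query only; prints the clock offset without setting the time
$ ntpdate -q pool.ntp.org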


3.4.2 IP address

“IP address” enables you to change the IP address for remote administration usage. There are 2 options, DHCP (get IP address from a DHCP server) or static IP. The default setting is DHCP. The user can change the HTTP, HTTPS, and SSH port numbers when the default port numbers are not allowed on the host/server.

Figure 3.4.2.1

3.4.3 Login setting

“Login setting” enables you to set single admin management, the auto logout time, and the Admin/User passwords. Single admin management can prevent multiple users from accessing the same ISC8P2G-S at the same time.

1. Auto logout: The options are (1) Disable; (2) 5 minutes; (3) 30 minutes; (4) 1 hour. The system will log out automatically when the user has been inactive for the set period of time.

2. Login lock: Disable/Enable. When the login lock is enabled, the system allows only one user to log in or modify the system settings.

Figure 3.4.3.1

Check “Change admin password” or “Change user password” to change the admin or user password. The maximum length of the password is 12 characters.

3.4.4 Mail setting

“Mail setting” lets you enter at most 3 mail addresses for receiving event notifications. Some mail servers check the “Mail-from address” and need authentication for anti-spam. Fill in the necessary fields and click “Send test mail” to test whether the email function is available. Users can also select which levels of event logs will be sent via mail. The default setting only enables ERROR and WARNING event logs.

Figure 3.4.4.1


3.4.5 Notification setting

“Notification setting” can set up SNMP traps for alerting via SNMP, pop-up messages via Windows Messenger (not MSN), alerts via the syslog protocol, and the event log filter.

Figure 3.4.5.1

“SNMP” allows up to 3 SNMP trap addresses. The default community setting is “public”. The user can choose the event log levels; the default setting only enables the INFO event log for SNMP. There are many SNMP tools. The following web sites are for your reference:

SNMPc: http://www.snmpc.com/

Net-SNMP: http://net-snmp.sourceforge.net/
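To verify that traps from the subsystem reach a management host, a Net-SNMP trap daemon can be run in the foreground. This is a hedged sketch assuming the net-snmp tools are installed on that host and the default community “public” is kept:

# Allow traps with the "public" community to be logged, then run the
# trap daemon in the foreground and print received traps to stdout
$ echo 'authCommunity log public' | sudo tee -a /etc/snmp/snmptrapd.conf
$ sudo snmptrapd -f -Lo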

To use “Messenger”, the user must enable the “Messenger” service in Windows (Start → Control Panel → Administrative Tools → Services → Messenger); event logs can then be received. It allows up to 3 messenger addresses. The user can choose the event log levels; the default setting enables the WARNING and ERROR event logs.

Using “System log server”, the user can choose the facility and the event log level. The default port of syslog is 514. The default setting enables the INFO, WARNING, and ERROR event log levels.

There are several syslog server tools. The following web sites are for your reference:

WinSyslog: http://www.winsyslog.com/

Kiwi Syslog Daemon: http://www.kiwisyslog.com/

Most UNIX systems have a built-in syslog daemon.
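As an example of a receiving host, the sketch below configures rsyslog on a Linux server to listen on the default UDP port 514. The file path and the use of rsyslog are assumptions; any syslog daemon that accepts remote UDP messages will work.

# /etc/rsyslog.d/10-remote.conf (legacy rsyslog directives)
$ModLoad imudp
$UDPServerRun 514

# Restart the daemon afterwards, e.g.:
$ sudo systemctl restart rsyslog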

The “Event log filter” setting can enable event levels for “Pop up events” and the “LCM”.

3.5 iSCSI configuration

“iSCSI configuration” is designed for setting up the “Entity property”, “NIC”, “Node”, “Session”, and “CHAP account”.

Figure 3.5.1

3.5.1 Entity property

“Entity property” enables you to view the entity name of the ISC8P2G-S and to set the “iSNS IP” for iSNS (Internet Storage Name Service). The iSNS protocol allows automated discovery, management, and configuration of iSCSI devices on a TCP/IP network. If using iSNS, you need to install an iSNS server in the SAN. Add an iSNS server IP address to the iSNS server list so that the iSCSI initiator service can send queries. The entity name of the ISC8P2G-S cannot be changed.

Figure 3.5.1.1

3.5.2 NIC

“NIC” can change the IP addresses of the iSCSI data ports. The ISC8P2G-S has two gigabit LAN ports to transmit data.

Figure 3.5.2.2

(Figure 3.5.2.2: ISC8P2G-S has 2 iSCSI data ports. Each of them is set to dynamic IP.)

IP settings:

Users can change the IP address by moving the cursor to the gray button of the LAN port and selecting “IP settings for iSCSI ports” from the drop-down menu. There are 2 selections, DHCP (get IP address from a DHCP server) or static IP.


Figure 3.5.2.4


Default gateway:

The default gateway can be changed by moving the cursor to the gray button of the LAN port and clicking “Become default gateway”.

MTU / Jumbo frame:

A larger MTU (Maximum Transmission Unit) size can be enabled by moving the cursor to the gray button of the LAN port and clicking “Enable jumbo frame”.

Caution

Jumbo frames (the larger MTU size) must also be enabled on the switching hub and on the host’s HBA or NIC. Otherwise, the LAN connection cannot work properly.
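For example, on a Linux host the NIC used for iSCSI can be switched to a 9000-byte MTU with the ip tool; the interface name eth0 is only a placeholder, and the switch ports in the path must also allow jumbo frames.

$ sudo ip link set dev eth0 mtu 9000
$ ip link show eth0        # confirm the new MTU is in effect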

3.5.3 Node

“Node” enables you to view the target name for the iSCSI initiator. ISC8P2G-S supports a single node. The node name of the ISC8P2G-S cannot be changed.

Figure 3.5.3.1

(Figure 3.5.3.1: ISC8P2G-S, single-mode.)

CHAP:

CHAP is the abbreviation of Challenge Handshake Authentication Protocol. CHAP is a strong authentication method used in point-to-point connections for user login. It is a type of authentication in which the authentication server sends the client a key to be used for encrypting the username and password. CHAP enables the username and password to be transmitted in an encrypted form for protection.

To use CHAP authentication, follow the steps below.

1. Move the cursor to the gray button of the node and click “Authenticate”.

2. Select “CHAP”.

3. Confirm the setting.

Figure 3.5.3.7

Figure 3.5.3.8

4. Go to the “/ iSCSI configuration / CHAP account” page to create a CHAP account. Refer to the next section for more detail.

5. On the “Authenticate” page, select “None” to disable CHAP.

Tips

After setting CHAP, the initiator on the host/server must be configured with the same CHAP account. Otherwise, the user cannot log in.
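On a Linux host using Open-iSCSI, the matching CHAP settings are stored in the node record before logging in. This is a hedged sketch; the IQN, username, and secret are placeholders for the values created on the “CHAP account” page.

$ sudo iscsiadm -m node -T iqn.example:placeholder-target -p 192.168.1.50 \
    --op update -n node.session.auth.authmethod -v CHAP
$ sudo iscsiadm -m node -T iqn.example:placeholder-target -p 192.168.1.50 \
    --op update -n node.session.auth.username -v chap1
$ sudo iscsiadm -m node -T iqn.example:placeholder-target -p 192.168.1.50 \
    --op update -n node.session.auth.password -v your-chap-secret
$ sudo iscsiadm -m node -T iqn.example:placeholder-target -p 192.168.1.50 --login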


3.5.4 Session

“Session” can display iSCSI session and connection information, including the following items:

1. Host (Initiator Name)

2. Error Recovery Level

3. Error Recovery Count

4. Detail of Authentication status and Source IP: port number.

Figure 3.5.4.1

(Figure 3.5.4.1: iSCSI Session.)

To view more information, move the cursor to the gray button of the session number and click “List connection”. It will list all connection(s) of the session.

Figure 3.5.4.2

(Figure 3.5.4.2: iSCSI Connection.)
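The same information can be cross-checked from the host side; with a Linux Open-iSCSI initiator, for instance, the active sessions and their connections can be listed as follows (a hedged example):

$ sudo iscsiadm -m session        # one line per active session
$ sudo iscsiadm -m session -P 3   # detailed view, including connection state and attached disks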

3.5.5 CHAP account

“CHAP account” allows you to manage a CHAP account for authentication. The ISC8P2G-S can create only one CHAP account.

To set up a CHAP account, follow the steps below.

1. Click “Create”.

2. Enter “User”, “Secret”, and “Confirm” the secret again.

3. Confirm the setting.

Figure 3.5.5.3

Figure 3.5.5.4

(Figure 3.5.5.4: created a CHAP account named “chap1”.)

4. Click “Delete” to delete CHAP account.

3.6 Volume configuration

“Volume configuration” is designed for setting up the volume configuration, which includes “Volume create wizard”, “Physical disk”, “RAID group”, “Virtual disk”, “Snapshot”, and “Logical unit”.

Figure 3.6.1


3.6.1 Volume creation wizard

The “Volume creation wizard” has a smart policy. When the system is full, meaning all 8 HDDs are connected and all HDDs have the same size, the wizard lists all possible configurations and sizes among the different RAID level options and will use all available HDDs for the RAID level the user selects.

When the system contains HDDs of different sizes, e.g., 6*200GB HDDs and 2*80GB HDDs, ISC8P2G-S also lists all possible combinations of the different RAID levels and sizes, and you may observe that some HDDs are not used (Free status).

The wizard steps are the same as those of “Quick installation” described in section 3.3: choose the RAID level, use the default algorithm, select the volume size, and confirm to create the VD.

3.6.2 Physical disk

“Physical disk” allows you to view the status of the hard drives in the system. The following are operational tips:

1. Move the cursor to the gray button next to the drive number under “Slot”. It will show the functions that can be executed.

2. Active functions can be selected, but inactive functions are grayed out.

For example, to set PD slot number 11 as a dedicated spare disk:

Step 1: Move the cursor to the gray button of PD 11 and select “Set Dedicated spare”; it will link to the next page.


Figure 3.6.2.1


Step 2: Select the RG(s) for which you want this drive to be set as a dedicated spare disk, then confirm.

Figure 3.6.2.2

Done. View “Physical disk ” page.


Figure 3.6.2.3

(Figure 3.6.2.3: Physical disks of slot 1,2,3 are created for a RG named “RG-R5”. Slot 4 is set as dedicated spare disk of RG named “RG-R5”. The others are free disks.)

• PD column description:

Slot: The position of the hard drive. The button next to the slot number shows the functions which can be executed.

Size (GB): Capacity of the hard drive.

RG Name: Related RAID group name.

Status: The status of the hard drive.
  “Online” → the hard drive is online.
  “Rebuilding” → the hard drive is being rebuilt.
  “Transition” → the hard drive is being migrated or is replaced by another disk when rebuilding occurs.
  “Missing” → the hard drive has already joined a RG but is not plugged into the disk tray of the current system.

Health: The health of the hard drive.
  “Good” → the hard drive is good.
  “Failed” → the hard drive failed.
  “Error Alert” → S.M.A.R.T. error alert.
  “Read Errors” → the hard drive has unrecoverable read errors.

Usage:
  “RD” → RAID Disk. This hard drive has been set to RAID.
  “FR” → FRee disk. This hard drive is free for use.
  “DS” → Dedicated Spare. This hard drive has been set as the dedicated spare of a RG.
  “GS” → Global Spare. This hard drive has been set as a global spare of all RGs.
  “RS” → ReServed. The hard drive contains RG information but cannot be used. It may be caused by an incomplete RG set, or by hot plugging this disk while there was a data transfer. In order to protect the data on the disk, the status changes to reserved. It can be reused after setting it to “FR” manually.

Vendor: Hard drive vendor.

Serial: Hard drive serial number.

Type: Hard drive type.
  “SATA” → SATA disk.
  “SATA2” → SATA II disk.
  “SAS” → SAS disk.

Write cache: Hard drive write cache is enabled or disabled.

Standby: HDD auto spin-down to save power. The default value is disabled.

• PD operations description:

Set Free disk: Make the selected hard drive free for use.

Set Global spare: Set the selected hard drive as a global spare of all RGs.

Set Dedicated spare: Set the hard drive as a dedicated spare of the selected RG(s).

Set property: Change the status of write cache and standby.
  Write cache options: “Enabled” → enable disk write cache. “Disabled” → disable disk write cache.
  Standby options: “Disabled” → disable spin-down. “30 sec / 1 min / 5 min / 30 min” → enable hard drive auto spin-down to save power.

More information: Show hard drive detailed information.

3.6.3 RAID group

“RAID group” allows you to view the status of each RAID group. The following is an example of creating a RG.

Step 1: Click “Create”, enter a “Name”, choose a “RAID level”, select the PD slot(s) to use, then confirm.

Figure 3.6.3.1


Step 2: Confirm page. Verify the settings and confirm if all setups are correct.

Figure 3.6.3.2

(Figure 3.6.3.2: There is a RAID 0 with 4 physical disks, named “RG-R0”, with a total size of 135GB. Another is a RAID 5 with 3 physical disks, named “RG-R5”.)

Done. View “RAID group ” page.

• RG column description:

No.: Number of the RAID group. The button next to the No. shows the functions which can be executed.

Name: RAID group name.

Total (GB): Total capacity of this RAID group.

Free (GB): Free capacity of this RAID group.

#PD: The number of physical disks in the RAID group.

#VD: The number of virtual disks in the RAID group.

Status: The status of the RAID group.
  “Online” → the RAID group is online.
  “Offline” → the RAID group is offline.
  “Rebuild” → the RAID group is rebuilding.
  “Migrate” → the RAID group is being migrated.
  “Scrub” → the RAID group is being scrubbed.

Health: The health of the RAID group.
  “Good” → the RAID group is good.
  “Failed” → a hard drive failed.
  “Degraded” → RAID volume failure. The reason could be a disk failure.

RAID: The RAID level of the RAID group.

• RG operations description:

Create: Create a RAID group.

Migrate: Migrate a RAID group. Refer to the next chapter for more detail.

Activate: Activate a RAID group; it can be executed when the RG status is offline. This is for online roaming purposes.

Deactivate: Deactivate a RAID group; it can be executed when the RG status is online. This is for online roaming purposes.

Scrub: Scrub a RAID group. It is a parity regeneration. It supports RAID 3 / 5 / 6 / 30 / 50 / 60 only.

Delete: Delete a RAID group.

Set disk property: Change the disk status of write cache and standby.
  Write cache options: “Enabled” → enable disk write cache. “Disabled” → disable disk write cache.
  Standby options: “Disabled” → disable spin-down. “30 sec / 1 min / 5 min / 30 min” → enable hard drive auto spin-down to save power.

More information: Show RAID group detailed information.

3.6.4 Virtual disk

“Virtual disk” allows you to view the status of each virtual disk. The following is an example of creating a VD.

Step 1: Click “Create”, enter a “Name”, choose the “RG name”, “Stripe height (KB)”, “Block size (B)”, “Read/Write” mode, “Priority”, and “Bg rate” (background task priority), and change “Capacity (GB)” if necessary. Then confirm.

Figure 3.6.4.1

Step 2: Confirm page. Verify the settings and confirm if all setups are correct.

Done. View “Virtual disk ” page.

• VD column description:

No.: Number of this virtual disk. The button next to the VD No. shows the functions which can be executed.

Name: Virtual disk name.

Size (GB): Total capacity of the virtual disk.

Right:
  “WT” → Write Through.
  “WB” → Write Back.
  “RO” → Read Only.

Priority:
  “HI” → HIgh priority.
  “MD” → MiD priority.
  “LO” → LOw priority.

Bg rate: Background task priority. “4 / 3 / 2 / 1 / 0” → the default value is 4. The higher the background priority of a VD, the more background I/O will be scheduled to execute.

Status: The status of the virtual disk.
  “Online” → the virtual disk is online.
  “Offline” → the virtual disk is offline.
  “Initiating” → the virtual disk is being initialized.
  “Rebuild” → the virtual disk is being rebuilt.
  “Migrate” → the virtual disk is being migrated.
  “Rollback” → the virtual disk is being rolled back.
  “Scrub” → the virtual disk is being scrubbed.

Health: The health of the virtual disk.
  “Optimal” → the virtual disk is operating and has experienced no disk failures that would compromise the RG.
  “Degraded” → at least one disk which is part of the virtual disk has been marked as failed or has been unplugged.
  “Missing” → the virtual disk has been marked as missing by the system.
  “Failed” → the virtual disk has experienced enough disk failures for unrecoverable data loss to occur.
  “Part optimal” → the virtual disk has experienced disk failures.

R %: Ratio of initializing or rebuilding.

RAID: The RAID level that the virtual disk is using.

#LUN: Number of LUN(s) that the virtual disk is attached to.

Snapshot (MB): The virtual disk space used for snapshots. The number means “used snapshot space” / “total snapshot space”. The unit is megabytes (MB).

#Snapshot: Number of snapshot(s) that have been taken of the virtual disk.

RG name: The virtual disk’s RG name.

• VD operations description:

Extend: Extend the virtual disk capacity.

Scrub: Scrub the virtual disk. It is a parity regeneration. It supports RAID 3 / 5 / 6 / 30 / 50 / 60 only.

Delete: Delete the virtual disk.

Set property: Change the VD name, access rights, priority, and bg rate.
  Access rights options: “WT” → Write Through. “WB” → Write Back. “RO” → Read Only.
  Priority options: “HI” → HIgh priority. “MD” → MiD priority. “LO” → LOw priority.
  Bg rate options: “4 / 3 / 2 / 1 / 0” → the default value is 4. The higher the background priority of a VD, the more background I/O will be scheduled to execute.

Attach LUN: Attach a LUN to the virtual disk.

Detach LUN: Detach a LUN from the virtual disk.

List LUN: List the attached LUN(s).

Set snapshot space: Set the snapshot space for executing snapshots. Refer to the next chapter for more detail.

Cleanup snapshot: Clean all snapshot VDs related to the virtual disk and release the snapshot space.

Take snapshot: Take a snapshot of the virtual disk.

Auto snapshot: Set auto snapshot on the virtual disk.

List snapshot: List all snapshot VDs related to the virtual disk.

More information: Show virtual disk detailed information.

3.6.5 Snapshot

“Snapshot” allows you to view the status of snapshots. Refer to the next chapter for more detail about the snapshot concept. The following is an example of creating a snapshot.

Figure 3.6.5.2

(Figure 3.6.5.2: “VD-01” snapshot space has been created; the snapshot space is 15360MB, and 263MB is used for saving the snapshot index.)

Step 3: Take a snapshot. In “/ Volume configuration / Snapshot”, click “Take snapshot”. It will link to the next page. Enter a snapshot name.

Figure 3.6.5.3

Step 4: Export the snapshot VD. Move the cursor to the gray button next to the snapshot VD number and click “Export”. Enter a capacity for the snapshot VD. If the size is zero, the exported snapshot VD will be read-only. Otherwise, the exported snapshot VD can be read and written, and the size will be the maximum capacity for reading/writing.


Figure 3.6.5.4


Figure 3.6.5.5

(Figure 3.6.5.5: This is the list of “VD-01”. There are two snapshots in “VD-01”. Snapshot VD “SnapVD-01” is exported as read-only; “SnapVD-02” is exported as read/write.)

Step 5: Attach a LUN to the snapshot VD. Refer to the next section for attaching a LUN.

Done. Snapshot VD can be used.

• Snapshot column description:

No.: Number of this snapshot VD. The button next to the snapshot VD No. shows the functions which can be executed.

Name: Snapshot VD name.

Used (MB): The amount of snapshot space that has been used.

Exported: Whether the snapshot VD is exported or not.

Right: The access right of the exported snapshot VD.
  “RW” → Read / Write. The snapshot VD can be read and written.
  “RO” → Read Only. The snapshot VD can only be read.

#LUN: Number of LUN(s) that the snapshot VD is attached to.

Created time: Snapshot VD creation time.

• Snapshot operations description:

Export: Export the snapshot VD.

Rollback: Roll back the snapshot VD to the original.

Delete: Delete the snapshot VD.

Attach: Attach the snapshot VD to a LUN.

Detach: Detach the snapshot VD from a LUN.

List LUN: List the attached LUN(s).

3.6.6 Logical unit

“Logical unit” allows you to view the attached logical unit number(s) of each VD.
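Once a LUN has been attached to a VD, the initiator host sees it as a new SCSI disk. A hedged example of bringing it into use on a Linux host follows; device names such as /dev/sdb are placeholders and will differ per system.

$ sudo iscsiadm -m session --rescan   # rescan existing iSCSI sessions for new LUNs
$ lsblk                               # identify the newly attached disk, e.g. /dev/sdb
$ sudo mkfs.ext4 /dev/sdb             # create a filesystem (erases any data on that disk)
$ sudo mount /dev/sdb /mnt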

3.6.7 Example

The following is an example of creating volumes. Example 1 creates two VDs and sets a global spare disk.

• Example 1

Example 1 creates two VDs in one RG; each VD uses the global cache volume. The global cache volume is created automatically after the system boots up, so no action is needed to set a CV. A global spare disk is then set. Eventually, all of them are deleted.

Step 1: Create a RG (RAID group).

To create the RAID group, follow the steps below:

Figure 3.6.7.1

1. Select “/ Volume configuration / RAID group”.

2. Click “Create”.

3. Input a RG name, choose a RAID level from the list, select the RAID PD slot(s), then confirm.

4. Check the outcome and confirm if all setups are correct.

5. Done. A RG has been created.


Figure 3.6.7.4

(Figure 3.6.7.4: Create VDs named “VD-R5-1” and “VD-R5-2”. Regarding “RG-R5”, the size of “VD-R5-1” is 100GB and the size of “VD-R5-2” is 48GB. “VD-R5-1” is initializing, at about 3%. There is no LUN attached.)

Step 3: Attach LUN to VD.

Figure 3.6.7.6

(Figure 3.6.7.6: VD-R5-1 is attached to LUN 0. VD-R5-2 is attached to LUN 1.)

Step 4: Set global spare disk.

Figure 3.6.7.7

(Figure 3.6.7.7: Slot 4 is set as a global spare disk.)

Step 5: Done. They can be used as disks.

To delete the VDs and the RG, follow the steps listed below.

Step 6: Detach the LUN from the VD.

In “/ Volume configuration / Logical unit”:

Figure 3.6.7.8

1. Move the cursor to the gray button next to the LUN and click “Detach”. A confirmation page will pop up.

2. Choose “OK”.

3. Done.

Step 7: Delete the VD (virtual disk).

To delete a virtual disk, follow this procedure:

1. Select “/ Volume configuration / Virtual disk”.

2. Move the cursor to the gray button next to the VD number and click “Delete”. A confirmation page will pop up; click “OK”.

3. Done. The VDs are then deleted.

Tips

When deleting a VD, the attached LUN(s) related to this VD will be detached automatically.


Step 8: Delete RG (RAID group).

To delete the RAID group, follow the steps below:

1. Select “/ Volume configuration / RAID group”.

2. Select a RG which has no VD related to it; otherwise the VD(s) on this RG must be deleted first.

3. Move the cursor to the gray button next to the RG number and click “Delete”.

4. A confirmation page will pop up; click “OK”.

5. Done. The RG has been deleted.

Tips

Deleting a RG will succeed only when all of the related VD(s) in this RG have been deleted. Otherwise, an error will occur when deleting the RG.

Step 9: Free the global spare disk.

To free global spare disks, follow the steps below.

1. Select “/ Volume configuration / Physical disk”.

2. Move the cursor to the gray button next to the PD slot and click “Set Free disk”.

Step 10: Done, all volumes have been deleted.

3.7 Enclosure management

The “Enclosure management” function allows managing the enclosure information, including “SES config”, “Hardware monitor”, “S.M.A.R.T.”, and “UPS”. The enclosure management provides sensors for different purposes, such as temperature sensors, voltage sensors, hard disks, fan sensors, power sensors, and LED status. Because the hardware characteristics differ among these sensors, different sensors have different polling intervals. Below are the details of the polling time intervals:

1. Temperature sensors: 1 minute.

2. Voltage sensors: 1 minute.

3. Hard disk sensors: 10 minutes.

4. Fan sensors: 10 seconds. After 3 consecutive errors, ISC8P2G-S sends an ERROR event log.

5. Power sensors: 10 seconds. After 3 consecutive errors, ISC8P2G-S sends an ERROR event log.

6. LED status: 10 seconds.

Figure 3.7.1

3.7.1 SES configuration

SES represents SCSI Enclosure Services, one of the enclosure management standards. The “SES config” function allows you to enable or disable the management of SES.

Figure 3.7.1.1

(Figure 3.7.1.1: SES is enabled on LUN 0 and can be accessed from every host.)

The SES client software is available at the following web site:

SANtools: http://www.santools.com/

3.7.2 Hardware monitor

Select the “Hardware monitor” function to view information on the current voltages and temperatures.

3.7.3 Hard drive S.M.A.R.T. function support

S.M.A.R.T. (Self-Monitoring Analysis and Reporting Technology) is a diagnostic tool for hard drives that gives advanced warning of drive failures. S.M.A.R.T. provides users a chance to take action before a possible drive failure.

S.M.A.R.T. continuously measures many attributes of the hard drive and determines which hard drives are close to failure. The advance identification of possible hard drive failures allows users to back up the data or replace the hard drive.

The “S.M.A.R.T.” function displays the S.M.A.R.T. information of the hard drives. The number value is the current value and the number in parentheses is the threshold value. Threshold values differ among hard drive vendors; refer to the vendors’ specifications for details.

S.M.A.R.T. information is only supported on SATA drives. SAS drives do not provide S.M.A.R.T. information and will show N/A on this web page.

3.7.4 UPS

Select the “UPS” function to set the UPS (Uninterruptible Power Supply) parameters.

Figure 3.7.4.1

Currently, the system only supports and communicates with the smart-UPS function of APC (American Power Conversion Corp.) UPS units. Check details at http://www.apc.com/.

First, connect the system and the APC UPS via RS-232, then set up the shutdown values.

UPS Type: Select the UPS type. Choose Smart-UPS for APC, or None for other vendors or no UPS.

Shutdown Battery Level (%): When the battery level falls below this setting, the system will shut down. Setting the level to “0” disables the UPS function.

Shutdown Delay (s): If a power failure occurs and the power does not return within the set period, the system will shut down. Setting the delay to “0” disables the function.

Shutdown UPS: Select ON to have the UPS shut itself down after the ISC8P2G-S has shut down successfully when the UPS battery is almost depleted. After power comes back on, the UPS will start working and notify the system to boot up. Selecting OFF will not turn off the UPS automatically.

Status: The status of the UPS.
  “Detecting…”
  “Running”
  “Unable to detect UPS”
  “Communication lost”
  “UPS reboot in progress”
  “UPS shutdown in progress”
  “Batteries failed. Please change them NOW!”

Battery Level (%): Current percentage of battery level.

3.8 System maintenance

“Maintenance” allows the operation of system functions, including “System information” to show the system version, “Upgrade” to upgrade to the latest firmware, “Reset to factory default” to reset all controller configuration values to the factory settings, “Import and export” to import and export the controller configuration, “Event log” to view the system event log which records critical events, and “Reboot and shutdown” to either reboot or shut down the system.

3.8.1 System information

Figure 3.8.1

“System information” displays system information including the firmware version, CPU type, installed system memory, and controller serial number.


3.8.2 Upgrade

“Upgrade” allows you to upgrade the firmware. Prepare the new firmware file, named “xxxx.bin”, on the local hard drive, then browse to select the file and start the upgrade. A message will pop up: “Upgrade system now? If you want to downgrade to the previous FW later (not recommended), export your system configuration in advance”. Click “Cancel” to export the system configuration first, then click “OK” to start upgrading the firmware.

Figure 3.8.2.1

Figure 3.8.2.2

When upgrading, a progress bar is displayed. After the upgrade is completed, the system must be rebooted manually for the new firmware to take effect.

3.8.3 Reset to factory default

“Reset to factory default” allows the user to reset the controller to the factory default settings.

Default gateway: 192.168.10.254

3.8.4 Import and export

“Import and export” allows the user to save the system configuration values.

Figure 3.8.4.1

1. Import: Import all system configurations excluding volume configuration.

2. Export: Export all configurations to a file.

Caution

“Import” will import all system configurations excluding volume configuration; the current configurations will be replaced.

3.8.5 Event log

“Event log” allows you to view the event messages. Check the checkboxes of INFO, WARNING, and ERROR to choose which levels of event logs to display. Clicking the “Download” button saves the whole event log as a text file with the file name “log-ModelName-SerialNumber-Date-Time.txt” (e.g., log-ISC8P2G-S-20090123-145542.txt). Click the “Clear” button to clear the event log. Click the “Mute” button to stop the alarm if there are system alerts.

Figure 3.8.5.1

The event log is displayed in reverse order, which means the latest event log is on the first page. The event logs are actually saved in the first four hard drives; each hard drive has one copy of the event log. For one controller, there are four copies of the event logs to make sure users can check the event log at any time, even when there are failed disks.
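Because the exported event log is plain text, it can be filtered with ordinary tools on the management host, for example (the file name follows the pattern described above):

$ grep -E 'ERROR|WARNING' log-ISC8P2G-S-20090123-145542.txt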

3.8.6 Reboot and shutdown

“Reboot and shutdown” displays the “Reboot” and “Shutdown” buttons. Before turning off the power, it is best to execute “Shutdown” to flush the data from cache to the physical disks. This step is necessary for data protection.

Chapter 4 Advanced operation

4.1 Rebuild

If one of the physical disk o n the VG which was set to a protected RAID level

(e.g.: RAID 3, RAID 5, or RAID 6) has FAILED or has been unplugged/removed, then, the VG status is changed to degraded mode. The system will search/detect for a spare disk to rebuild the degraded V G. It will look for a dedicated spare disk first and if none is found, it will check if a global spare disk has been set up and use this disk for rebuild.

The ISC8P2G-S supports an Auto-Rebuild function. If the RAID level set on the VG is protected (RAID 3, RAID 5, RAID 6, etc.), the ISC8P2G-S starts Auto-Rebuild as shown in the scenarios below:

Take RAID 6 for example:

1. When there is no global spare disk or dedicated spare disk on the system, the ISC8P2G-S stays in degraded mode and waits until (A) one disk is assigned as a spare disk, or (B) the failed disk is removed and replaced with a new clean disk; then Auto-Rebuild starts. The newly added disk becomes a spare disk for the original VG automatically.

a. If the newly added disk is not clean (it has data on it), it is marked as RS (Reserved) and the system does not start Auto-Rebuild.

b. If the disk does not belong to any existing VG, it becomes an FR (Free) disk and the system starts the Auto-Rebuild function.

c. If the user only removes the failed disk and plugs the same failed disk into the same slot again, Auto-Rebuild starts. However, rebuilding the array with the same failed disk may put customer data at risk later because of the disk’s unstable status. We suggest that customers do not rebuild the array with the same failed disk, for better data protection.

2. When there are enough global spare disk(s) or dedicated spare disk(s) for the degraded array, the ISC8P2G-S starts Auto-Rebuild immediately. In RAID 6, if another disk failure happens during rebuilding, the ISC8P2G-S starts the Auto-Rebuild scenario above as well. The Auto-Rebuild feature only works at runtime; it does not work during downtime, so it does not conflict with the “Roaming” function.

In degraded mode, the status of VG is “DG”.

When rebuilding, the status of PD/VG/UDV is “R”, and “R%” in the UDV displays the rebuild progress as a percentage. After rebuilding is complete, “R” and “DG” disappear.
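The spare-selection order described above (dedicated spare first, then a global spare, then a clean replacement disk) can be summarized with a short sketch. This is an illustrative Python model only, not the controller firmware; the roles and the FR (Free) / RS (Reserved) states follow the wording of this section.

    # Illustrative model of the Auto-Rebuild spare selection described above.
    # Not the controller firmware.
    def pick_rebuild_disk(vg_name, disks):
        """Return the disk chosen to rebuild a degraded VG, or None (stay in DG mode)."""
        # 1. A dedicated spare assigned to this VG is used first.
        for d in disks:
            if d["role"] == "dedicated_spare" and d["vg"] == vg_name:
                return d
        # 2. Otherwise any global spare is used.
        for d in disks:
            if d["role"] == "global_spare":
                return d
        # 3. Otherwise the system waits for a clean replacement disk (FR);
        #    a replacement with old data on it is marked RS and is skipped.
        for d in disks:
            if d["role"] == "FR" and d["clean"]:
                return d
        return None

    disks = [
        {"role": "FR", "clean": True, "vg": None, "slot": 5},
        {"role": "global_spare", "clean": True, "vg": None, "slot": 6},
    ]
    print(pick_rebuild_disk("VG-R6", disks)["slot"])   # 6 -> the global spare wins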

Tips

The list box does not appear if there is no VG, or if the only VGs are RAID 0 or JBOD, because a dedicated spare disk cannot be set for these RAID levels.

Rebuild is sometimes called recover; the two terms have the same meaning. The table below lists the relationship between RAID levels and rebuild.

RAID 0    Disk striping. No protection of data. The VG fails if any hard drive fails or is unplugged.

RAID 1    Disk mirroring over 2 disks. RAID 1 allows one hard drive failure or unplugging. One new hard drive must be inserted into the system for the rebuild to be completed.

N-way mirror    Extension of RAID 1. It keeps N copies of the disk. N-way mirror allows N-1 hard drive failures or unpluggings.

RAID 3    Striping with parity on the dedicated disk. RAID 3 allows one hard drive failure or unplugging.

RAID 5    Striping with interspersed parity over the member disks. RAID 5 allows one hard drive failure or unplugging.

RAID 6    2-dimensional parity protection over the member disks. RAID 6 allows two hard drive failures or unpluggings. If two hard drives need to be rebuilt at the same time, the first one is rebuilt, then the other.

RAID 0+1    Mirroring of the members of the RAID 0 volumes. RAID 0+1 allows two hard drives to fail or be unplugged, but they need to be part of the same array.

RAID 10    Striping over the members of the RAID 1 volumes. RAID 10 allows two hard drives to fail or be unplugged, but they need to be part of different arrays.

RAID 30    Striping over the members of the RAID 3 volumes. RAID 30 allows two hard drives to fail or be unplugged, but they need to be part of different arrays.

RAID 50    Striping over the member RAID 5 volumes. RAID 50 allows two hard drives to fail or be unplugged, but they need to be part of different arrays.

RAID 60    Striping over the member RAID 6 volumes. RAID 60 allows four hard drives to fail or be unplugged, but each pair needs to be part of different arrays.

JBOD    The abbreviation of “Just a Bunch Of Disks”. No protection of data. The VG fails if any hard drive fails or is unplugged.
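As a quick illustration of how the levels in this table trade capacity for protection, the sketch below computes usable capacity with the standard formulas. These figures are generic approximations, not taken from the manual; the controller's reported sizes may differ slightly because of metadata overhead.

    # Illustrative usable-capacity calculation for the RAID levels above
    # (standard formulas, not controller output).
    def usable_capacity(level: str, n_disks: int, disk_gb: int) -> int:
        if level in ("RAID 0", "JBOD"):
            return n_disks * disk_gb             # no redundancy
        if level == "RAID 1":
            return disk_gb                       # 2-disk mirror
        if level in ("RAID 3", "RAID 5"):
            return (n_disks - 1) * disk_gb       # one parity disk's worth
        if level == "RAID 6":
            return (n_disks - 2) * disk_gb       # two parity disks' worth
        if level in ("RAID 0+1", "RAID 10"):
            return n_disks // 2 * disk_gb        # half the disks hold mirrors
        raise ValueError(f"unsupported level: {level}")

    # Example: 8 x 500 GB disks (the ISC8P2G-S holds up to 8 drives)
    for lvl in ("RAID 0", "RAID 5", "RAID 6", "RAID 10"):
        print(lvl, usable_capacity(lvl, 8, 500), "GB")
    # RAID 0 4000 GB, RAID 5 3500 GB, RAID 6 3000 GB, RAID 10 2000 GB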

4.2 RG migration

To migrate the RAID level, follow the steps below.

1. Select “/ Volume configuration / RAID group”.

2. Move the cursor to the gray button next to the RG number; click “Migrate”.

3. Change the RAID level by clicking the down arrow and selecting “RAID 5”. A pop-up appears if there are not enough HDDs to support the new RAID level; click “ ” to add hard drives, then click “ ” to go back to the setup page.

When migrating to a lower RAID level, for example from RAID 6 to RAID 0, the system evaluates whether the operation is safe and displays the warning message “Sure to migrate to a lower protection array?”.

Figure 4.2.1

4. Double-check the RAID level and RAID PD slot settings. If there is no problem, click “ ”.


5. Finally, a confirmation page shows the details of the RAID information. If there is no problem, click “ ” to start the migration.

The system also pops up the message “Warning: power lost during migration may cause damage of data!” to warn the user. If the power fails abnormally during the migration, the data is at high risk.

6. The migration starts and can be seen from the “Status” of the RG, which shows “Migrating”. In “/ Volume configuration / Virtual disk”, it displays “Migrating” in “Status” and the completion percentage of the migration in “R%”.

Figure 4.2.2

(Figure 4.2.2: A RAID 0 with 4 physical disks migrates to RAID 5 with 5 physical disks.)


Figure 4.2.3

(Figure 4.2.3: A RAID 0 migrates to RAID 5, the complete percentage is 14%.)

To perform a migration, the total size of the RG must be larger than or equal to the original RG. Expanding to the same RAID level with the same hard disks as the original RG is not allowed.

The following operations are not allowed while the RG is being migrated; the system will reject them:

1. Add dedicated spare.

2. Remove a dedicated spare.

3. Create a new VD.

4. Delete a VD.

5. Extend a VD.

6. Scrub a VD.

7. Perform yet another migration operation.

8. Scrub entire RG.

9. Take a new snapshot.

10. Delete an existing snapshot.

11. Export a snapshot.

12. Rollback to a snapshot.

Caution

RG Migration cannot be executed during rebuild or VD extension.


4.3 VD Extension

To extend the VD size, follow the steps below.

1. Select “/ Volume configuration / Virtual disk”.

2. Move the cursor to the gray button next to the VD number; click “Extend”.

3. Change the size. The size must be larger than the original, and then click “ ” to start the extension.

Figure 4.3.1

4. The extension starts. If the VD needs initialization, it displays “Initiating” in “Status” and the completion percentage of the initialization in “R%”.


Figure 4.3.2

(Figure 4.3.2: Extend VD-R5 from 20GB to 40GB.)

Tips

The extended VD size must be larger than the original size.

Caution

VD Extension cannot be executed during rebuild or migration.


4.4 Snapshot (QSnap) / Rollback

Snapshot-on-the-box (QSnap) captures the instant state of data in the target volume in a logical sense. The underlying logic is Copy-on-Write: when a write occurs after the time of capture, the data that is about to be overwritten at that location is first moved out. The destination, named the “Snap VD”, is essentially a new VD which can be attached to a LUN and provisioned to a host as a disk, like other ordinary VDs in the system. Rollback restores the data back to its previous state at the time the snapshot was captured. The Snap VD is allocated within the same RG in which the snapshot is taken; we suggest reserving 20% of the RG size or more for snapshot space. Refer to Figure 4.4.1 for the snapshot concept.

Figure 4.4.1
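The Copy-on-Write behavior described above can be pictured with a minimal sketch: before a block of the original VD is overwritten, its old contents are copied into the snapshot space, so the Snap VD still presents the data as it was at capture time. This is an illustration of the concept only, not the controller's implementation.

    # Minimal copy-on-write sketch of the QSnap idea described above
    # (illustration only, not the controller's implementation).
    class CowSnapshot:
        def __init__(self, volume):
            self.volume = volume          # the "original VD": block -> data
            self.saved = {}               # snapshot space ("Snap VD")

        def write(self, block, data):
            # Copy-on-Write: preserve the old block the first time it changes.
            if block not in self.saved:
                self.saved[block] = self.volume.get(block)
            self.volume[block] = data

        def read_snapshot(self, block):
            # Unchanged blocks are read from the original volume.
            return self.saved.get(block, self.volume.get(block))

    vd = {0: "A", 1: "B"}
    snap = CowSnapshot(vd)
    snap.write(0, "X")
    print(vd[0], snap.read_snapshot(0))   # X A -> live data changed, snapshot kept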

Caution

Snapshot / rollback features need at least 512MB of RAM. Please also refer to the RAM certification list in Appendix A.


4.4.1 Create snapshot volume

To take a snapshot of the data, follow the steps below.

1. Select “/ Volume configuration / Virtual disk”.

Figure 4.4.1.1

(Figure 4.4.1.1: This is Snap VD, but it is not exported.)

7. Move the cursor to the gray button next to the Snapshot VD number; click “Export”. Enter a capacity for the snapshot VD. If the size is zero, the exported snapshot VD will be read-only. Otherwise, the exported snapshot VD can be read/written, and the size will be the maximum capacity to read/write.

8. Attach a LUN for the snapshot VD. Please refer to the previous chapter for attaching a LUN.

9. Done. It can be used as a disk.


Figure 4.4.1.2

(Figure 4.4.1.2: This is the list of “VD-01”. There are two snapshots in “VD-01”. Snapshot

VD “SnapVD-01” is exported to read only, “SnapVD-02” is exported to read/write.)

10. There are two methods to clean all snapshots. In “/ Volume configuration / Virtual disk”, move the cursor to the gray button next to the VD number; click “Cleanup snapshot”. Or in “/ Volume configuration / Snapshot”, click “ ”.

11. Cleanup will delete all snapshots related to the VD and release snapshot space.


Snapshot has some constraints:

1. Minimum RAM size of enabling snapshot is 512MB.

2. For performance and future rollback, the system saves snapshots with sequential names. For example, three snapshots have been taken and named “SnapVD-01” (first), “SnapVD-02”, and “SnapVD-03” (last). When deleting “SnapVD-02”, both “SnapVD-02” and “SnapVD-03” are deleted, because “SnapVD-03” is related to “SnapVD-02”.

3. For resource management, maximum number of snapshots is 32.

4. If the snapshot space is full, the system sends a warning message that the space is full. A new snapshot taken by auto snapshot replaces the oldest snapshot in rotational sequence, but a new snapshot cannot be taken manually, because the system does not know which snapshot VDs can be deleted (see the sketch below).
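Constraints 2 and 4 can be illustrated with a small model: deleting a snapshot also removes the later snapshots that depend on it, and auto snapshot rotates out the oldest snapshot once the limit (or the snapshot space) is reached. Illustration only, not firmware code.

    # Illustrative model of snapshot constraints 2 and 4 above (not firmware).
    MAX_SNAPSHOTS = 32

    def delete_snapshot(snaps, name):
        """Deleting SnapVD-02 also removes the later snapshots that depend on it."""
        idx = snaps.index(name)
        return snaps[:idx]

    def auto_snapshot(snaps, new_name):
        """Auto snapshot rotates out the oldest snapshot when the limit is hit."""
        if len(snaps) >= MAX_SNAPSHOTS:
            snaps = snaps[1:]          # drop the oldest in rotational sequence
        return snaps + [new_name]

    snaps = ["SnapVD-01", "SnapVD-02", "SnapVD-03"]
    print(delete_snapshot(snaps, "SnapVD-02"))   # ['SnapVD-01']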

4.4.2 Auto snapshot

The snapshot copies can be taken manually or on a schedule (hourly, daily, weekly, or monthly).

Follow the steps below.

1. There are two methods to set auto snapshot. In “/ Volume configuration / Virtual disk”, move the cursor to the gray button next to the VD number; click “Auto snapshot”. Or in “/ Volume configuration / Snapshot”, click “ ”.

2. The auto snapshot can be set monthly, weekly, daily, or hourly.

3. Done. It will take snapshots automatically.

Tips

The daily snapshot is taken every day at 00:00. The weekly snapshot is taken every Sunday at 00:00. The monthly snapshot is taken on the first day of every month at 00:00.
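The timing rules in this tip can be captured in a small helper that computes when the next scheduled snapshot would be taken; this is just an illustration of the stated rules, not part of the subsystem.

    # Illustrative calculation of the next snapshot time per the tip above.
    from datetime import datetime, timedelta

    def next_snapshot(now: datetime, mode: str) -> datetime:
        midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
        if mode == "hourly":
            return now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
        if mode == "daily":                       # every day at 00:00
            return midnight + timedelta(days=1)
        if mode == "weekly":                      # every Sunday at 00:00
            days_ahead = (6 - now.weekday()) % 7  # Monday=0 ... Sunday=6
            return midnight + timedelta(days=days_ahead or 7)
        if mode == "monthly":                     # first day of next month at 00:00
            year, month = (now.year + 1, 1) if now.month == 12 else (now.year, now.month + 1)
            return midnight.replace(year=year, month=month, day=1)
        raise ValueError(mode)

    print(next_snapshot(datetime(2009, 1, 23, 14, 55), "weekly"))  # 2009-01-25 00:00:00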

4.4.3 Rollback

The data in a snapshot VD can be rolled back to the original VD. Follow the steps below.

1. Select “/ Volume configuration / Snapshot”.

2. Move the cursor to the gray button next to the Snap VD number whose data the user wants to roll back; click “Rollback”.

3. Done. The data in the snapshot VD rolls back to the original VD.

Rollback has some constraints as described:

1. Minimum RAM size of enabling rollback is 512MB.

2. When making a rollback, the original VD cannot be accessed for a while. At the same time, the system connects to original VD and the snapshot VD, and then starts rollback.

3. During rollback, the data from the snapshot VD to the original VD can be accessed. At the same time, the other related snapshot VD(s) cannot be accessed.

4. After the VD rollback is completed, the other snapshot VD(s) will be deleted.

Caution

Before executing rollback, it is best to dismount the iSCSI drive so that data is flushed from cache to disks. The system sends a pop-up message when the user executes the rollback function.

4.5 Disk roaming

Physical disks can be re-sequenced in the same system, or all physical disks can be moved from one ISC8P2G-S system (system-1) to another (system-2). This is called disk roaming.

The system can execute disk roaming online. Follow the steps below.

1. Select “/ Volume configuration / RAID group”.

2. Move the cursor to the gray button next to the RG number; click “Deactivate”.

3. Move all PDs related to the RG to the other system.

4. Move the cursor to the gray button next to the RG number; click “Activate”.

5. Done.

Disk roaming has some constraints as described:

1. Check the firmware of the two systems first. It is best that both ISC8P2G-S systems have the same firmware version, or that the target system has a newer one.

2. All physical disks of the related RG should be moved from system-1 to system-2 together. The configuration of both the RG and the VD will be kept, but the LUN configuration will be cleared in order to avoid conflicts with system-2.

4.6 Support Microsoft MPIO and MC/S

MPIO (Multi-Path Input/Output) and MC/S (Multiple Connections per Session) use multiple physical paths to create logical “paths” between the server and the storage device. If one or more of these components fails and causes the path to fail, multi-path logic uses an alternate path for I/O so that applications can still access their data.
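The two load-balance policies that appear later in these appendices (“Fail Over Only” and “Round Robin”) illustrate how multi-path logic picks a path. The sketch below is a conceptual illustration only, not the Microsoft MPIO driver; the two addresses are the example iSCSI data port IPs used later in this manual.

    # Conceptual sketch of the "Fail Over Only" and "Round Robin" policies
    # referred to later in this manual. Illustration only.
    import itertools

    class MultiPath:
        def __init__(self, paths):
            self.paths = paths                     # e.g. the two iSCSI data ports
            self._rr = itertools.cycle(paths)

        def pick_failover_only(self, failed=()):
            # Always use the first healthy path; the others are standby.
            for p in self.paths:
                if p not in failed:
                    return p
            raise IOError("all paths failed")

        def pick_round_robin(self, failed=()):
            # Spread I/O across all healthy paths in turn.
            for _ in range(len(self.paths)):
                p = next(self._rr)
                if p not in failed:
                    return p
            raise IOError("all paths failed")

    mp = MultiPath(["192.168.1.112", "192.168.1.113"])
    print(mp.pick_round_robin())                              # 192.168.1.112
    print(mp.pick_round_robin())                              # 192.168.1.113
    print(mp.pick_failover_only(failed={"192.168.1.112"}))    # 192.168.1.113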

The Microsoft iSCSI initiator supports multi-path. Please follow the procedures below to use the MPIO feature.

1. A host with dual LAN ports connects cables to controller.

2. Create a RG/VD and attach this VD to the host .

3. When installing “Microsoft iSCSI initiator ”, please install MPIO driver at the same time.

4. Log on to the target separately on each port. When logging on to the target, check “Enable multi-path”.

5. MPIO mode can be selected under Targets → Details → Devices → Advanced in Microsoft iSCSI initiator.

6. Rescan disk.

7. There will be one disk running MPIO.

Appendix

A. Certification list

• RAM

ISC8P2G-S RAM spec: 184 pins, DDR333 (PC2700), Reg. (registered) or UB (unbuffered), ECC or non-ECC, from 64MB to 1GB, 32-bit or 64-bit data bus width, x8 or x16 devices, 9 to 11 bits column address.

• iSCSI Initiator (Software)

OS: Microsoft Windows
Software/Release Number: Microsoft iSCSI Software Initiator Release v2.05
System Requirements:
1. Windows XP Professional with SP2
2. Windows 2000 Server with SP4
3. Windows Server 2003 with SP1
4. Windows Server 2003 R2

OS: Linux
Software/Release Number: The iSCSI initiators are different for different Linux kernels.
1. For Red Hat Enterprise Linux 3 (Kernel 2.4), install linux-iscsi-3.6.3.tar
2. For Red Hat Enterprise Linux 4 (Kernel 2.6), use the built-in iSCSI initiator iscsi-initiator-utils-4.0.3.0-4 in kernel 2.6.9
3. For Red Hat Enterprise Linux 5 (Kernel 2.6), use the built-in iSCSI initiator iscsi-initiator-utils-6.2.0.695-0.7.e15 in kernel 2.6.18

OS: Mac
Software/Release Number: ATTO XTEND 2.0x SAN / Mac iSCSI Initiator; GlobalSAN iSCSI Initiator v3.0
System Requirements:
1. Mac® OS X v10.3.5 or later

The ATTO initiator is not free. Please contact your local distributor for the ATTO initiator.

• iSCSI HBA card

Vendor    Model
Adaptec   ASC-7211C (PCI-X, Gigabit, 1 port, TCP/IP offload, iSCSI offload)
HP        NC380T (PCI-Express, Gigabit, 2 ports, TCP/IP offload, iSCSI offload)
QLogic    QLA4010C (PCI-X, Gigabit, 1 port, TCP/IP offload, iSCSI offload)
QLogic    QLA4052C (PCI-X, Gigabit, 2 ports, TCP/IP offload, iSCSI offload)

For the detailed setup steps of the QLogic QLA4010C, please refer to Appendix G: QLogic QLA4010C setup instructions.

• NIC

Vendor    Model
D-Link    DGE-530T (PCI, Gigabit, 1 port)
HP        NC7170 (PCI-X, Gigabit, 2 ports)
HP        NC360T (PCI-Express, Gigabit, 2 ports, TCP/IP offload)
IBM       NetXtreme 1000 T (73P4201) (PCI-X, Gigabit, 2 ports, TCP/IP offload)
Intel     PWLA8490MT (PCI-X, Gigabit, 1 port, TCP/IP offload)
Intel     PWLA8492MT (PCI-X, Gigabit, 2 ports, TCP/IP offload)
Intel     PWLA8494MT (PCI-X, Gigabit, 4 ports, TCP/IP offload)

• GbE Switch

Vendor    Model
Dell      PowerConnect 5324
Dell      PowerConnect 2724
Dell      PowerConnect 2708
HP        ProCurve 1800-24G
D-Link    DGS-3024

• Hard drive

The ISC8P2G-S supports SATA I and SATA II disks.

Vendor            Model
Hitachi           Deskstar 7K250, HDS722580VLSA80, 80GB, 7200RPM, SATA, 8M
Hitachi           Deskstar 7K80, HDS728080PLA380, 80GB, 7200RPM, SATA II, 8M
Hitachi           Deskstar E7K500, HDS725050KLA360, 500G, 7200RPM, SATA II, 16M
Hitachi           Deskstar 7K80, HDS728040PLA320, 40G, 7200RPM, SATA II, 2M
Hitachi           Deskstar T7K500, HDT725032VLA360, 320G, 7200RPM, SATA II, 16M
Maxtor            DiamondMax Plus 9, 6Y080M0, 80G, 7200RPM, SATA, 8M
Maxtor            DiamondMax 11, 6H500F0, 500G, 7200RPM, SATA 3.0Gb/s, 16M
Samsung           SpinPoint P80, HDSASP0812C, 80GB, 7200RPM, SATA, 8M
Seagate           Barracuda 7200.7, ST380013AS, 80G, 7200RPM, SATA 1.5Gb/s, 8M
Seagate           Barracuda 7200.7, ST380817AS, 80G, 7200RPM, SATA 1.5Gb/s, 8M, NCQ
Seagate           Barracuda 7200.8, ST3400832AS, 400G, 7200RPM, SATA 1.5Gb/s, 8M, NCQ
Seagate           Barracuda 7200.9, ST3500641AS, 500G, 7200RPM, SATA 3.0Gb/s, 16M, NCQ
Seagate           NL35, ST3400633NS, 400G, 7200RPM, SATA 3.0Gb/s, 16M
Seagate           NL35, ST3500641NS, 500G, 7200RPM, SATA 3.0Gb/s, 16M
Seagate           Barracuda ES, ST3500630NS, 500G, 7200RPM, SATA 3.0Gb/s, 16M
Seagate           Barracuda ES, ST3750640NS, 750G, 7200RPM, SATA 3.0Gb/s, 16M
Seagate           Barracuda ES.2, ST31000340NS, 1000G, 7200RPM, SATA 3.0Gb/s, 32M
Western Digital   Caviar SE, WD800JD, 80GB, 7200RPM, SATA 3.0Gb/s, 8M
Western Digital   Caviar SE, WD1600JD, 160GB, 7200RPM, SATA 1.5Gb/s, 8M
Western Digital   Raptor, WD360GD, 36.7GB, 10000RPM, SATA 1.5Gb/s, 8M
Western Digital   Caviar RE2, WD4000YR, 400GB, 7200RPM, SATA 1.5Gb/s, 16M, NCQ
Western Digital   RE2, WD4000YS, 400GB, 7200RPM, SATA 3.0Gb/s, 16M
Western Digital   Caviar RE16, WD5000AAKS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M
Western Digital   RE2, WD5000ABYS, 500GB, 7200RPM, SATA 3.0Gb/s, 16M, NCQ

B. Event notifications

• PD/S.M.A.R.T. events

Level      Type                                     Description
Info       Disk inserted                            Info: Disk <slot> is inserted.
Info       Disk removed                             Info: Disk <slot> is removed.
Warning    S.M.A.R.T. threshold exceed condition    Warning: Disk <slot> S.M.A.R.T. threshold exceed condition occurred for attribute of: 1. read error rate, 2. spin up time, 3. reallocated sector count, 4. seek error rate, 5. spin up retries, 6. calibration retries
Warning    S.M.A.R.T. information                   Warning: Disk <slot>: Failure to get S.M.A.R.T information

• Physical HW events

Level      Type                    Description
Warning    ECC error               Warning: Single-bit ECC error is detected.
Error      ECC error               Error: Multi-bit ECC error is detected.
Info       ECC DIMM Installed      Info: ECC Memory is installed.
Info       Non-ECC installed       Info: Non-ECC Memory is installed.
Error      Host chip failure       Error: Host channel chip failed.
Error      Drive chip failure      Error: Drive channel chip failed.
Warning    Ethernet port failure   Warning: GUI Ethernet port failed.

• HDD IO events

Level      Type            Description
Warning    Disk error      Error: Disk <slot> read block error.
Warning    Disk error      Error: Disk <slot> writes block error.
Warning    HDD failure     Error: Disk <slot> is failed.
Warning    Channel error   Error: Disk <slot> IO incomplete.

• SES events

Level      Type                     Description
Info       SES load conf. OK        Info: SES configuration has been loaded.
Warning    SES Load Conf. Failure   Error: Failed to load SES configuration. The SES device is disabled.
Info       SES is disabled          Info: The SES device is disabled.
Info       SES is enabled           Info: The SES device is enabled.

• Environmental events

Level      Type                           Description
Info       Admin Login OK                 Info: Admin login from <IP or serial console> via <Web UI or Console UI>.
Info       Admin Logout OK                Info: Admin logout from <IP or serial console> via <Web UI or Console UI>.
Info       iSCSI data port login          Info: iSCSI login from <IQN> (<IP:Port Number>) succeeds.
Warning    iSCSI data port login reject   Warning: iSCSI login from <IQN> (<IP:Port Number>) was rejected, reason of: 1. initiator error, 2. authentication failure, 3. authorization failure, 4. target not found, 5. unsupported version, 6. too many connections, 7. missing parameter, 8. session does not exist, 9. target error, 10. out of resources, 11. unknown
Error      Thermal critical               Error: System Overheated!!! The system will do the auto shutdown immediately.
Warning    Thermal warning                Warning: System temperature is a little bit higher.
Error      Voltage critical               Error: System voltages failed!!! The system will do the auto shutdown immediately.
Warning    Voltage warning                Warning: System voltage is a little bit higher/lower.
Info       PSU restore                    Info: Power <number> is restored to work.
Error      PSU Fail                       Error: Power <number> is out of work.
Info       Fan restore                    Info: Fan <number> is restore to work.
Error      Fan Fail                       Error: Fan <number> is out of work.
Error      Fan non-exist                  Error: System cooling fan is not installed.
Error      AC Loss                        Error: AC loss for the system is detected.
Info       UPS Detection OK               Info: UPS detection succeed.
Warning    UPS Detection Fail             Warning: UPS detection failed.
Error      AC Loss                        Error: AC loss for the system is detected.
Error      UPS power low                  Error: UPS Power Low!!! The system will do the auto shutdown immediately.
Info       Mgmt Lan Port Active           Info: Management LAN Port is active.
Warning    Mgmt Lan Port Failed           Warning: Fail to manage the system via the LAN Port.
Info       RTC Device OK                  Info: RTC device is active.
Warning    RTC Access Failed              Warning: Fail to access RTC device.
Info       Reset Password                 Info: Reset Admin Password to default.
Info       Reset IP                       Info: Reset network settings set to default.

• System config events

Level      Type                            Description
Info       Sys Config. Defaults Restored   Info: Default system configurations restored.
Info       Sys NVRAM OK                    Info: The system NVRAM is active.
Error      Sys NVRAM IO Failed             Error: Can’t access the system NVRAM.
Warning    Sys NVRAM is full               Warning: The system NVRAM is full.

• System maintenance events

Level      Type                       Description
Info       Firmware Upgraded          Info: System firmware has been upgraded.
Error      Firmware Upgraded Failed   Error: System firmware upgrade failed.
Info       System reboot              Info: System has been rebooted.
Info       System shutdown            Info: System has been shutdown.
Info       System Init OK             Info: System has been initialized OK.
Error      System Init Failed         Error: System cannot be initialized in the last boot up.

• LVM events

Level      Type                                   Description
Info       VG Created OK                          Info: VG <name> has been created.
Warning    VG Created Fail                        Warning: Fail to create VG <name>.
Info       VG Deleted                             Info: VG <name> has been deleted.
Info       UDV Created OK                         Info: UDV <name> has been created.
Warning    UDV Created Fail                       Warning: Fail to create UDV <name>.
Info       UDV Deleted                            Info: UDV <name> has been deleted.
Info       UDV Attached OK                        Info: UDV <name> has been LUN-attached.
Warning    UDV Attached Fail                      Warning: Fail to attach LUN to UDV <name>.
Info       UDV Detached OK                        Info: UDV <name> has been detached.
Warning    UDV Detached Fail                      Warning: Fail to detach LUN from Bus <number> SCSI_ID <number> LUN <number>.
Info       UDV_OP Rebuild Started                 Info: UDV <name> starts rebuilding.
Info       UDV_OP Rebuild Finished                Info: UDV <name> completes rebuilding.
Warning    UDV_OP Rebuild Fail                    Warning: Fail to complete UDV <name> rebuilding.
Info       UDV_OP Migrate Started                 Info: UDV <name> starts migration.
Info       UDV_OP Migrate Finished                Info: UDV <name> completes migration.
Warning    UDV_OP Migrate Failed                  Warning: Fail to complete UDV <name> migration.
Warning    VG Degraded                            Warning: VG <name> is under degraded mode.
Warning    UDV Degraded                           Warning: UDV <name> is under degraded mode.
Info       UDV Init OK                            Info: UDV <name> completes the initialization.
Warning    UDV_OP Stop Initialization             Warning: Fail to complete UDV <name> initialization.
Error      UDV IO Fault                           Error: IO failure for stripe number <number> in UDV <name>.
Error      VG Failed                              Error: Fail to access VG <name>.
Error      UDV Failed                             Error: Fail to access UDV <name>.
Error      Global CV Adjustment Failed            Error: Fail to adjust the size of the global cache.
Info       Global Cache                           Info: The global cache is OK.
Error      Global CV Creation Failed              Error: Fail to create the global cache.
Info       UDV Rename                             Info: UDV <name> has been renamed as <name>.
Info       VG Rename                              Info: VG <name> has been renamed as <name>.
Info       Set VG Dedicated Spare Disks           Info: Assign Disk <slot> to be VG <name> dedicated spare disk.
Info       Set Global Disks                       Info: Assign Disk <slot> to the Global Spare Disks.
Info       UDV Read-Only                          Info: UDV <name> is a read-only volume.
Info       WRBK Cache Policy                      Info: Use the write-back cache policy for UDV <name>.
Info       WRTHRU Cache Policy                    Info: Use the write-through cache policy for UDV <name>.
Info       High priority UDV                      Info: UDV <name> is set to high priority.
Info       Mid Priority UDV                       Info: UDV <name> is set to mid priority.
Info       Low Priority UDV                       Info: UDV <name> is set to low priority.
Error      PD configuration read/write error      Error: PD <slot> lba <#> length <#> config <read | write> failed.
Error      PD read/write error                    Error: PD <#> lba <#> length <#> <read | write> error.
Error      UDV recoverable read/write error       Error: UDV <name> stripe <#> PD <#> lba <#> length <#> <read | write> recoverable.
Error      UDV unrecoverable read/write error     Error: UDV <#> stripe <#> PD <#> lba <#> length <#> <read | write> unrecoverable.
Info       UDV stripe rewrite start/fail/succeed  Info: UDV <name> stripe <#> rewrite column bitmap <BITMAP> <started | failed | finished>.

C. Known issues

1. Microsoft MPIO is not supported on Windows XP or Windows 2000

Professional.

Workaround: use Windows Server 2008, Windows Server 2003, or Windows 2000 Server to run MPIO.

D. Microsoft iSCSI Initiator

Here is the step-by-step procedure to set up the Microsoft iSCSI Initiator. Visit the Microsoft website for the latest iSCSI initiator.

1. Run Microsoft iSCSI Initiator version 2.08. See Figure D.1.

2. Click “Discovery ”.

Figure D.1

3. Click “Add”. Input the IP address or DNS name of the ISC8P2G-S. Please see Figure D.2.

Figure D.2

4. Click “OK”. Please see Figure D.3.

5. Click “Targets”. Please see Figure D.4.

Figure D.4

6. Click “Log On”. Please see Figure D.5. Check “Enable multi-path” if running MPIO.

Figure D.5

7. Click “Advanced…” if CHAP information is needed. Please see Figure D.6.

The following steps show how to log off the iSCSI drive.

1. Click “Details”. Please see Figure D.8.

F. MPIO and MC/S setup instructions

Here is the step-by-step procedure to set up MPIO. There are two kinds of scenarios for MPIO; please see Figure F.1. We suggest using scenario 2 for better performance.

• Network diagram of MPIO.

Figure F.1

Below are the setup instructions.

Microsoft MPIO is NOT supported on Windows XP or Windows 2000 Professional.

Workaround: use Windows Server 2003, Windows Server 2008, or Windows 2000 Server to run MPIO. You must enable MPIO or install the MPIO driver on the server before following these instructions.

On a Windows Server 2008, to install MPIO

1. In the Server Manager console tree, click Features node.

2. In the Features pane, under Features Summary, click Add Features.

3. In the Add Features wizard, select Multipath I/O check box, and click Next.

4. Follow the steps on the Add Features wizard.

1. Create a VG with RAID 5, using 3 HDDs.

Figure F.2

2. Create a UDV by using RAID 5 VG.

3. Run Microsoft iSCSI initiator and check the Initiator Node Name.

5. The volume config setting is done .

Figure F.6

6. Check the iSCSI settings. For example, the IP address of iSCSI data port 1 is 192.168.1.112 and the IP address of iSCSI data port 2 is 192.168.1.113.

8. Input the IP address of iSCSI data port 1 (192.168.1.112, as mentioned previously).

10. Input the IP address of iSCSI data port 2 (192.168.1.113, as mentioned previously).

12. Log on.

14. Select Target Portal to iSCSI data port 1 (192.168.1.112). Then click

“OK”

16. Check the “Enable multi-path” checkbox, then click “Advanced…”.

18. The iSCSI drive is connected.

20. The MPIO Properties window opens.

22. Check the option of Add support for iSCSI device and click on Add button.

Figure F.21

23. The system will ask you to reboot to make the change take effect.

24. After the reboot, log on to the iSCSI target again. Under Disk drives in Device Manager, notice that the Addonics iSCSI device is now a Multi-Path Disk Device.

26. Click “Details”.

28. The Device Details window opens

Figure F.26

29. Click the “MPIO” tab and change the load balance policy from “Fail Over Only” to “Round Robin”.

30. Click “Apply”. The Type of both connections now becomes Active.

The MC/S setup instructions are very similar to MPIO. Detailed steps are presented below. For the target-side settings, the steps are exactly the same as for MPIO. Please refer to Figure F.1 to Figure F.8.

1. Create a VG with RAID 5, using 3 HDDs.

2. Create a UDV by using RAID 5 VG.

3. Run Microsoft iSCSI initiator 2.08 and check the Initiator Node Name.

4. Attach LUN to R5 UDV. Input the Initiator Node Name in Host field.

5. The volume config setting is done.

6. Check iSCSI settings. The IP addr ess of iSCSI data port 1 is

192.168.1.112 and port 2 is 192.168.1. 113 for example.

7. Add Target Portals on Microsoft iSCSI initiator 2.03.

8. Input the IP address of iSCSI data port 1 (192.168.1.112, as mentioned on the previous pages). For MC/S, there is only ONE “Target Portal” in the “Discovery” tab.

Figure F.1

9. Click Log On button.

10. Then click “Advanced…” .

12. After connecting, click “Details”, then in the “Session” tab, click “Connections”.

Figure F.4

13. Choose “Round Robin” for Load Balance Policy

14. Click “Add” to add a Source Portal for iSCSI data port 2 (192.168.1.113).

15. Click the Advanced button. Select the Local adapter, Source IP, and Target Portal for iSCSI data port 2 (192.168.1.113). Then select “OK”, and click OK again.

16. The MC/S setting is done .

Figure F.8

G. QLogic QLA4010C setup instructions

The following is the step-by-step setup of the QLogic QLA4010C.

1. Log on to the iSCSI HBA Manager; the current state shows “No Connection Active”.

3. Disable “Immediate Data” and enable “Initial R2T” .

5. Click “Save settings” and click “Yes” on next page.

7. Check the parameters. “Initial R2T” must be enabled.

9. Then, run “Computer Management” in Windows. Make sure the disk appears.

H. Installation Steps for Large Volume (TB)

Introduction:

The ISC8P2G-S is capable of supporting large volumes (>2TB). When the controller is connected to a host/server running a 64-bit OS, the host/server is inherently capable of addressing large volumes through 64-bit addressing. If the host/server runs a 32-bit OS, the user has to change the block size to 1KB, 2KB, or 4KB to support volumes up to 4TB, 8TB, or 16TB respectively, because a 32-bit host/server does not support 64-bit LBA (Logical Block Addressing). For the detailed installation steps, see below.
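These limits follow directly from 32-bit LBA arithmetic: with 2^32 addressable blocks, the maximum volume size is 2^32 multiplied by the block size. The small calculation below (illustrative only) reproduces the figures above.

    # 32-bit LBA can address 2**32 blocks, so the maximum volume size is
    # 2**32 * block_size. This reproduces the 2TB/4TB/8TB/16TB figures above.
    for block_size in (512, 1024, 2048, 4096):           # bytes
        max_bytes = 2**32 * block_size
        print(f"{block_size:>4} B blocks -> {max_bytes / 2**40:.0f} TB")
    # 512 B -> 2 TB, 1024 B -> 4 TB, 2048 B -> 8 TB, 4096 B -> 16 TB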

Step A: configure your target

1. Go to / Volume config / Volume group , create a VG.

Figure H.1

2. Choose the RAID level and disks.

4. A RAID 6 VG is created.

Figure H.7:

(Figure H.7: choose “OK” for 64bit OS, choose “Cancel” for 32bit OS, this step will

change block size to 4K automatically.)

7. A 2.793TB UDV is created.

Figure H.8: a 2793G UDV is created.

8. Check the detail information.

Figure H.9

(Figure H.9: block size = 512B, for 64bit OS setting.)

Figure H.10

(Figure H.10: block size = 4K, for 32bit OS setting.)

9. Attach LUN.

Figure H.11

Step B: configure your host/server

1. You need to set up a software iSCSI initiator or an iSCSI HBA first.

2. Below is the configuration for Windows Server 2003 R2 with the Microsoft iSCSI initiator. Please install the latest Microsoft iSCSI initiator from the link below.
http://www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en

Step C: Initialize/Format/Mount the disk

1. Go to Start → Control Panel → Computer Management → Device Manager → Disk drives

3. Initialize disk.

Figure H.18

4. Convert to a GPT disk for over 2TB capacity. For more detailed information about GPT, please visit http://www.microsoft.com/whdc/device/storage/GPT_FAQ.mspx

5. Format disk.

Figure H.19

6. Format disk is done.

Figure H.21

7. The new disk is ready, available size = 2.72TB.

8. Wrong setting result: the OS cannot format the area beyond 2048GB (2TB).

Figure H.23


Key Features

  • 2 GbE NIC ports
  • iSCSI jumbo frame support
  • RAID 6, 60 ready
  • Snapshot (QSnap) integrated on the subsystem
  • SATA II drives backward compatible
  • One logic volume can be shared by as many as 32 hosts
  • Host access control
  • Configurable N-way mirror for high data protection
  • On-line volume migration with no system downtime
  • HDD S.M.A.R.T. enabled for SATA drives

Frequently Asked Questions

What is the maximum number of hosts that can share one logic volume?
The ISC8P2G-S supports sharing of one logic volume by as many as 32 hosts.
What are the RAID levels supported by the ISC8P2G-S?
The ISC8P2G-S supports RAID levels 0, 1, 3, 5, 6, 0+1, 10, 30, 50, 60 and JBOD
How many physical disks can the ISC8P2G-S handle?
The ISC8P2G-S allows you to connect up to 8 physical disks.
What is Snapshot (QSnap)?
Snapshot (QSnap) is a fully usable copy of a defined collection of data that contains an image of the data as it appeared at a point in time. It provides consistent and instant copies of data volumes without any system downtime. The ISC8P2G-S can keep up to 32 snapshots for all data volumes.
How do I manage the ISC8P2G-S?
The ISC8P2G-S can be managed through the Web GUI, Console serial port, or SSH (Secure Shell)
What is the default password for the admin account?
The default password for the admin account is "supervisor".
