Sans Digital ELITERAID ER316FD+B User manual


ELITERAID

ER316FD+B

Detailed User’s Manual v 1.0

Copyright

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written consent of the manufacturer.

Trademarks

All products and trade names used in this document are trademarks or registered trademarks of their respective holders.

Changes

The material in this document is for information only and is subject to change without notice.

About This Manual

Welcome to your Redundant Array of Independent Disks System User’s Guide.

This manual covers everything you need to know to install and configure your RAID system. It assumes that you are familiar with basic RAID concepts.

It includes the following information:

Chapter 1

Introduction

Introduces you to Disk Array’s features and general technology concepts.

Chapter 2

Configuring

Quick Setup

Provides a simple way to set up your Disk Array.

Customizing Setup

Provides step-by-step instructions to help you set up or reconfigure your Disk Array.

Chapter 3

Array Maintenance

Adding Cache Memory

Provides a detailed procedure for increasing cache memory from the default 512MB to a larger capacity.

Updating Firmware

Provides step-by-step instructions to help you to update the firmware to the latest version.

Hot Swap Components

Describes all hot swap modules on Disk Array and provides the detailed procedure to replace them.

Table of Contents

Chapter 1  Introduction
1.1 Key Features
1.2 RAID Concepts
1.3 Fibre Functions
  1.3.1 Overview
  1.3.2 Three ways to connect (FC Topologies)
  1.3.3 Basic elements
  1.3.4 LUN Masking
1.4 Array Definition
  1.4.1 RAID Set
  1.4.2 Volume Set
  1.4.3 Ease of Use Features
  1.4.4 High Availability

Chapter 2  Configuring
2.1 Configuring through a Terminal
2.2 Configuring the Subsystem Using the LCD Panel
2.3 Menu Diagram
2.4 Web browser-based Remote RAID management via R-Link ethernet port
2.5 Quick Create
2.6 Raid Set Functions
  2.6.1 Create Raid Set
  2.6.2 Delete Raid Set
  2.6.3 Expand Raid Set
  2.6.4 Offline Raid Set
  2.6.5 Activate Incomplete Raid Set
  2.6.6 Create Hot Spare
  2.6.7 Delete Hot Spare
  2.6.8 Rescue Raid Set
2.7 Volume Set Function
  2.7.1 Create Volume Set
  2.7.2 Create Raid30/50/60
  2.7.3 Delete Volume Set
  2.7.4 Modify Volume Set
    2.7.4.1 Volume Expansion
  2.7.5 Volume Set Migration
  2.7.6 Check Volume Set
  2.7.7 Scheduled Volume Checking
  2.7.8 Stop Volume Set Check
  2.7.9 Volume Set Host Filters
2.8 Physical Drives
  2.8.1 Create Pass-Through Disk
  2.8.2 Modify Pass-Through Disk
  2.8.3 Delete Pass-Through Disk
  2.8.4 Identify Enclosure
  2.8.5 Identify Selected Drive
2.9 System Controls
  2.9.1 System Configuration
  2.9.2 Fibre Channel Configuration
    2.9.2.1 View/Edit Host Name List
    2.9.2.2 View/Edit Volume Set Host Filters
  2.9.3 Ethernet Config
  2.9.4 Alert By Mail Config
  2.9.5 SNMP Configuration
  2.9.6 NTP Configuration
  2.9.7 View Events
  2.9.8 Generate Test Events
  2.9.9 Clear Events Buffer
  2.9.10 Modify Password
  2.9.11 Upgrade Firmware
  2.9.12 Restart Controller
2.10 Information
  2.10.1 RaidSet Hierarchy
  2.10.2 System Information
  2.10.3 Hardware Monitor
2.11 Creating a new RAID or Reconfiguring an Existing RAID

Appendix A  Technical Specification

Chapter 1

Introduction

The RAID subsystem is a Fibre channel-to-SAS / SATA II RAID (Redundant Arrays of Independent Disks) disk array subsystem. It consists of a RAID disk array controller and sixteen (16) disk trays.

The subsystem is a "Host Independent" RAID subsystem supporting RAID levels 0, 1, 0+1, 3, 5, 6, 30, 50, 60 and JBOD. Regardless of the RAID level the subsystem is configured for, each RAID array consists of a set of disks that appears to the user as a single large disk.

One unique feature of these RAID levels is that data is spread across separate disks as a result of the redundant manner in which data is stored in a RAID array. If a disk in the RAID array fails, the subsystem continues to function without any risk of data loss, because redundant information is stored separately from the data and is used to reconstruct any data that was stored on the failed disk. In other words, the subsystem can tolerate the failure of a drive without losing data.

The subsystem is also equipped with an environment controller which is capable of accurately monitoring the internal environment of the subsystem, such as its power supplies, fans, temperatures and voltages.

The disk trays allow you to install any type of 3.5-inch hard drive. Its modular design allows hot-swapping of hard drives without interrupting the subsystem’s operation.

1.1 Key Features

Subsystem Features:

• Intel IOP341 800MHz 64-bit RISC I/O processor
• Built-in 512MB cache memory, expandable up to 4GB
• 4Gb Fibre channel, dual loop optical SFP LC (short wave) host ports
• Smart-function LCD panel
• Supports up to sixteen (16) 1" hot-swappable SAS / SATA II hard drives
• Redundant load-sharing hot-swappable power supplies
• High quality advanced cooling fans
• Local audible event notification alarm
• Supports password protection and UPS connection
• Built-in R-Link LAN port interface for remote management and event notification
• Dual host channels support clustering technology
• Real-time drive activity and status indicators

RAID Function Features:

• Supports RAID levels 0, 1, 0+1, 3, 5, 6, 30, 50, 60 and JBOD
• Supports hot spare and automatic hot rebuild
• Allows online capacity expansion within the enclosure
• Supports spinning down idle drives to extend service life (MAID)
• Transparent data protection for all popular operating systems
• Bad block auto-remapping
• Supports multiple array enclosures per host connection
• Multiple RAID selection
• Array roaming
• Online RAID level migration

1.2 RAID Concepts

RAID Fundamentals

The basic idea of RAID (Redundant Array of Independent Disks) is to combine multiple inexpensive disk drives into an array of disk drives to obtain performance, capacity and reliability that exceeds that of a single large drive. The array of drives appears to the host computer as a single logical drive.

Six array architectures, RAID 1 through RAID 6, were originally defined; each provides disk fault-tolerance with different compromises in features and performance. In addition to these redundant array architectures, it has become popular to refer to a non-redundant array of disk drives as a RAID 0 array.

Disk Striping

Fundamental to RAID technology is striping. This is a method of combining multiple drives into one logical storage unit. Striping partitions the storage space of each drive into stripes, which can be as small as one sector (512 bytes) or as large as several megabytes. These stripes are then interleaved in a rotating sequence, so that the combined space is composed alternately of stripes from each drive. The specific type of operating environment determines whether large or small stripes should be used.

Most operating systems today support concurrent disk I/O operations across multiple drives. However, in order to maximize throughput for the disk subsystem, the I/O load must be balanced across all the drives so that each drive can be kept busy as much as possible. In a multiple drive system without striping, the disk I/O load is never perfectly balanced. Some drives will contain data files that are frequently accessed and some drives will rarely be accessed.

By striping the drives in the array with stripes large enough so that each record falls entirely within one stripe, most records can be evenly distributed across all drives. This keeps all drives in the array busy during heavy load situations. This situation allows all drives to work concurrently on different I/O operations, and thus maximize the number of simultaneous I/O operations that can be performed by the array.
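To make the mapping concrete, the short sketch below models rotating striping for a hypothetical four-drive array. It is a conceptual model only, not the controller's internal logic; the drive count and stripe size are arbitrary example values.

```python
# Conceptual model of disk striping: logical blocks are interleaved
# across drives in a rotating sequence. Example values only.

STRIPE_SIZE_BLOCKS = 128   # e.g. a 64KB stripe of 512-byte sectors
NUM_DRIVES = 4             # drives in the (RAID 0) array

def locate(logical_block: int) -> tuple[int, int, int]:
    """Return (drive index, stripe row on that drive, offset within stripe)."""
    stripe_index = logical_block // STRIPE_SIZE_BLOCKS   # which stripe overall
    offset = logical_block % STRIPE_SIZE_BLOCKS          # position inside it
    drive = stripe_index % NUM_DRIVES                    # rotates across drives
    row = stripe_index // NUM_DRIVES                     # depth on that drive
    return drive, row, offset

# Consecutive stripes fall on consecutive drives, so large sequential
# transfers keep every drive in the array busy at once.
for block in (0, 128, 256, 384, 512):
    print(block, "->", locate(block))
```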

Definition of RAID Levels

RAID 0

is typically defined as a group of striped disk drives without parity or data redundancy. RAID 0 arrays can be configured with large stripes for multi-user environments or small stripes for single-user systems that access long sequential records. RAID 0 arrays deliver the best data storage efficiency and performance of any array type. The disadvantage is that if one drive in a RAID 0 array fails, the entire array fails.

RAID 1

, also known as disk mirroring, is simply a pair of disk drives that store duplicate data but appear to the computer as a single drive. Although striping is not used within a single mirrored drive pair, multiple RAID 1 arrays can be striped together to create a single large array consisting of pairs of mirrored drives. All writes must go to both drives of a mirrored pair so that the information on the drives is kept identical. However, each individual drive can perform simultaneous, independent read operations. Mirroring thus doubles the read performance of a single non-mirrored drive, while the write performance is unchanged. RAID 1 delivers the best performance of any redundant array type. In addition, there is less performance degradation during drive failure than in RAID 5 arrays.

RAID 3

sector-stripes data across groups of drives, but one drive in the group is dedicated to storing parity information. RAID 3 relies on the embedded ECC in each sector for error detection. In the case of drive failure, data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the remaining drives. Records typically span all drives, which optimizes the disk transfer rate. Because each I/O request accesses every drive in the array, RAID 3 arrays can satisfy only one I/O request at a time. RAID 3 delivers the best performance for single-user, single-tasking environments with long records.

Synchronized-spindle drives are required for RAID 3 arrays in order to avoid performance degradation with short records. RAID 5 arrays with small stripes can yield similar performance to RAID 3 arrays.

Under RAID 5, parity information is distributed across all the drives. Since there is no dedicated parity drive, all drives contain data and read operations can be overlapped on every drive in the array. Write operations will typically access one data drive and one parity drive. However, because different records store their parity on different drives, write operations can usually be overlapped.
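The XOR recovery described for RAID 3 and RAID 5 can be seen concretely in a few lines. The sketch below is a minimal illustration with made-up four-byte blocks, not anything the controller exposes: it computes a parity block for three data blocks, then rebuilds one block after a simulated drive loss.

```python
# Minimal illustration of XOR parity: parity = d0 XOR d1 XOR d2, and any
# single lost block can be rebuilt by XOR-ing the parity with the
# surviving blocks. Block contents are made-up examples.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"DATA"   # example data blocks
parity = xor_blocks(d0, d1, d2)          # stored on the parity drive/stripe

# Simulate losing d1: recover it from parity plus the surviving blocks.
recovered = xor_blocks(parity, d0, d2)
assert recovered == d1
print("recovered:", recovered)
```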

RAID 6

is similar to RAID 5 in that data protection is achieved by writing parity information to the physical drives in the array. With RAID 6, however, two sets of parity data are used. These two sets are different, and each set occupies a capacity equivalent to that of one of the constituent drives. The main advantage of RAID 6 is high data availability: any two drives can fail without loss of critical data.

Dual-level RAID

achieves a balance between the increased data availability inherent in RAID 1 and RAID 5 and the increased read performance inherent in disk striping (RAID 0). These arrays are sometimes referred to as RAID 0+1 or RAID 10, and RAID 0+5 or RAID 50.

In summary:

• RAID 0 is the fastest and most efficient array type but offers no fault-tolerance. RAID 0 requires a minimum of two drives.

• RAID 1 is the best choice for performance-critical, fault-tolerant environments. RAID 1 is the only choice for fault-tolerance if no more than two drives are used.

• RAID 3 can be used to speed up data transfer and provide fault-tolerance in single-user environments that access long sequential records. However, RAID 3 does not allow overlapping of multiple I/O operations and requires synchronized-spindle drives to avoid performance degradation with short records. RAID 5 with a small stripe size offers similar performance.

• RAID 5 combines efficient, fault-tolerant data storage with good performance characteristics. However, write performance and performance during drive failure is slower than with RAID 1. Rebuild operations also require more time than with RAID 1 because parity information is also reconstructed. At least three drives are required for RAID 5 arrays.

• RAID 6 is essentially an extension of RAID 5 which allows for additional fault tolerance by using a second independent distributed parity scheme (two-dimensional parity). Data is striped on a block level across a set of drives, just as in RAID 5, and a second set of parity is calculated and written across all the drives. RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures, making it well suited to mission-critical applications.

RAID Management

The subsystem can implement several different levels of RAID technology.

RAID levels supported by the subsystem are shown below.

RAID Level 0 (minimum 1 drive): Block striping is provided, which yields higher performance than with individual drives. There is no redundancy.

RAID Level 1 (minimum 2 drives): Drives are paired and mirrored. All data is 100% duplicated on an equivalent drive. Fully redundant.

RAID Level 3 (minimum 3 drives): Data is striped across several physical drives. Parity protection is used for data redundancy.

RAID Level 5 (minimum 3 drives): Data is striped across several physical drives. Parity protection is used for data redundancy.

RAID Level 6 (minimum 4 drives): Data is striped across several physical drives. Parity protection is used for data redundancy. Requires N+2 drives to implement because of its two-dimensional parity scheme.

RAID Level 0+1 (minimum 4 drives): Combination of RAID levels 0 and 1. This level provides striping and redundancy through mirroring.

RAID Level 30 (minimum 6 drives): Combination of RAID levels 0 and 3. This level is best implemented on two RAID 3 disk arrays with data striped across both disk arrays.

RAID Level 50 (minimum 6 drives): RAID 50 provides the features of both RAID 0 and RAID 5. RAID 50 includes both parity and disk striping across multiple drives. RAID 50 is best implemented on two RAID 5 disk arrays with data striped across both disk arrays.

RAID Level 60 (minimum 8 drives): RAID 60 combines both RAID 6 and RAID 0 features. Data is striped across disks as in RAID 0, and it uses double distributed parity as in RAID 6. RAID 60 provides data reliability, good overall performance and supports larger volume sizes. RAID 60 also provides very high reliability because data remains available even if multiple disk drives fail (two in each disk array).

1.3 Fibre Functions

1.3.1 Overview

Fibre Channel is a set of standards under the auspices of ANSI (the American National Standards Institute). Fibre Channel combines the best features of the SCSI bus and IP protocols into a single standard interface, including high-performance data transfer (up to 400 MB per second), low error rates, multiple connection topologies, scalability, and more. It retains the SCSI command-set functionality, but uses a Fibre Channel controller instead of a SCSI controller to provide the network interface for data transmission. In today's fast-moving computer environments, Fibre Channel is the serial data transfer protocol of choice for high-speed transportation of large volumes of information between workstations, servers, mass storage subsystems, and peripherals.

Physically, Fibre Channel can be an interconnection of multiple communication points, called N_Ports. The port itself only manages the connection between itself and another such end-port, which could either be part of a switched network, referred to as a Fabric in FC terminology, or a point-to-point link. The fundamental elements of a Fibre Channel network are ports and nodes; a node can be a computer system, a storage device, or a hub/switch.

This chapter describes the Fibre-specific functions available in the Fibre channel RAID controller. Optional functions have been implemented for Fibre channel operation and are only available in the Web browser-based RAID manager. The LCD and VT-100 interfaces cannot configure the options available for the Fibre channel RAID controller.

1.3.2 Three ways to connect (FC Topologies)

A topology defines the interconnection scheme. It defines the number of devices that can be connected. Fibre Channel supports three different logical or physical arrangements (topologies) for connecting the devices into a network:

• Point-to-Point
• Arbitrated Loop (AL)
• Switched (Fabric)

The physical connection between devices varies from one topology to another. In all of these topologies, a transmitter node in one device sends information to a receiver node in another device. Fibre Channel networks can use any combination of point-to-point, arbitrated loop (FC_AL), and switched fabric topologies to provide a variety of device sharing options.

Point-to-point

A point-to-point topology consists of two and only two devices whose N_Ports are connected directly. In this topology, the transmit fibre of one device connects to the receive fibre of the other device and vice versa. The connection is not shared with any other devices. Simplicity and use of the full data transfer rate make this point-to-point topology an ideal extension to the standard SCSI bus interface. The point-to-point topology extends SCSI connectivity from a server to a peripheral device over longer distances.

Arbitrated Loop

The arbitrated loop (FC-AL) topology provides a relatively simple method of connecting and sharing resources. This topology allows up to 126 devices or nodes in a single, continuous loop or ring. The loop is constructed by daisy-chaining the transmit and receive cables from one device to the next, or by using a hub or switch to create a virtual loop. The loop can be self-contained or incorporated as an element in a larger network. Increasing the number of devices on the loop can reduce the overall performance of the loop, because the amount of time each device can use the loop is reduced. The ports in an arbitrated loop are referred to as L_Ports.

Switched Fabric

Switched fabric is the term used in Fibre Channel to describe the generic switching or routing structure that delivers a frame to a destination based on the destination address in the frame header. It can be used to connect up to 16 million nodes, each of which is identified by a unique, world-wide name.

In a switched fabric, each data frame is transferred over a virtual point-to-point connection. There can be any number of full-bandwidth transfers occurring through the switch. Devices do not have to arbitrate for control of the network; each device can use the full available bandwidth.

A fabric topology contains one or more switches connecting the ports in the FC network. The benefit of this topology is that many devices (approximately 2^24) can be connected. A port on a fabric switch is called an F_Port (Fabric Port). Fabric switches can also function as an alias server, multicast server, broadcast server, quality of service facilitator and directory server.

1.3.3 Basic elements

The following elements provide the connectivity between storage and server components using Fibre channel technology.

Cables and connectors

There are different types of cables of various lengths for use in a Fibre Channel configuration. Two types of cables are supported: copper and optical (fiber). Copper cables are used for short distances and transfer data up to 30 meters per link. Fiber cables come in two distinct types: Multi-Mode Fiber (MMF) for short distances (up to 2 km), and Single-Mode Fiber (SMF) for longer distances (up to 10 kilometers). By default the controller supports two short-wave multi-mode fibre optical SFP connectors.

Fibre Channel Adapter

A Fibre Channel adapter is a device that connects to a workstation or server and controls the electrical protocol for communications.

Hubs

Fibre Channel hubs are used to connect up to 126 nodes into a logical loop. All connected nodes share the bandwidth of this one logical loop. Each port on a hub contains a Port Bypass Circuit (PBC) to automatically open and close the loop to support hot plug.

Switched Fabric

Switched fabric is the highest performing device available for interconnecting large numbers of devices, increasing bandwidth, reducing congestion and providing aggregate throughput. Each device is connected to a port on the switch, enabling an on-demand connection to every connected device. Each node on a switched fabric uses an aggregate throughput data path to send or receive data.

1.3.4 LUN Masking

LUN masking is a RAID system-centric enforced method of masking multiple LUNs behind a single port. By using World Wide Port Names (WWPNs) of server HBAs, LUN masking is configured at the RAID-array level. LUN masking also allows disk storage resource sharing across multiple independent servers. With LUN masking, a single large RAID device can be sub-divided to serve a number of different hosts that are attached to the RAID through the SAN fabric; each LUN inside the RAID device can be limited so that only one or a limited number of servers can see it.

LUN masking can be done either at the RAID device (behind the RAID port) or at the server HBA. It is more secure to mask LUNs at the RAID device, but not all RAID devices have LUN masking capability. Therefore, in order to mask LUNs, some HBA vendors allow persistent binding at the driver-level.

1.4 Array Definition

1.4.1 RAID Set

A RAID Set is a group of disks containing one or more volume sets. It has the following features in the RAID subsystem controller:

1. Up to 128 RAID Sets are supported per RAID subsystem controller.

2. It is impossible to have multiple RAID Sets on the same disks.

A Volume Set must be created either on an existing RAID Set or on a group of available individual disks (disks that are not yet part of a RAID Set). If there are pre-existing RAID Sets with available capacity and enough disks for the specified RAID level desired, then the Volume Set will be created on the existing RAID Set of the user's choice. If physical disks of different capacities are grouped together in a RAID Set, then the capacity of the smallest disk will become the effective capacity of all the disks in the RAID Set.
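A quick worked example of that last rule, with made-up drive sizes: grouping one 500GB drive with three 1TB drives yields a RAID Set that treats every member as 500GB.

```python
# Effective capacity of a mixed-size RAID Set: every member is treated
# as if it were the size of the smallest disk. Drive sizes below are
# made-up example values.

drive_sizes_gb = [500, 1000, 1000, 1000]

effective_per_disk = min(drive_sizes_gb)               # 500 GB
usable_raw = effective_per_disk * len(drive_sizes_gb)  # 2000 GB of raw space
wasted = sum(drive_sizes_gb) - usable_raw              # 1500 GB goes unused

print(f"raw space treated as usable: {usable_raw} GB, unused: {wasted} GB")
```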

1.4.2 Volume Set

A Volume Set is seen by the host system as a single logical device. It is organized in a RAID level with one or more physical disks. RAID level refers to the level of data performance and protection of a Volume Set. A Volume Set capacity can consume all or a portion of the disk capacity available in a RAID Set. Multiple Volume Sets can exist on a group of disks in a RAID Set.

Additional Volume Sets created in a specified RAID Set will reside on all the physical disks in the RAID Set. Thus each Volume Set on the RAID Set will have its data spread evenly across all the disks in the RAID Set. Volume Sets of different RAID levels may coexist on the same RAID Set.

For example, Volume 1 can be assigned a RAID 5 level of operation while Volume 0 might be assigned a RAID 0+1 level of operation.

1.4.3 Ease of Use Features

1.4.3.1 Instant Availability/Background Initialization

RAID 0 and RAID 1 volume sets can be used immediately after creation, but RAID 3, 5, 6, 30, 50 and 60 volume sets must be initialized to generate parity. In Normal Initialization, initialization proceeds as a background task and the volume set is fully accessible for system reads and writes. The operating system can instantly access the newly created arrays without requiring a reboot and without waiting for initialization to complete. Furthermore, the RAID volume set is protected against a single disk failure while initializing.

In Fast Initialization, initialization must be completed before the volume set is ready for system access.

1.4.3.2 Array Roaming

The RAID subsystem stores configuration information both in NVRAM and on the disk drives, which protects the configuration settings in the case of a disk drive or controller failure. Array roaming allows administrators to move a complete raid set to another system without losing RAID configuration and data on that raid set. If a server fails, the raid set disk drives can be moved to another server and inserted in any order.

1.4.3.3 Online Capacity Expansion

Online Capacity Expansion makes it possible to add one or more physical drives to a volume set while the server is in operation, eliminating the need to backup and restore after reconfiguring the raid set. When disks are added to a raid set, unused capacity is added to the end of the raid set. Data on the existing volume sets residing on that raid set is redistributed evenly across all the disks. A contiguous block of unused capacity is made available on the raid set. The unused capacity can be used to create additional volume sets. The expansion process is illustrated in the following figure.

The RAID subsystem controller redistributes the original volume set over the original and newly added disks, using the same fault-tolerance configuration. The unused capacity on the expanded raid set can then be used to create additional volume sets, with a different fault tolerance setting if the user needs to change it.

1.4.3.4 Online RAID Level and Stripe Size Migration

Users can migrate both the RAID level and stripe size of an existing volume set while the server is online and the volume set is in use. Online RAID level/stripe size migration can prove helpful during performance tuning activities as well as when additional physical disks are added to the RAID subsystem. For example, in a system using two drives in RAID level 1, you could add capacity and retain fault tolerance by adding one drive. With the addition of a third disk, you have the option of adding this disk to your existing RAID logical drive and migrating from RAID level 1 to 5. The result would be parity fault tolerance and double the available capacity without taking the system offline.
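The capacity claim in that example is simple arithmetic, checked below for hypothetical 1TB drives (illustrative sizes only, not a controller calculation).

```python
# Usable capacity before and after the RAID 1 -> RAID 5 migration
# described above, for hypothetical 1 TB drives.

drive_tb = 1

raid1_usable = drive_tb * 1        # 2 mirrored drives hold 1 drive's worth
raid5_usable = drive_tb * (3 - 1)  # RAID 5 over N drives holds N-1 drives' worth

print(f"RAID 1 (2 drives): {raid1_usable} TB usable")
print(f"RAID 5 (3 drives): {raid5_usable} TB usable  (double the capacity)")
```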

1.4.4 High Availability

1.4.4.1 Creating Hot Spares

A hot spare drive is an unused online available drive, which is ready to replace a failed disk drive. In a RAID level 1, 0+1, 3, 5, 6, 30, 50 or 60 raid set, any unused online available drive installed but not belonging to a raid set can be defined as a hot spare drive. Hot spares permit you to replace failed drives without powering down the system. When the RAID subsystem detects a hard drive failure, the system automatically and transparently rebuilds using a hot spare drive. The raid set will be reconfigured and rebuilt in the background while the RAID subsystem continues to handle system requests. During the automatic rebuild process, system activity will continue as normal; however, system performance and fault tolerance will be affected.


Important:

The hot spare must have at least the same or more capacity as the drive it replaces.

1.4.4.2 Hot-Swap Disk Drive Support

The RAID subsystem includes a protection circuit to support the replacement of UDMA hard disk drives without having to shut down or reboot the system. The removable hard drive tray can deliver "hot swappable", fault-tolerant RAID solutions at prices much lower than the cost of conventional SCSI hard disk RAID subsystems. This feature provides advanced fault-tolerant RAID protection and "online" drive replacement.

1.4.4.3 Hot-Swap Disk Rebuild

A hot-swap function can be used to rebuild disk drives in arrays with data redundancy, such as RAID levels 1, 0+1, 3, 5, 30, 50 and 60. If a hot spare is not available, the failed disk drive must be replaced with a new disk drive so that the data on the failed drive can be rebuilt. If a hot spare is available, the rebuild starts automatically when a drive fails. The RAID subsystem automatically and transparently rebuilds failed drives in the background with user-definable rebuild rates. The RAID subsystem will automatically restart the system and the rebuild if the system is shut down or powered off abnormally during a reconstruction procedure. When a disk is hot swapped, although the system is functionally operational, the system may no longer be fault tolerant. Fault tolerance will be lost until the removed drive is replaced and the rebuild operation is completed.


Chapter 2

Configuring

The subsystem has a built-in setup configuration utility containing important information about the configuration as well as settings for various optional functions in the subsystem. This chapter explains how to use and make changes to the setup utility.

Configuration Methods

There are three methods of configuring the subsystem. You may configure through any of the following:

• VT100 terminal connected through the controller’s serial port

• Front panel touch-control keypad

• Web browser-based Remote RAID management via the R-Link ethernet port

Important:

The subsystem allows you to access the setup utility using only one method at a time. You cannot use more than one method at the same time.

2.1 Configuring through a Terminal

Configuring through a terminal will allow you to use the same configuration options and functions that are available from the LCD panel. To start up:

1. Connect a VT100 compatible terminal or a PC operating in an equivalent terminal emulation mode to the monitor port located at the rear of the subsystem.

Note:

You may connect a terminal while the subsystem’s power is on.

2. Power-on the terminal.

3. Run the VT100 program or an equivalent terminal program.


4. The default setting of the monitor port is 115200 baud rate, 8 data bits, no parity, 1 stop bit and no flow control. (A scripted-connection sketch using these settings follows the steps below.)

6. Open the File menu, and then open Properties.


7. Open the Settings Tab.

8. Configure the settings as follows: Function, arrow and Ctrl keys act as: Terminal Keys; Backspace key sends: Ctrl+H; Emulation: VT100; Telnet terminal: VT100; Back scroll buffer lines: 500. Click OK.

9. Now the VT100 is ready to use. After you have finished the VT100 terminal setup, you may press the "X" key (in your terminal) to link the RAID subsystem and terminal together. Press the "X" key to display the Disk Array Monitor Utility screen on your VT100 terminal.

10. The Main Menu will appear.
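If you prefer a scripted connection to a terminal-emulator session, a link with the same port settings can be opened with the third-party pyserial package, as in the sketch below. The device path is an assumption for a typical Linux host, and pyserial is not supplied with the subsystem.

```python
# Minimal pyserial sketch for the monitor port settings given in step 4:
# 115200 baud, 8 data bits, no parity, 1 stop bit, no flow control.
# /dev/ttyUSB0 is an assumed device path; adjust for your host.
import serial  # third-party: pip install pyserial

port = serial.Serial(
    "/dev/ttyUSB0",
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,   # no software flow control
    rtscts=False,    # no hardware flow control
    timeout=2,
)

port.write(b"X")  # "X" links/redraws the Monitor Utility screen (step 9)
print(port.read(512).decode("ascii", errors="replace"))
port.close()
```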

Keyboard Function Key Definitions

"A" key - to move to the line above
"Z" key - to move to the next line
"Enter" key - Submit selection function
"ESC" key - Return to previous screen
"L" key - Line draw
"X" key - Redraw


Main Menu

The main menu shows all functions and enables the user to execute actions by selecting the appropriate menu option.

Note:

The password option allows the user to set or clear the RAID subsystem's password protection feature. Once the password has been set, the user can only monitor and configure the RAID subsystem by providing the correct password. The password is used to protect the internal RAID subsystem from unauthorized entry. The controller will check the password only when entering the Main menu from the initial screen. The RAID subsystem will automatically go back to the initial screen when it does not receive any command for twenty seconds. The RAID subsystem password is set to 0000 by default by the manufacturer.

VT100 terminal configuration Utility Main Menu Options

Select an option and the related information or submenu items display beneath it. The submenus for each item are explained in section 2.3. The configuration utility main menu options are:

Quick Volume And Raid Set Setup: Create a RAID configuration consisting of the number of physical disks installed.

Raid Set Functions: Create a customized raid set.

Volume Set Functions: Create a customized volume set.

Physical Drive Functions: View individual disk information.

Raid System Functions: Set the raid system configurations.

Fibre Channel Config: Set the Fibre Channel configurations.

Ethernet Configuration: Set the Ethernet configurations.

View System Events: Record all system events in the buffer.

Clear Event Buffer: Clear all event buffer information.

Hardware Monitor: Show all system environment status.

System Information: View the controller information.


2.2 Configuring the Subsystem Using the LCD Panel

The LCD display front panel function keys are the primary user interface for the Disk Array. Except for the firmware update, all configuration can be performed through this interface. The LCD provides a system of screens with areas for information, status indication, or menus. The LCD screen displays up to two lines at a time of menu items or other information. The RAID subsystem password is set to 0000 by default by the manufacturer.

Function Key Definitions

The four function keys on the LCD panel perform the following functions:

Up or Down arrow buttons: Use the Up or Down arrow keys to go through the information on the LCD screen. These are also used to move between each menu when you configure the subsystem.

Select button: This is used to enter the option you have selected.

Exit button: Press this button to return to the previous menu.

2.3 Menu Diagram

The following tree diagram is a summary of the various configuration and setting functions that can be accessed through the LCD panel menus or the terminal monitor.

Quick Volume / Raid Setup
    Raid 0 / Raid 1 or 0+1 / Raid 0+1 + Spare / Raid 3 / Raid 5 / Raid 3 + Spare / Raid 5 + Spare / Raid 6 / Raid 6 + Spare
        Greater Two TB Volume Support: No, Use 64Bit LBA, Use 4K Block
        Selected Capacity
        Select Stripe Size: 4K, 8K, 16K, 32K, 64K, 128K (not offered for Raid 3 and Raid 3 + Spare)
        Create Vol / Raid Set: Yes, No
        Initialization Mode: Foreground, Background, No Init

Raid Set Function
    Create Raid Set
        Select IDE Drives for Raid Set: Ch01 ~ Ch16
        Create Raid Set: Yes, No
        Edit The Raid Set Name
    Delete Raid Set
        Select Raid Set To Delete
        Delete Raid Set: Yes, No
        Are you sure?: Yes, No
    Expand Raid Set
        Select IDE Drives for Raid Set Expansion: Chxx ~ Ch16
        Expand Raid Set: Yes, No
    Offline Raid Set
        Select Raid Set To Offline
        Offline Raid Set: Yes, No
        Are You Sure?: Yes, No
    Activate Raid Set
        Select Raid Set To Activate
        Activate Raid Set: Yes, No
        Are You Sure?: Yes, No
    Create Hot Spare Disk
        Select Drives for Hot Spare (max 3 hot spares supported): Chxx ~ Ch16
        Create Hot Spare: Yes, No
    Delete Hot Spare Disk
        Select The Hot Spare Device To Be Deleted
        Delete Hot Spare: Yes, No
    Rescue Raid Set
        Enter The Operation Key
    Raid Set Information
        Select Raid Set To Display

Volume Set Function
    Create Volume Set
        Create Volume From Raid Set
        Volume Creation: Greater Two TB Volume Support, Volume Name, Raid Level, Capacity, Stripe Size, Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing
        Create Volume: Yes, No
        Initialization Mode: Foreground, Background, No Init
    Create Raid 30/50/60
        Create Raid30/50/60 Free (capacity)
        Select multiple Raid Sets to create on
        Volume Creation: Greater Two TB Volume Support, Volume Name, Raid Level, Capacity, Stripe Size, Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing
        Create Volume: Yes, No
        Initialization Mode: Foreground, Background, No Init
    Delete Volume Set
        Delete Volume From Raid Set
        Select Volume To Delete
        Delete Volume Set: Yes, No
        Are you sure?: Yes, No
    Modify Volume Set
        Modify Volume From Raid Set
        Select Volume To Modify
        Volume Modification: Greater Two TB Volume Support, Volume Name, Raid Level, Capacity, Stripe Size, Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing
        Modify Volume: Yes, No
        Are you sure?: Yes, No
    Check Volume Set
        Check Volume From Raid Set
        Select Volume To Check
        Check The Volume: Yes, No
    Stop Volume Check
        Stop All Volume Check: Yes, No
        Are you sure?: Yes, No
    Display Volume Info.
        Display Volume Info in Raid
        Select Volume To Display

Physical Drives
    View Drive Information
        Select The Drives
    Create Pass Through Disk
        Select The Drives: Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing
    Modify Pass Through Disk
        Select The Drives: Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing
    Delete Pass Through Disk
        Select The Drives
        Delete Pass Through: Yes, No
        Are you sure?: Yes, No
    Identify Selected Drive
        Select The Drives
    Identify Enclosure
        Select The Enclosure

Raid System Function
    Mute The Alert Beeper: Yes, No
    Alert Beeper Setting: Disabled, Enabled
    Change Password
        Enter New Password
        Re-Enter Password
        Save The Password: Yes, No
    JBOD / RAID Function: RAID, JBOD
        Configured As JBOD?: Yes, No
        Are you sure?: Yes, No
    Background Task Priority: UltraLow(5%), Low(20%), Medium(50%), High(80%)
        Save The Settings: Yes, No
    SATA NCQ Support: Enable, Disable
    HDD Read Ahead Cache: Enable, Disable Maxtor, Disable
    Volume Data Read Ahead: Normal, Aggressive, Conservative, Disabled
    Stagger Power On: 0.4, 0.7, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0
    Spin Down Idle HDD: Disable, 1 (for test), 3, 5, 10, 15, 20, 30, 40, 60
    Controller Fan Detection: Enabled, Disabled
        Save The Settings: Yes, No
    Disk Write Cache Mode: Auto, Enabled, Disabled
    Capacity Truncation: To Multiples of 10G, To Multiples of 1G, Disabled
    Update Firmware
    Restart Controller
        Are you sure?: Yes, No

Fibre Channel Config
    Channel 0 Speed: Auto, 1Gb, 2Gb, 4Gb
    Channel 0 Topology: Auto, Loop, Point-Point, Fabric
    Channel 0 Hard Loop ID: Auto, 0 ~ 125
    Channel 1 Speed: Auto, 1Gb, 2Gb, 4Gb
    Channel 1 Topology: Auto, Loop, Point-Point, Fabric
    Channel 1 Hard Loop ID: Auto, 0 ~ 125

Ethernet Configuration
    DHCP Function: Disabled, Enabled
    Local IP Address
    HTTP Port Number: 80
    Telnet Port Number: 23
    SMTP Port Number: 25
    Ethernet Address

View System Events
    Show System Events

Clear Event Buffer
    Clear Event Buffer: Yes, No

Hardware Monitor
    The Hardware Monitor Information

System Information
    The System Information

2.4 Web browser-based Remote RAID management via R-Link ethernet port

Remote RAID management of the internal RAID subsystem is a web browser-based application which utilizes the browser installed on your operating system. Web browser-based remote RAID management can be used to manage all RAID functions.

To configure the internal RAID subsystem on a remote machine, you need to know its IP address. Launch the web browser by entering http://[IP Address] in the address bar.

Important:

The Ethernet default IP is "192.168.001.100" and the DHCP function is "enabled". You can configure the correct IP address through the LCD panel or the terminal "Ethernet Configuration" menu.

Note that you must be logged in as an administrator with local admin rights on the remote machine to remotely configure the subsystem. The RAID subsystem controller default User Name is "admin" and the Password is "0000".
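As a quick reachability check from a management host, something like the sketch below can confirm that the R-Link web interface is answering before you open it in a browser. The address and credentials are the factory defaults quoted above; the assumption that the interface answers plain HTTP on port 80 with basic authentication is mine, not a statement from this manual.

```python
# Probe the R-Link web interface with the factory-default credentials.
# Assumes plain HTTP on port 80 with HTTP basic auth; adjust if your
# unit is configured differently.
import urllib.request

ip = "192.168.1.100"              # factory default (shown as 192.168.001.100)
user, password = "admin", "0000"  # factory default login

mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, f"http://{ip}/", user, password)
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

with opener.open(f"http://{ip}/", timeout=5) as resp:
    print("HTTP status:", resp.status)  # 200 means the manager is reachable
```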

Main Menu

The main menu shows all functions and enables the user to execute actions by clicking on the appropriate link.

Quick Function: Create a RAID configuration consisting of the number of physical disks installed; it can modify the volume set Capacity, Raid Level, and Stripe Size.

Raid Set Functions: Create a customized raid set.

Volume Set Functions: Create customized volume sets and modify the parameters of existing volume sets.

Physical Drives: Create pass-through disks and modify the parameters of existing pass-through drives. Also provides a function to identify a specific disk drive.

System Controls: Set the raid system configurations.

Information: View the controller and hardware monitor information. The Raid Set Hierarchy can also be viewed through the RaidSet Hierarchy item.


Configuration Procedures

Below are a few practical examples of concrete configuration procedures.

2.5 Quick Create

The number of physical drives in the RAID subsystem determines the RAID levels that can be implemented with the raid set. You can create a raid set associated with exactly one volume set. The user can change the raid level, capacity, Volume Initialization Mode and stripe size. A hot spare option is also created, depending upon the existing configuration.

If the volume size is over 2TB, the option "Greater Two TB Volume Support" will be provided automatically. There are three choices: "No", "64bit LBA", and "4K Block".

Greater Two TB Volume Support:

No: keeps the volume size within the maximum 2TB limitation.

64bit LBA: maximum size 512TB, for Unix or Linux.

4K Block: maximum size 16TB; use with "basic disk" under Windows 2000, 2003 or XP. It cannot be used with dynamic disks.
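The 2TB and 16TB figures follow from sector-count arithmetic: a 32-bit LBA can address 2^32 blocks, so capacity is capped at 2^32 times the block size (the 512TB ceiling quoted for 64bit LBA is the firmware's stated limit rather than an addressing bound). A quick check, as illustrative arithmetic only:

```python
# Where the volume-size limits come from: a 32-bit LBA addresses 2^32
# blocks, so capacity is capped at 2^32 * block size.
TB = 1024**4

limit_512b = 2**32 * 512   # 32-bit LBA with 512-byte sectors
limit_4k = 2**32 * 4096    # 32-bit LBA with 4K blocks

print(limit_512b // TB, "TB")  # 2 TB  -> the "No" option's ceiling
print(limit_4k // TB, "TB")    # 16 TB -> the "4K Block" option's ceiling
```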

Click on the Confirm The Operation and click on the Submit button in the Quick Create screen; the raid set and volume set will start to initialize.

After you complete the Quick Create function, you should refer to section 2.9.2 Fibre Channel Configuration to complete the RAID configuration. Then you can use the RaidSet Hierarchy feature to view the fibre channel volume set host filters information (refer to section 2.10.1).

Note: In Quick Create, your volume set is automatically configured based on the number of disks in your system. Use the Raid Set Function and Volume Set Function if you prefer to customize your system.


2.6 Raid Set Functions

Use the Raid Set Function and Volume Set Function if you prefer to customize your system. Manual configuration gives full control of the raid set settings, but it will take longer to complete than the Quick Volume/Raid Setup configuration. Select the Raid Set Function to manually configure the raid set for the first time, or to delete an existing raid set and reconfigure it. A raid set is a group of disks containing one or more volume sets.

2.6.1 Create Raid Set

To create a raid set, click on the Create Raid Set link. A "Select The Drive For RAID Set" screen is displayed showing the drives connected to the current controller. Click on the physical drives to include in the raid set. Enter 1 to 15 alphanumeric characters to define a unique identifier for the raid set. The default raid set name will always appear as Raid Set #.

Click on the Confirm The Operation and click on the Submit button in the screen; the raid set will start to initialize.

2.6.2 Delete Raid Set

To delete a raid set, click on the Delete Raid Set link. A “Select The RAID SET

To Delete” screen is displayed showing all raid set existing in the current controller.

Click the raid set number you which to delete in the select column to delete screen.

Click on the Confirm The Operation and click on the Submit button in the screen to delete it.

Note:

A RaidSet cannot be deleted while it contains a Raid30/50/60 volume. You must delete the Raid30/50/60 volume first.


2.6.3 Expand Raid Set

Use this option to expand a raid set when a disk is added to your system. This function is active when at least one drive is available.

To expand a raid set, click on the Expand Raid Set link. Select the target raid set which you want to expand.

Click on the available disk and Confirm The Operation, and then click on the Submit button in the screen to add disks to the raid set.

Note:

1. Once the Expand Raid Set process has started, the user cannot stop it. The process must be completed.

2. If a disk drive fails during raid set expansion and a hot spare is available, an auto rebuild operation will occur after the raid set expansion completes.


Migration occurs when a disk is added to a raid set. While a disk is being added, migration status is displayed in the raid status area of the Raid Set information, and in the associated volume status area of the Volume Set information.

Note:

A RaidSet cannot be expanded while it contains a Raid30/50/60 volume.


2.6.4 Offline Raid Set

If the user wants to move a raid set while the RAID subsystem is powered on, the Offline Raid Set option can be used to take the raid set offline. To prevent removal of the wrong drive, the disk LEDs will light when Offline Raid Set is selected. After the function completes, the HDD state will change to Offline Mode.

To offline a raid set, click on the Offline Raid Set link. A "Select The RAID SET To Offline" screen is displayed showing all raid sets existing in the current controller.

Click the raid set number you wish to offline in the select column.

Click on the Confirm The Operation, and then click on the Submit button in the screen to offline the raid set.

2.6.5 Activate Incomplete Raid Set

When one of the disk drives is removed while the subsystem is powered off, the raid set state will change to Incomplete State. If the user wants to continue working after the RAID subsystem is powered on, the Activate Raid Set option can be used to activate the raid set. After the function completes, the Raid State will change to Degraded Mode.

To activate an incomplete raid set, click on the Activate Raid Set link. A "Select The RAID SET To Activate" screen is displayed showing all raid sets existing in the current controller. Click the raid set number you wish to activate in the select column.


Click on the Submit button in the screen to activate the raid set that had a disk drive removed in the power-off state. The RAID subsystem will continue to work in degraded mode.

2.6.6 Create Hot Spare

When you choose the Create Hot Spare option in the Raid Set Function, all unused physical devices connected to the current controller appear:

Select the target disk by clicking on the appropriate check box. Click on the Confirm The Operation, and click on the Submit button in the screen to create the hot spares.

The Create Hot Spare option gives you the ability to define a global hot spare.

2.6.7 Delete Hot Spare

Select the target Hot Spare disk to delete by clicking on the appropriate check box.

Click on the Confirm The Operation, and click on the Submit button in the screen to delete the hot spares.


2.6.8 Rescue Raid Set

If you need to rescue a missing RAID set, please contact our engineering support for assistance.

2.7 Volume Set Function

A volume set is seen by the host system as a single logical device. It is organized in a RAID level with one or more physical disks. RAID level refers to the level of data performance and protection of a volume set. A volume set capacity can consume all or a portion of the disk capacity available in a raid set.

Multiple volume sets can exist on a group of disks in a raid set. Additional volume sets created in a specified raid set will reside on all the physical disks in the raid set. Thus each volume set on the raid set will have its data spread evenly across all the disks in the raid set.

2.7.1 Create Volume Set

The volume set features are as follows:

1. Volume sets of different RAID levels may coexist on the same raid set.

2. Up to 128 volume sets can be created in a raid set by the RAID subsystem controller.

To create a volume set from a raid set, move the cursor bar to the main menu and click on the Create Volume Set link. The Select The Raid Set To Create On It screen will show all raid set numbers. Click on the raid set number on which you want to create the volume set and then click on the Submit button.

The Create Volume Set screen allows the user to select the Volume name, capacity, RAID level, stripe size, Fibre channel/LUN, Cache mode, and tag queuing.


Volume Name:

The default volume name will always appear as Volume ---VOL#. You can rename the volume set providing it does not exceed the 15-character limit.

Raid Level:

Set the RAID level for the Volume Set. Highlight Raid Level and press Enter. The available RAID levels for the current Volume Set are displayed. Select a RAID level and press Enter to confirm.

Capacity:

The maximum volume size is the default for the initial setting. Enter the appropriate volume size to fit your application.

Greater Two TB Volume Support: If the volume size is over 2TB, the option "Greater Two TB Volume Support" will be provided automatically.

No: keeps the volume size within the maximum 2TB limitation.

64bit LBA: maximum size 512TB, for Unix or Linux.

4K Block: maximum size 16TB; use only with "basic disk manager" under Windows 2000, 2003 or XP. Note that it cannot be used with dynamic disk manager.

Initialization Mode:

Set the Initialization Mode for the Volume Set. Foreground mode completes initialization faster; Background mode makes the volume instantly available. No Init mode is for rescuing a volume. If you need to rescue a missing volume set, please contact our engineering support for assistance.

Stripe Size:

This parameter sets the size of the stripe written to each disk in a RAID 0, 1, 0+1, or 5 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB.

A larger stripe size produces better read performance, especially if your computer does mostly sequential reads. However, if you are sure that your computer does random reads more often, select a smaller stripe size.

Note: RAID level 3 can’t modify strip size.

Cache Mode:

The RAID subsystem supports Write-Through Cache and Write-Back Cache.

Tag Queuing:

The Enabled option is useful for enhancing overall system performance under multi-tasking operating systems. The Command Tag (Drive Channel) function controls the Fibre command tag queuing support for each drive channel. This function should normally remain enabled. Disable this function only when using older Fibre drives that do not support command tag queuing.

Fibre Channel/LUN Base/LUN:

Fibre Channel: Two 4Gbps Fibre channels can be connected to the internal RAID subsystem. Choose the Fibre Host# option: 0, 1, or 0&1 cluster.


LUN Base: Each fibre device attached to the Fibre card, as well as the card itself, must be assigned a unique fibre ID number. A Fibre channel can connect up to 126 (0 to 125) devices. The RAID subsystem appears as one large Fibre device. You should assign a LUN base from the list of Fibre LUNs.

LUN: Each Fibre LUN base can support up to 8 LUNs. Most Fibre Channel host adapters treat each LUN like a Fibre disk.

Volumes To Be Created: Use this option to create multiple volumes with the same attributes. Up to 128 volume sets can be created.

2.7.2 Create Raid30/50/60

To create RAID 30/50/60 from raid sets, move the cursor bar to the main menu and click on the Create Raid30/50/60 link. The Select Multiple RaidSet For Raid30/50/60 screen will show all raid set numbers. Click on the raid set numbers that you want to use and then click on the Submit button.

A maximum of 8 RaidSets is supported.

2.7.3 Delete Volume Set

To delete a volume from the raid set system function, move the cursor bar to the main menu and click on the Delete Volume Set link. The Select The Volume Set To Delete screen will show all raid set numbers. Click on a raid set number and the Confirm The Operation, and then click on the Submit button to show all volume set items in the selected raid set. Click on a volume set number and the Confirm The Operation, and then click on the Submit button to delete the volume set.


2.7.4 Modify Volume Set

To modify a volume set from a raid set:

(1). Click on the Modify Volume Set link.

(2). Click on the volume set from the list that you wish to modify. Click on the Submit button.

The following screen appears.

Use this option to modify the volume set configuration. To modify volume set attribute values, move the cursor bar to the volume set attribute menu and click on it. The modify value screen appears. Move the cursor bar to an attribute item, and then click on the attribute to modify the value. After you complete the modification, click on Confirm The Operation and click on the Submit button to complete the action. The user can modify all values except the capacity.

2.7.4.1 Volume Expansion

Volume Capacity (Logical Volume Concatenation Plus Re-stripe)

Use the Expand Raid Set function to expand a raid set when a disk is added to your system (refer to section 2.6.3). The expanded capacity can be used to enlarge the volume set size or to create another volume set. The Modify Volume Set function supports volume set expansion. To expand the volume set capacity, move the cursor bar to the Volume Capacity item and enter the new capacity size.

Click on Confirm The Operation and click on the Submit button to complete the action. The volume set starts to expand.

Note:

Volume capacity cannot be expanded on a Raid30/50/60 volume.


2.7.5 Volume Set Migration

Migration occurs when a volume set migrates from one RAID level to another, when a volume set stripe size changes, or when a disk is added to a raid set. Migration status is displayed in the volume status area of the RaidSet Hierarchy screen while the migration is in progress.

Note:

RAID level and stripe size cannot be modified on a Raid30/50/60 volume.

2.7.6 Check Volume Set

To check a volume set from a raid set:

(1). Click on the Check Volume Set link.

(2). Click on the volume set from the list that you wish to check. Click on Confirm The Operation and click on the Submit button.

Use this option to verify the correctness of the redundant data in a volume set.

For example, in a system with dedicated parity, volume set check means computing the parity of the data disk drives and comparing the results to the contents of the dedicated parity disk drive. The checking percentage can also be viewed by clicking on RaidSet Hierarchy in the main menu.
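
As a concrete picture of what the check computes, here is a minimal sketch of dedicated-parity verification (names and data are illustrative, not taken from the firmware): the parity drive must equal the XOR of the data drives, byte for byte.

    # Minimal dedicated-parity check: parity must equal XOR of the data blocks.
    from functools import reduce

    def parity_ok(data_blocks, parity_block):
        computed = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                          data_blocks)
        return computed == parity_block

    d1, d2, d3 = b"\x01\x02", b"\x10\x20", b"\x00\xff"
    print(parity_ok([d1, d2, d3], b"\x11\xdd"))  # True: 01^10^00=11, 02^20^ff=dd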


2.7.7 Scheduled Volume Checking

To check a volume set on a schedule:

(1). Click on the Scheduled Volume Checking link.

(2). Select the desired schedule for checking the volume set. Click on Confirm The Operation and click on the Submit button.

Scheduler: Disabled, 1 Day (For Testing), 1 Week, 2 Weeks, 3 Weeks, 4 Weeks, 8 Weeks, 12 Weeks, 16 Weeks, 20 Weeks and 24 Weeks.

Check After System Idle: No, 1 Minute, 3 Minutes, 5 Minutes, 10 Minutes, 15 Minutes, 20 Minutes, 30 Minutes, 45 Minutes and 60 Minutes.

2.7.8 Stop VolumeSet Check

Use this option to stop the Check Volume Set function.

2.7.9 Volume Set Host Filters

Use this option to view/edit host filters. Refer to section 2.9.2.2 View/Edit Volume Set Host Filters for more information. You should complete the Fibre Channel configuration before you use this option.


2.8 Physical Drive

Choose this option from the Main Menu to select a physical disk and to perform the operations listed below.

2.8.1 Create Pass-Through Disk

To create a pass-through disk, move the mouse cursor to the main menu and click on the Create Pass-Through link. The relevant setting screen appears.

A pass-through disk is not controlled by the internal RAID subsystem firmware and thus cannot be part of a volume set. The disk is available to the operating system as an individual disk. It is typically used on a system where the operating system is on a disk not controlled by the RAID firmware. The user can also select the cache mode, Tagged Command Queuing, and Fibre channel/LUN Base/LUN for this disk.

2.8.2 Modify Pass-Through Disk

Use this option to modify the pass-through disk attributes. The user can modify the cache mode, Tagged Command Queuing, and Fibre channel/LUN Base/LUN on an existing pass-through disk.

To modify the pass-through drive attributes from the pass-through drive pool, move the mouse cursor bar and click on the Modify Pass-Through link. The Select The Pass Through Disk For Modification screen appears. Click on the pass-through disk from the pass-through drive pool and click on the Submit button to select the drive.

The Enter Pass-Through Disk Attribute screen appears; modify the drive attribute values as desired.


2.8.3 Delete Pass-Through Disk

To delete a pass-through drive from the pass-through drive pool, move the mouse cursor bar to the main menu and click on the Delete Pass-Through link. After you complete the selection, click on Confirm The Operation and click on the Submit button to complete the delete action.

2.8.4 Identify Enclosure

To identify an enclosure, move the mouse cursor bar and click on the Identify Enclosure link. The Select The Enclosure For Identification screen appears. Click on the enclosure from the enclosure pool and choose the flash method. After completing the selection, click on the Submit button to identify the selected enclosure. All of the disk LEDs will flash when the enclosure is selected.

2.8.5 Identify Drive

To prevent removing the wrong drive, the selected disk LED will light to physically locate the selected disk when Identify Drive is selected.

To identify the selected drive from the drives pool, move the mouse cursor bar and click on the Identify Drive link. The Select The Device For Identification screen appears. Click on the device from the drives pool and choose the flash method. After completing the selection, click on the Submit button to identify the selected drive.


2.9 System Configuration

2.9.1 System Configuration

To set the raid system function, move the cursor bar to the main menu and click on the Raid System Function link. The Raid System Function menu will show all items. Select the desired function.

System Beeper Setting:

The Alert Beeper function item is used to disable or enable the RAID subsystem controller's alarm tone generator.

Background Task Priority:

The Raid Rebuild Priority is a relative indication of how much time the controller devotes to a rebuild operation. The RAID subsystem allows the user to choose a rebuild priority (UltraLow, Low, Medium, High) to balance volume set access and rebuild tasks appropriately. For high array performance, specify a Low value.

JBOD/RAID Configuration:

The RAID subsystem supports JBOD and RAID configuration.

SATA NCQ Support:

NCQ is a command protocol in Serial ATA that can only be implemented on native Serial ATA hard drives. It allows multiple commands to be outstanding within a drive at the same time. Drives that support NCQ have an internal queue where outstanding commands can be dynamically rescheduled or re-ordered, along with the necessary tracking mechanisms for outstanding and completed portions of the workload. Disable or enable the SATA NCQ function here.

HDD Read Ahead Cache:

This option allows the user to disable the read-ahead cache of the HDDs on the RAID subsystem. For some HDD models, disabling the cache in the HDD is necessary to ensure the RAID subsystem functions correctly.

Volume Data Read Ahead:

This option allows the user to select the Volume Data Read Ahead mode. The setting values are Normal, Aggressive, Conservative, and Disabled.

HDD Queue Depth:

The queue depth is the number of I/O operations that can be run in parallel on a device. The available HDD queue depths are 1, 2, 4, 8, 16, and 32.

Stagger Power On Control:

This option allows the power supply to power up each HDD on the RAID subsystem in sequence. Previously, all the HDDs on the RAID subsystem were powered up at the same time. The power transfer time (lag time) from one HDD to the next can be set within the range of 0.4 to 6.0 seconds.
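
As a rough feel for the trade-off, the sketch below (illustrative only, assuming drives start strictly in sequence across the 16 bays) estimates when the last drive powers on for a given lag time.

    # When does the last HDD power on, for a given stagger lag time?
    def last_drive_start(lag_seconds, drives=16):
        return (drives - 1) * lag_seconds  # drive 1 starts at t = 0

    print(last_drive_start(0.4))  # 6.0 seconds at the minimum lag
    print(last_drive_start(6.0))  # 90.0 seconds at the maximum lag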

Spin Down Idle HDD (Minutes):

This option spins down hard drives after they have been idle for a selectable amount of time. The setting values are Disabled, 1 (For Test), 3, 5, 10, 15, 20, 30, 40, and 60.


Disk Write Cache Mode:

The RAID subsystem supports Auto, Enabled, and Disabled. When the RAID subsystem has a BBM (battery backup module) installed, the Auto option enables the disk write cache; otherwise, the Auto option disables it.

Disk Capacity Truncation Mode:

This RAID subsystem uses drive truncation so that drives from differing vendors are more likely to be usable as spares for each other. Drive truncation slightly decreases the usable capacity of a drive that is used in redundant units.

Multiples Of 10G: If you have 120 GB drives from different vendors, chances are that the capacity varies slightly. For example, one drive might be 123.5 GB, and the other 120 GB. The Multiples Of 10G truncation mode uses the same capacity for both of these drives so that one can replace the other.

Multiples Of 1G: If you have 123 GB drives from different vendors, chances are that the capacity varies slightly. For example, one drive might be 123.5 GB, and the other 123.4 GB. The Multiples Of 1G truncation mode uses the same capacity for both of these drives so that one can replace the other.

No Truncation: It does not truncate the capacity.
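
The truncation arithmetic for the three modes is simply rounding down to the chosen multiple, as the sketch below shows using the 123.5 GB example (illustrative Python, not firmware code).

    # Drive-capacity truncation: round down to a multiple of the chosen unit.
    def truncate(capacity_gb, unit_gb):
        return (capacity_gb // unit_gb) * unit_gb

    print(truncate(123.5, 10))  # Multiples Of 10G -> 120.0
    print(truncate(123.5, 1))   # Multiples Of 1G  -> 123.0
    print(123.5)                # No Truncation    -> 123.5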

2.9.2 Fibre Channel Config

To set the Fibre Channel function, move the cursor bar to the main menu and click on the Fibre Channel Config link. The Raid System Fibre Channel Function menu will show all items. Select the desired function.

Distinct WWNN for Each Channel

Use this option to enable dual channel support on Mac machines.

WWNN (World Wide Node Name)

The WWNN of the FC RAID is shown at the top of the Config Frame. This is an eight-byte unique address factory-assigned to the FC RAID, common to both FC channels.

WWPN (World Wide Port Name)

Each FC channel has its unique WWPN, which is also factory assigned. Usually, the WWNN:WWPN tuple is used to uniquely identify a port in the Fabric.


Channel Speed

Each FC channel can be configured as 1 Gbps, 2 Gbps, or 4 Gbps, or use the "Auto" option for automatic speed negotiation between 1G and 4G. The controller default is "Auto", which should be adequate under most conditions. The Channel Speed setting takes effect on the next connection; that means a link down or bus reset should be applied for the change to take effect. The current connection speed is shown at the end of the row. You have to click the "Fibre Channel Config" link again from the Menu Frame to refresh the display of the current speed.

Channel Topology

Each FC channel can be configured as Fabric, Point-to-Point, Loop, or Auto topology. The controller default is "Auto" topology, which takes precedence over Loop topology. A firmware restart is needed for any topology change to take effect. The current connection topology is shown at the end of the row. You have to click the "Fibre Channel Config" link again from the Menu Frame to refresh the display of the current topology. Note that the current topology is shown as "None" when no successful connection has been made for the channel.

Hard Loop ID

This setting is effective only under Loop topology. When enabled, you can manually set the Loop ID in the range from 0 to 125. Make sure this hard-assigned ID does not conflict with any other device on the same loop; otherwise the channel will be disabled. It is good practice to disable the hard loop ID and let the loop auto-arrange the Loop ID itself.

2.9.2.1 View/Edit Host Name List

To set up LUN masking for each volume, a host name list should be established first. This is done by clicking the "View/Edit Host Name List" link at the bottom of the "Fibre Channel Config" page (refer to section 2.9.2). Only hosts that will be used as include/exclude filters need to be added.

The subsystem provides two ways to add a host to the list:

1. Select WWN From Detected Host.

2. Key in: first enter the WWPN (exactly 16 hex digits) of the host in the "Host WWN" text field. An optional host nickname (up to 23 ASCII characters) can be given for descriptive purposes.

Choose the "Add" operation, then Confirm/Submit to complete the add operation. The added host will be shown in the upper half of the Config Frame. Up to 20 hosts can be added.
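
A quick way to check the "exactly 16 hex digits" rule before keying in a WWPN is sketched below; valid_wwpn is a hypothetical helper, not part of the subsystem.

    # Hypothetical helper: a WWPN entry must be exactly 16 hex digits.
    import re

    def valid_wwpn(s):
        return re.fullmatch(r"[0-9A-Fa-f]{16}", s) is not None

    print(valid_wwpn("210000e08b03dc84"))          # True
    print(valid_wwpn("21:00:00:e0:8b:03:dc:84"))   # False: strip separators first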

To delete a host from the list, select the radio button in front of the host list.

Choose the "Delete" operation, then Confirm/Submit to complete the delete operation.

Once volumes are created and the host name list is established, Volume Set Host Filters can be specified by clicking the "View/Edit Volume Set Host Filters" link at the bottom of the "Fibre Channel Config" page. Volume Set Host Filters can also be specified by clicking "Volume Set Functions" → "Volume Set Host Filters" from the Menu Frame. Select the volume for LUN masking and then click the Submit button.


2.9.2.2 Volume Set Host Filters

Volume Set Host Filters can be specified by clicking the "View/Edit Volume Set Host Filters" link at the bottom of the "Fibre Channel Config" page. Volume Set Host Filters can also be specified by clicking "Volume Set Functions" → "Volume Set Host Filters" from the Menu Frame.

To add a host filter entry, first select the host to include or exclude from the Host WWN list. Adjust the Range Mask, Filter Type, and Access Mode fields. Choose the "Add" operation, then Confirm/Submit to complete the add operation. The added host filter entry will be shown in the upper half of the Config Frame. Up to 8 host filter entries can be added.

To delete a host filter entry from the list, select the radio button in front of the host entry.

Choose the "Delete" operation, then Confirm/Submit to complete the delete operation.

Range Mask

Sometimes it is convenient to combine correlated WWNs into one ID range as a single filter entry. This ID range is obtained by AND'ing the host WWN and the Range Mask (which are both 64-bit entities). For example, if hosts 0x210000e0_8b03dc84 and 0x210000e0_8b03dc85 have the same access control, they can be combined as a single filter entry (using either WWN as host WWN) with the Range Mask set to 0xFFFFFFFF_FFFFFFFE. Note that under most circumstances, the Range Mask is left as 0xFFFFFFFF_FFFFFFFF, which means that only a single WWN is specified for that filter entry.
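
Numerically, the manual's example works out as below: AND'ing either host WWN with the mask yields the same range ID, so one filter entry covers both hosts (illustrative Python).

    # A host matches a filter entry when (host & mask) == (entry & mask).
    mask  = 0xFFFFFFFF_FFFFFFFE
    entry = 0x210000E0_8B03DC84

    for host in (0x210000E0_8B03DC84, 0x210000E0_8B03DC85, 0x210000E0_8B03DC90):
        print(hex(host), (host & mask) == (entry & mask))
    # The first two hosts fall in the range; the third does not.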

Filter Type

Each filter entry can be set to include or exclude certain host(s) from data access.

If a node’s WWN falls in an ID range specified as Exclude, the related volume set will be “invisible” to this node and no data access is possible.

If a node’s WWN falls in an ID range specified as Include and does not fall in any ID range specified as Exclude, this node will be allowed to access the data of the related volume set.

The access mode can be specified as normal "Read/Write" or restricted "Read Only".

If a node’s WWN falls in none of the ranges and there is at least one Include-type entry specified, this node is considered as Excluded; otherwise, it is considered as Included.

Note that when no Filter Entries are specified for a volume set, any node can access the volume set as there is no LUN masking.
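
Putting the include/exclude rules above together, the following sketch expresses the masking decision as the manual describes it. Field names and structure are illustrative, not the firmware's implementation.

    # Sketch of the LUN-masking decision; each filter is (wwn, mask, type).
    def can_access(host_wwn, filters):
        if not filters:
            return True  # no filter entries: no LUN masking at all
        match = lambda wwn, mask: (host_wwn & mask) == (wwn & mask)
        if any(match(w, m) for w, m, t in filters if t == "Exclude"):
            return False  # falls in an Exclude range: volume is invisible
        if any(match(w, m) for w, m, t in filters if t == "Include"):
            return True   # in an Include range and in no Exclude range
        # In no range at all: excluded if any Include entry exists.
        return not any(t == "Include" for _, _, t in filters)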

Access mode

For certain applications, it is desirable to limit data access to "Read Only" so that the data on the volume won't be accidentally modified. This can be done by setting the Access Mode of the Included ID range to "Read Only". However, some operating systems (e.g. Linux) may ignore this "Write Protect" attribute and still issue write commands to the protected volumes.

A "Data Protected" error will then be returned in response to these write commands. It is suggested to mount the volumes as "Read Only" for consistent behavior, if possible.


2.9.3 EtherNet Config

To set the EtherNet function, move the cursor bar to the main menu and click on the EtherNet Config link. The Raid System EtherNet Function menu will show all items. Select the desired function.

2.9.4 Alert By Mail Config

To set the Event Notification function, move the cursor bar to the main menu and click on the Alert By Mail Config link. The Raid System Event Notification Function menu will show all items. Select the desired function. When an abnormal condition occurs, an error message will be emailed to the administrator to report that a problem has occurred. Events are classified into 4 levels (urgent, serious, warning, message).


2.9.5 SNMP Configuration

SNMP gives users independence from the proprietary network management schemes of some manufacturers, and SNMP is supported by many WAN and LAN manufacturers, enabling true LAN/WAN management integration.

To set the SNMP function, move the cursor bar to the main menu and click on the SNMP Configuration link. The Raid System SNMP Function menu will show all items. Select the desired function.

SNMP Trap Configurations: Type the SNMP trap IP address. The default port is 162.

SNMP System Configuration:

Community: The default is Public.

(1) sysContact.0; (2) sysLocation.0; (3) sysName.0: SNMP parameters (31 bytes max). If these 3 categories are filled in during the initial setting, then when an error occurs SNMP will send out a message that includes the 3 categories. This allows the user to easily identify which RAID unit is having the problem. Once this setting is done, the Alert By Mail configuration will also work in the same way.

SNMP Trap Notification Configurations: Select the desired function.

After you complete the configuration, click on Confirm The Operation and click on the Submit button to complete the action.


2.9.6 NTP Configuration

NTP stands for Network Time Protocol, an Internet protocol used to synchronize the clocks of computers to a time reference. NTP is an Internet standard protocol. You can type your NTP server IP address directly so that the RAID subsystem can work with it.

To set the NTP function, move the cursor bar to the main menu and click on the NTP Configuration link. The Raid System NTP Function menu will show all items. Select the desired function.

Key in the NTP server IP address, select the Time Zone, and get the NTP time. Set Automatic Daylight Saving according to your region. "NTP Time Got At" shows when the NTP time was last obtained.

After you complete the configuration, click on Confirm The Operation and click on the Submit button to complete the action.

2.9.7 View Events

To view the RAID subsystem's event information, move the mouse cursor to the main menu and click on the View Events link. The Raid Subsystem Events Information screen appears.

Choose this option to view the system event information: Time, Device, Event Type, Elapsed Time, and Errors.


2.9.8 Generate Test Events

If you want to generate test events, move the cursor bar to the main menu and click on the Generate Test Events link. Click on Confirm The Operation, and click on the Submit button to generate a test event. Then click on View Events/Mute Beeper to view the test event.

2.9.9 Clear Events Buffer

Use this feature to clear the entire events buffer information.

2.9.10 Modify Password

To set or change the RAID subsystem password, move the mouse cursor to the Raid System Function screen and click on the Change Password link. The Modify System Password screen appears.


The password option allows the user to set or clear the RAID subsystem's password protection feature. Once the password has been set, the user can only monitor and configure the RAID subsystem by providing the correct password. The password is used to protect the internal RAID subsystem from unauthorized entry. The controller checks the password only when entering the Main menu from the initial screen. The RAID subsystem automatically returns to the initial screen when it does not receive any command for ten seconds.

To disable the password, press the Enter key alone in both the Enter New Password and Re-Enter New Password columns. Once the user confirms the operation and clicks the Submit button, the existing password will be cleared. No password checking will occur when entering the main menu from the starting screen.

2.9.11 Upgrade Firmware

Please refer to section 4.2 for more information.

2.9.12 Restart Controller

Use this option to restart the RAID subsystem controller.

2.10 Information Menu

2.10.1 RaidSet Hierarchy

Use this feature to view the internal RAID subsystem's current raid set, volume set, and physical disk configuration. Click the volume set number you wish to view in the Select column. You can then view the Volume Set Information and Fibre Channel Volume Set Host Filters.


2.10.2 System Information

To view the RAID subsystem controller's information, move the mouse cursor to the main menu and click on the System Information link. The Raid Subsystem Information screen appears.

Use this feature to view the RAID subsystem controller's information. The controller name, firmware version, serial number, main processor, CPU data/instruction cache size, and system memory size/speed appear in this screen.

2.10.3 Hardware Monitor

To view the RAID subsystem controller's hardware monitor information, move the mouse cursor to the main menu and click the Hardware Monitor link. The Hardware Information screen appears.

The Hardware Monitor Information provides the temperature, fan speed (chassis fan), and voltages of the internal RAID subsystem. All items are read-only. Warning conditions are indicated through the LCD, LEDs, and alarm buzzer.

Item                              Warning Condition
Controller Board Temperature      > 70 Celsius
HDD Temperature                   > 65 Celsius
Controller Fan Speed              < 1500 RPM
Power Supply +12V                 < 10.5V or > 13.5V
Power Supply +5V                  < 4.7V or > 5.4V
Power Supply +3.3V                < 3.0V or > 3.6V
DDR Supply Voltage +2.5V          < 2.25V or > 2.75V
CPU Core Voltage +1.3V            < 1.17V or > 1.43V
DDR Termination Power +1.25V      < 1.125V or > 1.375V
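
Restated as a checking routine for clarity, the same limits look like this (thresholds copied from the table above; the checker itself is illustrative):

    # Warning thresholds from the table above; (min, max) per monitored item.
    LIMITS = {
        "Controller Board Temperature": (None, 70),    # Celsius
        "HDD Temperature":              (None, 65),    # Celsius
        "Controller Fan Speed":         (1500, None),  # RPM
        "Power Supply +12V":            (10.5, 13.5),
        "Power Supply +5V":             (4.7, 5.4),
        "Power Supply +3.3V":           (3.0, 3.6),
        "DDR Supply Voltage +2.5V":     (2.25, 2.75),
        "CPU Core Voltage +1.3V":       (1.17, 1.43),
        "DDR Termination Power +1.25V": (1.125, 1.375),
    }

    def in_warning(item, value):
        lo, hi = LIMITS[item]
        return (lo is not None and value < lo) or (hi is not None and value > hi)

    print(in_warning("Power Supply +12V", 11.8))  # False: within range
    print(in_warning("HDD Temperature", 70))      # True: above 65 Celsius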


2.11 Creating a New RAID or Reconfiguring an Existing RAID

You can configure raid sets and volume sets using either the Quick Create or the Raid Set Functions/Volume Set Functions configuration method. Each configuration method requires a different level of user input. The general flow of operations for raid set and volume set configuration is:

Step  Action
1     Designate hot spares/pass-through disks (optional).
2     Choose a configuration method.
3     Create a raid set using the available physical drives.
4     Define a volume set using the space in the raid set.
5     Initialize the volume set and use the volume set in the host OS.



Appendix A

Technical Specification

RAID processor                  Intel IOP341 RISC
RAID level                      0, 1, 0+1, 3, 5, 6, 30, 50, 60 and JBOD
Cache memory                    512MB~4GB DDR2 ECC SDRAM
No. of channels (host+disk)     2+16
Host bus interface              FC-AL (4Gb/s * 2)
Data transfer                   Up to 400MB/sec
Drive bus interface             SAS or 3Gb/s S-ATA II
Hot swap disk bays              16
Hot swap power supply           460W * 2 w/PFC
Cooling fan                     4
On-line expansion               Yes
Host Independent                Yes
Failed drive indicators         Yes
Multiple RAID selection         Yes
Failed disk auto rebuild        Yes
Array Roaming                   Yes
Bad block auto-remapping        Yes
Online RAID level migration     Yes
Audible alarm                   Yes
