RAID FIBRE TO S-ATA
Installation Reference Guide
Revision 1.0
P/N: PW0020000000217
Copyright
No part of this publication may be reproduced, stored in a retrieval system, or
transmitted in any form or by any means, electronic, mechanical, photocopying,
recording or otherwise, without the prior written consent.
Trademarks
All products and trade names used in this document are trademarks or registered trademarks of their respective holders.
Changes
The material in this document is for information only and is subject to change
without notice.
FCC Compliance Statement
This equipment has been tested and found to comply with the limits for a Class B
digital device, pursuant to Part 15 of the FCC rules. These limits are designed
to provide reasonable protection against harmful interference in residential
installations. This equipment generates, uses, and can radiate radio frequency
energy, and if not installed and used in accordance with the instructions, may
cause harmful interference to radio communications. However, there is no
guarantee that interference will not occur in a particular installation. If this
equipment does cause interference to radio or television equipment reception,
which can be determined by turning the equipment off and on, the user is
encouraged to try to correct the interference by one or more of the following
measures:
1. Reorient or relocate the receiving antenna.
2. Move the equipment away from the receiver.
3. Plug the equipment into an outlet on a circuit different from that to
which the receiver is powered.
4. Consult the dealer or an experienced radio/television technician for
help.
All external connections should be made using shielded cables.
About This Manual
Welcome to your Redundant Array of Independent Disks System User’s Guide.
This manual covers everything you need to know to install and configure your
RAID system. It assumes that you know the basic concepts of RAID technology,
and includes the following information:
Chapter 1
Introduction
Introduces you to Disk Array’s features and general technology concepts.
Chapter 2
Getting Started
Helps the user identify parts of the Disk Array and prepare the hardware for configuration.
Chapter 3
Configuring
Quick Setup
Provides a simple way to set up your Disk Array.
Customizing Setup
Provides step-by-step instructions to help you set up or reconfigure your Disk Array.
Chapter 4
Array Maintenance
Adding Cache Memory
Provides a detailed procedure to increase the cache memory from the default
128MB to a higher amount.
Updating Firmware
Provides step-by-step instructions to help you to update the firmware to the latest version.
Hot Swap Components
Describes all hot-swap modules on the Disk Array and provides detailed
procedures to replace them.
Table of Contents
Chapter 1  Introduction
    1.1  Key Features
    1.2  RAID Concepts
    1.3  Fibre Functions
        1.3.1  Overview
        1.3.2  Three ways to connect (FC Topologies)
        1.3.3  Basic elements
        1.3.4  LUN Masking
    1.4  Array Definition
        1.4.1  RAID Set
        1.4.2  Volume Set
        1.4.3  Ease of Use features
        1.4.4  High Availability
Chapter 2  Getting Started
    2.1  Unpacking the subsystem
    2.2  Identifying Parts of the subsystem
        2.2.1  Front View
        2.2.2  Rear View
    2.3  Connecting to Host
    2.4  Powering-on the subsystem
    2.5  Install Hard Drives
    2.6  Connecting UPS
    2.7  Connecting to PC or Terminal
Chapter 3  Configuring
    3.1  Configuring through a Terminal
    3.2  Configuring the Subsystem Using the LCD Panel
    3.3  Menu Diagram
    3.4  Web browser-based Remote RAID management via R-Link ethernet
    3.5  Quick Create
    3.6  Raid Set Functions
        3.6.1  Create Raid Set
        3.6.2  Delete Raid Set
        3.6.3  Expand Raid Set
        3.6.4  Activate Incomplete Raid Set
        3.6.5  Create Hot Spare
        3.6.6  Delete Hot Spare
        3.6.7  Rescue Raid Set
    3.7  Volume Set Function
        3.7.1  Create Volume Set
        3.7.2  Delete Volume Set
        3.7.3  Modify Volume Set
            3.7.3.1  Volume Expansion
        3.7.4  Volume Set Migration
        3.7.5  Check Volume Set
        3.7.6  Stop Volume Set Check
        3.7.7  Volume Set Host Filters
    3.8  Physical Drive
        3.8.1  Create Pass-Through Disk
        3.8.2  Modify Pass-Through Disk
        3.8.3  Delete Pass-Through Disk
        3.8.4  Identify Selected Drive
    3.9  System Configuration
        3.9.1  System Configuration
        3.9.2  Fibre Channel Configuration
            3.9.2.1  View/Edit Host Name List
            3.9.2.2  View/Edit Volume Set Host Filters
        3.9.3  Ethernet Config
        3.9.4  Alert By Mail Config
        3.9.5  SNMP Configuration
        3.9.6  NTP Configuration
        3.9.7  View Events
        3.9.8  Generate Test Events
        3.9.9  Clear Events Buffer
        3.9.10  Modify Password
        3.9.11  Upgrade Firmware
    3.10  Information Menu
        3.10.1  RaidSet Hierarchy
        3.10.2  System Information
        3.10.3  Hardware Monitor
    3.11  Creating a new RAID or Reconfiguring an Existing RAID
Chapter 4  Array Maintenance
    4.1  Memory Upgrades
        4.1.1  Installing Memory Module
    4.2  Upgrading the Firmware
    4.3  Hot Swap components
        4.3.1  Replacing a disk
        4.3.2  Replacing a Power Supply
        4.3.3  Replacing a Fan
Appendix A  Technical Specification
Chapter 1
Introduction
The RAID subsystem is a Fibre Channel-to-Serial ATA II RAID (Redundant Array
of Independent Disks) disk array subsystem. It consists of a RAID disk array
controller and twelve (12) disk trays.
The subsystem is a “Host Independent” RAID subsystem supporting RAID levels
0, 1, 3, 5, 6, 0+1 and JBOD. Regardless of the RAID level the subsystem is
configured for, each RAID array consists of a set of disks which appears to
the user as a single large disk.
One unique feature of the redundant RAID levels is that data is spread across
separate disks, with redundant information stored separately from the data
itself. If a disk in the RAID array fails, the subsystem continues to function
without any risk of data loss, because the redundant information is used to
reconstruct any data that was stored on the failed disk.
The subsystem is also equipped with an environment controller capable of
accurately monitoring internal conditions such as its power supplies, fans,
temperatures and voltages.
The disk trays allow you to install any type of 3.5-inch hard drive, and their
modular design allows hot-swapping of hard drives without interrupting the
subsystem’s operation.
1.1 Key Features
Subsystem Features:
• Features an Intel 80321 64-bit RISC I/O processor
• Built-in 128MB cache memory, expandable up to 1024MB
• 4Gb Fibre Channel, dual loop optical SFP LC (short wave) host port
• Smart-function LCD panel
• Supports up to twelve (12) 1" hot-swappable Serial ATA II hard drives
• Redundant load-sharing hot-swappable power supplies
• High quality advanced cooling fans
• Local audible event notification alarm
• Supports password protection and UPS connection
• Built-in R-Link LAN port interface for remote management & event notification
• Dual host channels support clustering technology
• Real time drive activity and status indicators
RAID Function Features:
• Supports RAID levels 0, 1, 0+1, 3, 5, 6 and JBOD
• Supports hot spare and automatic hot rebuild
• Allows online capacity expansion within the enclosure
• Tagged command queuing for 256 commands, allows for overlapping data streams
• Transparent data protection for all popular operating systems
• Bad block auto-remapping
• Supports multiple array enclosures per host connection
• Multiple RAID selection
• Array roaming
• Online RAID level migration
1.2 RAID Concepts
RAID Fundamentals
The basic idea of RAID (Redundant Array of Independent Disks) is to combine
multiple inexpensive disk drives into an array of disk drives to obtain performance,
capacity and reliability that exceeds that of a single large drive. The array of
drives appears to the host computer as a single logical drive.
Five types of array architectures, RAID 1 through RAID 5, were originally
defined; each provides disk fault tolerance with different compromises in
features and performance. In addition to these five redundant array
architectures, it has become popular to refer to a non-redundant array of disk
drives as a RAID 0 array.
Disk Striping
Fundamental to RAID technology is striping, a method of combining multiple
drives into one logical storage unit. Striping partitions the storage space of
each drive into stripes, which can be as small as one sector (512 bytes) or as
large as several megabytes. These stripes are then interleaved in a rotating
sequence, so that the combined space is composed alternately of stripes from
each drive. The specific type of operating environment determines whether
large or small stripes should be used.
Most operating systems today support concurrent disk I/O operations across
multiple drives. However, in order to maximize throughput for the disk
subsystem, the I/O load must be balanced across all the drives so that each
drive can be kept busy as much as possible. In a multiple-drive system without
striping, the disk I/O load is never perfectly balanced: some drives will
contain data files that are frequently accessed and some drives will rarely be
accessed.
By striping the drives in the array with stripes large enough so that each
record falls entirely within one stripe, most records can be evenly
distributed across all drives. This keeps all drives in the array busy during
heavy-load situations, allowing all drives to work concurrently on different
I/O operations and thus maximizing the number of simultaneous I/O operations
that can be performed by the array.
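The stripe-to-drive mapping described above can be sketched with simple integer arithmetic. The stripe size and drive count below are hypothetical illustration values, not this subsystem's defaults:

```python
# Sketch of how striping maps a logical block address to a physical drive.
# Parameters are illustrative only (a plain striped layout, i.e. RAID 0).

STRIPE_SIZE_BLOCKS = 128   # stripe depth per drive, in 512-byte sectors
NUM_DRIVES = 4             # number of drives in the striped array

def locate(logical_block: int) -> tuple[int, int]:
    """Return (drive index, block offset on that drive) for a logical block."""
    stripe_index, offset_in_stripe = divmod(logical_block, STRIPE_SIZE_BLOCKS)
    drive = stripe_index % NUM_DRIVES             # stripes rotate across drives
    stripe_on_drive = stripe_index // NUM_DRIVES  # full stripes already on this drive
    return drive, stripe_on_drive * STRIPE_SIZE_BLOCKS + offset_in_stripe

# Consecutive stripes land on consecutive drives, so sequential I/O
# keeps every drive busy:
print(locate(0))     # first stripe  -> drive 0
print(locate(128))   # second stripe -> drive 1
print(locate(512))   # fifth stripe wraps back to drive 0
```

Because `drive` advances with every stripe, a long sequential transfer touches all four drives in turn, which is exactly the load-balancing effect the paragraph above describes.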
Definition of RAID Levels
RAID 0 is typically defined as a group of striped disk drives without parity or data
redundancy. RAID 0 arrays can be configured with large stripes for multi-user
environments or small stripes for single-user systems that access long sequential
records. RAID 0 arrays deliver the best data storage efficiency and performance
of any array type. The disadvantage is that if one drive in a RAID 0 array fails, the
entire array fails.
RAID 1, also known as disk mirroring, is simply a pair of disk drives that store
duplicate data but appear to the computer as a single drive. Although striping is
not used within a single mirrored drive pair, multiple RAID 1 arrays can be striped
together to create a single large array consisting of pairs of mirrored drives.
All writes must go to both drives of a mirrored pair so that the information on
the drives is kept identical. However, each individual drive can perform
simultaneous, independent read operations. Mirroring thus doubles the read
performance of a single non-mirrored drive while the write performance is
unchanged. RAID 1 delivers the best performance of any redundant array type.
In addition, there is less performance degradation during drive failure than in
RAID 5 arrays.
RAID 3 sector-stripes data across groups of drives, but one drive in the group is
dedicated to storing parity information. RAID 3 relies on the embedded ECC in
each sector for error detection. In the case of drive failure, data recovery is
accomplished by calculating the exclusive OR (XOR) of the information recorded
on the remaining drives. Records typically span all drives, which optimizes the
disk transfer rate. Because each I/O request accesses every drive in the array,
RAID 3 arrays can satisfy only one I/O request at a time. RAID 3 delivers the
best performance for single-user, single-tasking environments with long records.
Synchronized-spindle drives are required for RAID 3 arrays in order to avoid
performance degradation with short records. RAID 5 arrays with small stripes
can yield similar performance to RAID 3 arrays.
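The XOR-based recovery mentioned above can be demonstrated in a few lines. This is a minimal sketch of the principle, not the controller's actual firmware logic:

```python
# Minimal sketch of XOR parity as used by RAID 3 (and RAID 5).
# Parity is the XOR of all data blocks; XOR-ing the surviving blocks
# with the parity regenerates a lost block.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks from three data drives
parity = xor_blocks(*data)           # block stored on the parity drive

# Suppose drive 1 fails: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks(data[0], data[2], parity)
assert rebuilt == data[1]
```

The same identity (`a ^ a = 0`) is why exactly one failed member can be reconstructed: XOR-ing everything except the missing block cancels all the known terms and leaves the lost data.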
Under RAID 5 parity information is distributed across all the drives. Since there
is no dedicated parity drive, all drives contain data and read operations can
be overlapped on every drive in the array. Write operations will typically access
one data drive and one parity drive. However, because different records store
their parity on different drives, write operations can usually be overlapped.
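The rotation of parity across drives can be sketched as follows. The layout shown is one common convention; real controllers may place parity differently, so treat this as an illustration of "no dedicated parity drive" rather than this subsystem's layout:

```python
# Illustrative sketch of distributed ("rotating") parity in RAID 5.
# Because the parity drive changes from stripe to stripe, no single
# drive becomes a write bottleneck the way RAID 3's dedicated
# parity drive can.

NUM_DRIVES = 4

def parity_drive(stripe: int) -> int:
    """Drive index holding the parity block for a given stripe
    (one common rotation; implementations vary)."""
    return (NUM_DRIVES - 1 - stripe) % NUM_DRIVES

for stripe in range(5):
    print(f"stripe {stripe}: parity on drive {parity_drive(stripe)}")
```

Two writes to different stripes usually involve different parity drives, which is why RAID 5 write operations can overlap while RAID 3's cannot.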
RAID 6 is similar to RAID 5 in that data protection is achieved by writing
parity information to the physical drives in the array. With RAID 6, however,
two sets of parity data are used. These two sets are different, and each set
occupies a capacity equivalent to that of one of the constituent drives. The
main advantage of RAID 6 is high data availability: any two drives can fail
without loss of critical data.
Dual-level RAID achieves a balance between the increased data availability
inherent in RAID 1 and RAID 5 and the increased read performance inherent in
disk striping (RAID 0). These arrays are sometimes referred to as RAID 0+1 or
RAID 10 and RAID 0+5 or RAID 50.
In summary:
• RAID 0 is the fastest and most efficient array type but offers no fault
tolerance. RAID 0 requires a minimum of two drives.
• RAID 1 is the best choice for performance-critical, fault-tolerant
environments. RAID 1 is the only choice for fault tolerance if no more than
two drives are used.
• RAID 3 can be used to speed up data transfer and provide fault tolerance in
single-user environments that access long sequential records. However, RAID 3
does not allow overlapping of multiple I/O operations and requires
synchronized-spindle drives to avoid performance degradation with short
records. RAID 5 with a small stripe size offers similar performance.
• RAID 5 combines efficient, fault-tolerant data storage with good performance
characteristics. However, write performance and performance during drive
failure are slower than with RAID 1. Rebuild operations also require more time
than with RAID 1 because parity information must also be reconstructed. At
least three drives are required for RAID 5 arrays.
• RAID 6 is essentially an extension of RAID level 5 which allows for
additional fault tolerance by using a second independent distributed parity
scheme (two-dimensional parity). Data is striped on a block level across a set
of drives, just as in RAID 5, and a second set of parity is calculated and
written across all the drives. RAID 6 provides extremely high data fault
tolerance and can sustain multiple simultaneous drive failures, making it well
suited to mission-critical applications.
RAID Management
The subsystem can implement several different levels of RAID technology.
RAID levels supported by the subsystem are shown below.
RAID Level  Description                                                  Min Drives
0           Block striping is provided, which yields higher performance      1
            than with individual drives. There is no redundancy.
1           Drives are paired and mirrored. All data is 100% duplicated      2
            on an equivalent drive. Fully redundant.
3           Data is striped across several physical drives. Parity           3
            protection is used for data redundancy.
5           Data is striped across several physical drives. Parity           3
            protection is used for data redundancy.
6           Data is striped across several physical drives. Parity           4
            protection is used for data redundancy. Requires N+2 drives
            to implement because of the two-dimensional parity scheme.
0+1         Combination of RAID levels 0 and 1. This level provides          4
            striping and redundancy through mirroring.
1.3 Fibre Functions
1.3.1 Overview
Fibre Channel is a set of standards under the auspices of ANSI (American
National Standards Institute). Fibre Channel combines the best features of the
SCSI bus and IP protocols into a single standard interface, including
high-performance data transfer (up to 400 MB per second), low error rates,
multiple connection topologies, scalability, and more. It retains the SCSI
command-set functionality, but uses a Fibre Channel controller instead of a
SCSI controller to provide the network interface for data transmission. In
today’s fast-moving computer environments, Fibre Channel is the serial data
transfer protocol of choice for high-speed transportation of large volumes of
information between workstations, servers, mass storage subsystems, and
peripherals.
Physically, Fibre Channel is an interconnection of multiple communication
points, called N_Ports. Each port manages only the connection between itself
and another such end-port, which can be part of a switched network (referred
to as a Fabric in FC terminology) or a point-to-point link. The fundamental
elements of a Fibre Channel network are ports and nodes; a node can be a
computer system, a storage device, or a hub/switch.
This chapter describes the Fibre-specific functions available in the Fibre
Channel RAID controller. Optional functions implemented for Fibre Channel
operation are available only in the Web browser-based RAID manager; the LCD
and VT-100 interfaces cannot configure these options.
1.3.2 Three ways to connect (FC Topologies)
A topology defines the interconnection scheme and the number of devices that
can be connected. Fibre Channel supports three different logical or physical
arrangements (topologies) for connecting devices into a network:
• Point-to-Point
• Arbitrated Loop (AL)
• Switched (Fabric)
The physical connection between devices varies from one topology to another.
In all of these topologies, a transmitter node in one device sends information
to a receiver node in another device. Fibre Channel networks can use any
combination of point-to-point, arbitrated loop (FC_AL), and switched fabric
topologies to provide a variety of device-sharing options.
Point-to-point
A point-to-point topology consists of two and only two devices whose N_Ports
are connected directly. In this topology, the transmit fibre of one device
connects to the receive fibre of the other device and vice versa. The
connection is not shared with any other devices. Simplicity and use of the
full data transfer rate make the point-to-point topology an ideal extension to
the standard SCSI bus interface. The point-to-point topology extends SCSI
connectivity from a server to a peripheral device over longer distances.
Arbitrated Loop
The arbitrated loop (FC_AL) topology provides a relatively simple method of
connecting and sharing resources. This topology allows up to 126 devices or
nodes in a single, continuous loop or ring. The loop is constructed by
daisy-chaining the transmit and receive cables from one device to the next, or
by using a hub or switch to create a virtual loop. The loop can be
self-contained or incorporated as an element in a larger network. Increasing
the number of devices on the loop can reduce the overall performance of the
loop, because the amount of time each device can use the loop is reduced. The
ports in an arbitrated loop are referred to as L_Ports.
Switched Fabric
Switched fabric is the term used in Fibre Channel to describe the generic
switching or routing structure that delivers a frame to a destination based on
the destination address in the frame header. It can be used to connect up to
16 million nodes, each of which is identified by a unique, world-wide name.
In a switched fabric, each data frame is transferred over a virtual
point-to-point connection. Any number of full-bandwidth transfers can occur
through the switch; devices do not have to arbitrate for control of the
network, and each device can use the full available bandwidth.
A fabric topology contains one or more switches connecting the ports in the
FC network. The benefit of this topology is that a very large number of
devices (approximately 2^24) can be connected. A port on a fabric switch is
called an F-Port (Fabric Port). Fabric switches can also function as alias
servers, multicast servers, broadcast servers, quality-of-service facilitators
and directory servers.
1.3.3 Basic elements
The following elements provide connectivity between storage and server components using Fibre Channel technology.
Cables and connectors
There are different types of cables of various lengths for use in a Fibre
Channel configuration. Two types of cables are supported: copper and optical
(fiber). Copper cables are used for short distances, supporting links of up to
30 meters. Fiber cables come in two distinct types: multi-mode fiber (MMF) for
short distances (up to 2 km), and single-mode fiber (SMF) for longer distances
(up to 10 kilometers). By default, the controller supports two short-wave
multi-mode fibre optic SFP connectors.
Fibre Channel Adapter
A Fibre Channel adapter is a device that connects to a workstation or server
and controls the electrical protocol for communications.
Hubs
Fibre Channel hubs are used to connect up to 126 nodes into a logical loop.
All connected nodes share the bandwidth of this one logical loop. Each port on
a hub contains a Port Bypass Circuit (PBC) that automatically opens and closes
the loop to support hot pluggability.
Switched Fabric
A switched fabric is the highest-performing option available for
interconnecting large numbers of devices, increasing bandwidth, reducing
congestion and providing aggregate throughput. Each device connects to a port
on the switch, enabling an on-demand connection to every other connected
device. Each node on a switched fabric uses an aggregate-throughput data path
to send or receive data.
1.3.4 LUN Masking
LUN masking is a RAID system-centric enforced method of masking multiple LUNs
behind a single port. Using the World Wide Port Names (WWPNs) of server HBAs,
LUN masking is configured at the RAID-array level. LUN masking also allows
disk storage resources to be shared across multiple independent servers: with
LUN masking, a single large RAID device can be sub-divided to serve a number
of different hosts attached to the RAID through the SAN fabric. Each LUN
inside the RAID device can be limited so that only one server, or a limited
set of servers, can see it.
LUN masking can be done either at the RAID device (behind the RAID port) or at
the server HBA. It is more secure to mask LUNs at the RAID device, but not all
RAID devices have LUN masking capability. For this reason, some HBA vendors
allow persistent binding at the driver level as a way to mask LUNs.
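Conceptually, LUN masking amounts to a lookup table keyed by LUN and WWPN. The sketch below is purely illustrative (the table and WWPN values are invented); a real array configures masking through its management interface, not application code:

```python
# Hedged sketch of WWPN-based LUN masking as described above.
# Each LUN maps to the set of HBA WWPNs allowed to see it.
# All values here are hypothetical examples.

lun_masks: dict[int, set[str]] = {
    0: {"21:00:00:e0:8b:05:05:04"},                             # one server only
    1: {"21:00:00:e0:8b:05:05:04", "21:00:00:e0:8b:01:02:03"},  # shared LUN
}

def host_can_see(lun: int, wwpn: str) -> bool:
    """Return True if the HBA with this WWPN may access the LUN."""
    return wwpn in lun_masks.get(lun, set())

# The second server sees the shared LUN 1 but not the private LUN 0:
assert host_can_see(1, "21:00:00:e0:8b:01:02:03")
assert not host_can_see(0, "21:00:00:e0:8b:01:02:03")
```

The array enforces this check on every login or I/O from a port, which is why masking at the RAID device is stronger than relying on each host's HBA driver to filter LUNs for itself.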
1.4 Array Definition
1.4.1 RAID Set
A RAID Set is a group of disks containing one or more volume sets. It has the
following features in the RAID subsystem controller:
1. Up to sixteen RAID Sets are supported per RAID subsystem controller.
2. Multiple RAID Sets cannot occupy the same disks.
A Volume Set must be created either on an existing RAID Set or on a group of
available individual disks (disks that are not yet part of a raid set). If
there are pre-existing raid sets with available capacity and enough disks for
the specified RAID level, the volume set will be created in the existing raid
set of the user’s choice. If physical disks of different capacities are
grouped together in a raid set, the capacity of the smallest disk becomes the
effective capacity of every disk in the raid set.
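The smallest-disk rule above can be expressed as a one-line calculation. The disk sizes used here are illustrative, and the figure is raw member capacity before any RAID-level overhead (parity or mirroring) is subtracted:

```python
# Sketch of the effective raid-set capacity rule: the smallest member
# determines the usable capacity of every member.

def raidset_capacity_gb(disk_sizes_gb: list[int]) -> int:
    """Raw usable capacity of a raid set, before RAID-level overhead."""
    return min(disk_sizes_gb) * len(disk_sizes_gb)

# Mixing a 250GB disk with two 400GB disks wastes 150GB on each larger disk:
print(raidset_capacity_gb([400, 400, 250]))  # 750, not 1050
```

This is why the manual groups equally sized drives where possible: any capacity above the smallest member is simply unused.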
1.4.2 Volume Set
A Volume Set is seen by the host system as a single logical device. It is
organized in a RAID level with one or more physical disks. RAID level refers
to the level of data performance and protection of a Volume Set. A Volume
Set’s capacity can consume all or a portion of the disk capacity available in
a RAID Set. Multiple Volume Sets can exist on a group of disks in a RAID Set.
Additional Volume Sets created in a specified RAID Set will reside on all the
physical disks in the RAID Set; thus each Volume Set on the RAID Set has its
data spread evenly across all the disks in the RAID Set.
1. Volume Sets of different RAID levels may coexist on the same RAID Set.
2. Up to 16 volume sets can be created in a RAID set.
In the illustration below, Volume 1 can be assigned a RAID 5 level of operation while Volume 0 might be assigned a RAID 0+1 level of operation.
1.4.3 Ease of Use features
1.4.3.1 Instant Availability/Background Initialization
RAID 0 and RAID 1 volume sets can be used immediately after creation, but
RAID 3, 5 and 6 volume sets must be initialized to generate the parity. With
Normal Initialization, the initialization proceeds as a background task and
the volume set remains fully accessible for system reads and writes. The
operating system can access the newly created arrays immediately, without
requiring a reboot or waiting for the initialization to complete. Furthermore,
the RAID volume set is protected against a single disk failure while
initializing. With Fast Initialization, the initialization must be completed
before the volume set is ready for system access.
1.4.3.2 Array Roaming
The RAID subsystem stores configuration information both in NVRAM and on the
disk drives, which protects the configuration settings in the case of a disk
drive or controller failure. Array roaming gives administrators the ability to
move a complete raid set to another system without losing the RAID
configuration and data on that raid set. If a server fails, the raid set disk
drives can be moved to another server and inserted in any order.
1.4.3.3 Online Capacity Expansion
Online Capacity Expansion makes it possible to add one or more physical drives
to a volume set while the server is in operation, eliminating the need to back
up and restore data after reconfiguring the raid set. When disks are added to
a raid set, unused capacity is added to the end of the raid set. Data on the
existing volume sets residing on that raid set is redistributed evenly across
all the disks, and a contiguous block of unused capacity is made available on
the raid set. This unused capacity can be used to create additional volume
sets. The expansion process is illustrated in the following figure.
The RAID subsystem controller redistributes the original volume set over the
original and newly added disks, using the same fault-tolerance configuration.
The unused capacity on the expanded raid set can then be used to create
additional volume sets, with a different fault-tolerance setting if the user
needs to change it.
1.4.3.4 Online RAID Level and Stripe Size Migration
The user can migrate both the RAID level and the stripe size of an existing volume set while the server is online and the volume set is in use. Online RAID level/stripe size migration can prove helpful during performance tuning activities, as well as when additional physical disks are added to the RAID subsystem. For example, in a system using two drives in RAID level 1, you could add capacity and retain fault tolerance by adding one drive. With the addition of a third disk, you have the option of adding this disk to your existing RAID logical drive and migrating from RAID level 1 to 5. The result would be parity fault tolerance and double the available capacity, without taking the system offline.
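The capacity arithmetic behind the migration example above can be sketched in a few lines. This is generic RAID math for intuition only, not the controller's firmware logic; the 500 GB drive size and the helper name are assumptions for illustration.

```python
def usable_capacity(num_disks, disk_gb, raid_level):
    """Approximate usable capacity in GB for common RAID levels.

    Generic textbook formulas; a real controller may reserve extra space.
    """
    if raid_level == 0:
        return num_disks * disk_gb        # striping, no redundancy
    if raid_level == 1:
        return disk_gb                    # two-disk mirror keeps one copy
    if raid_level in (3, 5):
        return (num_disks - 1) * disk_gb  # one disk's worth of parity
    if raid_level == 6:
        return (num_disks - 2) * disk_gb  # two disks' worth of parity
    raise ValueError("unsupported RAID level")

# Two 500 GB drives mirrored as RAID 1 give 500 GB usable; adding a third
# drive and migrating to RAID 5 doubles the usable capacity to 1000 GB
# while retaining (parity) fault tolerance.
print(usable_capacity(2, 500, 1), usable_capacity(3, 500, 5))  # 500 1000
```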
1.4.4 High Availability
1.4.4.1 Creating Hot Spares
A hot spare drive is an unused, online, available drive that is ready to replace a failed disk drive. In a RAID level 1, 0+1, 3, 5 or 6 raid set, any unused online drive that is installed but does not belong to a raid set can be defined as a hot spare drive. Hot spares permit you to replace failed drives without powering down the system. When the RAID subsystem detects a UDMA drive failure, the system automatically and transparently rebuilds using a hot spare drive. The raid set will be reconfigured and rebuilt in the background while the RAID subsystem continues to handle system requests. During the automatic rebuild process, system activity will continue as normal; however, system performance and fault tolerance will be affected.
!
Important:
The hot spare must have at least the same capacity as the drive it replaces.
1.4.4.2 Hot-Swap Disk Drive Support
The RAID subsystem has a built-in protection circuit that supports the replacement of UDMA hard disk drives without having to shut down or reboot the system. The removable hard drive trays deliver "hot swappable," fault-tolerant RAID solutions at a price much lower than that of conventional SCSI hard disk RAID subsystems. This feature provides advanced fault-tolerant RAID protection and "online" drive replacement.
1.4.4.3 Hot-Swap Disk Rebuild
The hot-swap function can be used to rebuild disk drives in arrays with data redundancy, such as RAID levels 1, 0+1, 3, 5 and 6. If a hot spare is not available, the failed disk drive must be replaced with a new disk drive so that the data on the failed drive can be rebuilt. If a hot spare is available, the rebuild starts automatically when a drive fails. The RAID subsystem automatically and transparently rebuilds failed drives in the background at a user-definable rebuild rate. The RAID subsystem will automatically restart the system and the rebuild if the system is shut down or powered off abnormally during a reconstruction procedure. While a disk is being hot-swapped, the system remains functionally operational but may no longer be fault tolerant. Fault tolerance will be lost until the removed drive is replaced and the rebuild operation is completed.
Chapter 2
Getting Started
Getting started with the subsystem consists of the following steps:
• Unpack the storage subsystem.
• Identify the parts of the subsystem.
• Connect the Fibre cables.
• Power on the subsystem.
• Install the hard drives.
2.1 Unpacking the Subsystem
Before continuing, first unpack the subsystem and verify that the contents of
the shipping carton are all there and in good condition. Before removing the
subsystem from the shipping carton, visually inspect the physical condition of
the shipping carton. Exterior damage to the shipping carton may indicate that
the contents of the carton are damaged. If any damage is found, do not remove the components; contact the dealer where the subsystem was purchased
for further instructions.
The package contains the following items:
• RAID subsystem unit
• Two power cords
• Two external Fibre optical cables
• One external null modem cable
• One external UPS cable
• One RJ-45 ethernet cable
• Installation Reference Guide
• Spare screws, etc.
If any of these items are missing or damaged, please contact your dealer or
sales representative for assistance.
2.2 Identifying Parts of the subsystem
The illustrations below identify the various features of the subsystem. Familiarize yourself with these terms; it will help as you read the following sections.
2.2.1 Front View
(Front view illustration: twelve HDD trays arranged in three columns of four, with Slots 1-4 in the left column, Slots 5-8 in the middle column, and Slots 9-12 in the right column, numbered bottom to top; callouts 1-5 identify the parts listed below.)
1. HDD trays 1 ~ 12
2. HDD status indicator

Parts: HDD Status LEDs
Function: A green LED indicates power is on and the hard drive status is good for this slot. If there is no hard drive, the LED is red. If the hard drive in this slot is defective or has failed, the LED is orange.

Parts: HDD Access LEDs
Function: These LEDs blink blue when the hard drive is being accessed.

3. LCD display panel
4. Smart Function Panel - Function keys for RAID configuration
The smart LCD panel is where you configure the RAID subsystem. If you are configuring the subsystem using the LCD panel, press the controller buttons to configure your RAID subsystem.
Parts: Up and Down arrow buttons
Function: Use the Up or Down arrow keys to go through the information on the LCD screen. They are also used to move between menus when you configure the subsystem.

Parts: Select button
Function: Used to enter the option you have selected.

Parts: Exit button
Function: Press this button to return to the previous menu.
5. Environment status

Parts: Voltage warning LED
Function: If the output DC voltage is above or below the +3.3V, +5V or +12V rating, an alarm will sound warning of the voltage abnormality and this LED will turn red. (+3.3V: ±5%, +5V: ±5%, +12V: ±10%)

Parts: Over temp LED
Function: If a temperature irregularity occurs in the system (HDD slot temperature over 55°C), this LED will turn red and an alarm will sound.

Parts: Fan fail LED
Function: When a fan's rotation speed drops below 2600 rpm, this LED will turn red and an alarm will sound.

Parts: Power fail LED
Function: If a redundant power supply fails, this LED will turn red and an alarm will sound.

Parts: Power LED
Function: A green LED indicates power is on.

Parts: Access LED
Function: A blinking blue LED indicates data is being accessed.
2.2.2 Rear View
(Rear view illustration: callouts 1-11 identify the parts listed below.)
1. System power on/off switch
2. Host Channel B
The subsystem is equipped with two host channels (Host Channel A and Host Channel B). Each host channel has one optical LC Fibre connector at the rear of the subsystem for connecting to a Fibre Hub/Switch or to the server's Fibre interface.
Link LED: A green LED indicates the Fibre channel link is up.
Activity LED: The LED blinks blue when the Fibre channel is being accessed.
3. Host Channel A
Connect to Host’s Fibre adapter.
4. Fan Fail indicator
If a fan fails, this LED will turn red.
5. Cooling Fan module
Two blower fans are located at the rear of the subsystem. They provide sufficient airflow and heat dispersion inside the chassis. If a fan fails to function, the Fan fail LED will turn red and an alarm will sound. You will also see an error message appear on the LCD screen warning you of the fan failure.
6. Power Supply Alarm Reset button
You can push the power supply reset button to stop the power supply buzzer
alarm.
7. AC power input socket 1 ~ 2 (From left to right)
8. Power Supply Unit 1 ~ 2 (From left to right)
Two power supplies (power supply 1 and power supply 2) are located at the rear of the subsystem. Turn on the power of both power supplies to power on the subsystem. The "Power" LED on the front panel will turn green.
If a power supply fails to function or was not turned on, the Power fail LED will turn red and an alarm will sound. An error message will also appear on the LCD screen warning of the power failure.
9. R-Link Port: Remote Link through RJ-45 ethernet for remote management
The subsystem is equipped with one 10/100 Ethernet RJ45 LAN port. You can use a web browser to manage the RAID subsystem through the Ethernet port for remote configuration and monitoring.
Link LED: A green LED indicates the ethernet link is up.
Link speed LED: An orange LED indicates the link speed is 100Mbps; the LED does not light when the link speed is 10Mbps.
10. Uninterrupted Power Supply (UPS) Port
The subsystem may come with an optional UPS port allowing you to connect a
UPS device. Connect the cable from the UPS device to the UPS port located
at the rear of the subsystem. This will automatically allow the subsystem to
use the functions and features of the UPS.
11. Monitor Port
The subsystem is equipped with a serial monitor port allowing you to connect
a PC or terminal.
2.3 Connecting to Fibre HBA
The subsystem supports a Fibre interface that provides data transfer rates up to 200MB/s. This section describes the location of the host channels and gives instructions on connecting external Fibre devices.
1. Configure the Loop ID of the subsystem, or use dynamic LIP.
2. The package comes with two fibre optical cables. For each pair of host channel fibre connectors at the rear of the subsystem, attach one end of a fibre optical cable to one of the fibre connectors and the other end to the host adapter's external fibre connector or to the fibre Hub/Switch. (The host adapter is installed in your host system.)

!
Note:
Connect the RX of the subsystem's fibre connector to the TX of the host adapter's external fibre connector or of the fibre Hub/Switch. Conversely, connect the TX of the subsystem's fibre connector to the RX of the host adapter's external fibre connector or of the fibre Hub/Switch.

3. Connect the other host system using the other fibre optical cable if you want to configure the subsystem for multi-host attachment.
!
Note:
For safety reasons, make sure the Disk Array and Host Computer are turned off when you plug in the Fibre cable.
2.4 Powering-on the Subsystem
After connecting the Disk Array to the host computer, press the ON/OFF power supply switch. This turns the Disk Array on, and the self-test will start automatically.
1. Plug in all the power cords or power connectors located at the rear of the subsystem.

!
Note:
The subsystem is equipped with redundant PFC (power factor correction), full-range power supplies. The subsystem automatically selects the input voltage.

2. Turn on the power.
3. The "Power" LED on the front panel will turn green. After a few moments the LCD should display the following message; the "*" indicates that the fibre cable is connected properly:

{Model Name}
xxx.xxx.xxx.xxx *
2.5 Install Hard Drives
This section describes the physical locations of the hard drives supported by
the subsystem and gives instructions on installing a hard drive. The subsystem
supports hot-swapping allowing you to install or replace a hard drive while the
subsystem is running.
1. Pull out an empty disk tray. (You can install a drive in any available slot.)
2. Remove the tray bracket before installing the hard drive.
3. Place the hard drive in the disk tray.
4. Install the mounting screws on each side to secure the drive in the mobile rack.
   Note: Insert the screws through the front sides of the mounting holes.
5. Slide the tray into a slot until it clicks into place. The HDD status LED on the front panel will turn green.
6. Press the lever in until you hear the latch click into place.
7. If the HDD status LED does not turn green, check that the hard drive is in good condition.
8. If the hard drive is not being accessed, the HDD access LED will not illuminate. The LED blinks only when the drive is being accessed.
2.6 Connecting an Uninterrupted Power Supply (UPS)
The subsystem is equipped with a UPS port located at the rear of the system
unit. It allows you to connect the UPS fail signal from a UPS device.

UPS port pin assignment:
Pin 1: Not used
Pin 2: UPS Line Fail
Pin 3: Not used
Pin 4: UPS Common
Pin 5: Not used
Pin 6: Not used
Pin 7: Not used
Pin 8: Not used
Pin 9: Not used

!
Note:
The UPS connection is compliant with NetWare UPS management; smart-mode UPS is not supported.
2.7 Connecting to a PC or Terminal
The subsystem is equipped with a serial monitor port located at the rear of the
system unit. This serves as an alternative display when accessing the setup
utility.
Monitor port pin assignment:
Pin 1: Data Carrier Detect (DCD)
Pin 2: Receive Data (RD)
Pin 3: Transmit Data (TD)
Pin 4: Data Terminal Ready (DTR)
Pin 5: Signal Ground (SG)
Pin 6: Data Set Ready (DSR)
Pin 7: Request To Send (RTS)
Pin 8: Clear To Send (CTS)
Pin 9: Ring Indicator (RI)

Note:
Refer to Chapter 3 for instructions on accessing the setup utility through a PC or terminal, as well as instructions on setting the baud rate, stop bits, data bits and parity of your monitor or terminal. The default setting of the monitor port is 115200 baud, 8 data bits, no parity, 1 stop bit and no flow control.
Chapter 3
Configuring
The subsystem has a setup configuration utility built in containing important
information about the configuration as well as settings for various optional
functions in the subsystem. This chapter explains how to use and
make changes to the setup utility.
Configuration Methods
There are three methods of configuring the subsystem. You may configure
through the following methods:
• VT100 terminal connected through the controller’s serial port
• Front panel touch-control keypad
• Web browser-based Remote RAID management via the R-Link ethernet port
!
Important:
The subsystem allows you to access the setup utility using only one method at a time. You cannot use more than one method simultaneously.
3.1 Configuring through a Terminal
Configuring through a terminal will allow you to use the same configuration
options and functions that are available from the LCD panel. To start-up:
1. Connect a VT100-compatible terminal, or a PC operating in an equivalent terminal emulation mode, to the monitor port located at the rear of the subsystem.

Note:
You may connect a terminal while the subsystem's power is on.

2. Power on the terminal.
3. Run the VT100 program or an equivalent terminal program.
4. The default setting of the monitor port is 115200 baud, 8 data bits, no parity, 1 stop bit and no flow control.
5. Click the disconnect button.
6. Open the File menu, and then open Properties.
7. Open the Settings tab.
8. Configure the settings as follows: Function, arrow and ctrl keys act as: Terminal Keys; Backspace key sends: Ctrl+H; Emulation: VT100; Telnet terminal: VT100; Back scroll buffer lines: 500. Click OK.
9. Now the VT100 terminal is ready to use. After you have finished the VT100 terminal setup, press the "X" key (in your terminal) to link the RAID subsystem and terminal together. Press the "X" key to display the disk array Monitor Utility screen on your VT100 terminal.
10. The Main Menu will appear.
Keyboard Function Key Definitions
"A" key - move to the line above
"Z" key - move to the next line
"Enter" key - submit the selected function
"ESC" key - return to the previous screen
"L" key - line draw
"X" key - redraw
Main Menu
The main menu shows all function that enables the customer to execute actions by clicking on the appropriate link.
Note:
The password option allows the user to set or clear the RAID subsystem's password protection feature. Once the password has been set, the user can only monitor and configure the RAID subsystem by providing the correct password. The password is used to protect the internal RAID subsystem from unauthorized entry. The controller checks the password only when entering the Main Menu from the initial screen. The RAID subsystem automatically returns to the initial screen when it does not receive any command for twenty seconds. The RAID subsystem password is set to 00000000 by the manufacturer by default.
VT100 Terminal Configuration Utility Main Menu Options
Select an option and the related information or submenu items will display beneath it. The submenus for each item are explained in section 3.3. The configuration utility main menu options are:

Option: Quick Volume And Raid Set Setup
Description: Create a RAID configuration consisting of the number of physical disks installed

Option: Raid Set Functions
Description: Create a customized raid set

Option: Volume Set Functions
Description: Create a customized volume set

Option: Physical Drive Functions
Description: View individual disk information

Option: Raid System Functions
Description: Set the raid system configurations

Option: Fibre Channel Config
Description: Set the Fibre Channel configurations

Option: Ethernet Configuration
Description: Set the Ethernet configurations

Option: View System Events
Description: Record all system events in the buffer

Option: Clear Event Buffer
Description: Clear all event buffer information

Option: Hardware Monitor
Description: Show all system environment status

Option: System Information
Description: View the controller information
3.2 Configuring the Subsystem Using the LCD Panel
The LCD display front panel function keys are the primary user interface for the Disk Array. Except for firmware update, all configuration can be performed through this interface. The LCD provides a system of screens with areas for information, status indication, or menus. The LCD screen displays up to two lines at a time of menu items or other information. The RAID subsystem password is set to 00000000 by the manufacturer by default.
Function Key Definitions
The four function keys at the top of the front panel perform the following functions :
Parts: Up or Down arrow buttons
Function: Use the Up or Down arrow keys to go through the information on the LCD screen. They are also used to move between menus when you configure the subsystem.

Parts: Select button
Function: Used to enter the option you have selected.

Parts: Exit button
Function: Press this button to return to the previous menu.
3.3 Menu Diagram
The following tree diagram is a summary of the various configuration and setting functions that can be accessed through the LCD panel menus or the terminal monitor.
Quick Volume / Raid Setup
  Raid 0
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Select Stripe Size (4K, 8K, 16K, 32K, 64K, 128K)
    Create Vol / Raid Set (Yes, No)
  Raid 1 or 0+1
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Select Stripe Size (4K, 8K, 16K, 32K, 64K, 128K)
    Create Vol / Raid Set (Yes, No)
  Raid 0+1 + Spare
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Select Stripe Size (4K, 8K, 16K, 32K, 64K, 128K)
    Create Vol / Raid Set (Yes, No)
  Raid 3
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Create Vol / Raid Set (Yes, No)
  Raid 5
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Select Stripe Size (4K, 8K, 16K, 32K, 64K, 128K)
    Create Vol / Raid Set (Yes, No)
  Raid 3 + Spare
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Create Vol / Raid Set (Yes, No)
  Raid 5 + Spare
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Select Stripe Size (4K, 8K, 16K, 32K, 64K, 128K)
    Create Vol / Raid Set (Yes, No)
  Raid 6
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Select Stripe Size (4K, 8K, 16K, 32K, 64K, 128K)
    Create Vol / Raid Set (Yes, No)
  Raid 6 + Spare
    Greater Two TB Volume Support (No, Use 64Bit LBA, For Windows)
    Selected Capacity
    Select Stripe Size (4K, 8K, 16K, 32K, 64K, 128K)
    Create Vol / Raid Set (Yes, No)

Raid Set Function
  Create Raid Set
    Select IDE Drives for Raid Set (Ch01 ~ Ch12)
    Edit The Raid Set Name
    Create Raid Set (Yes, No)
  Delete Raid Set
    Select Raid Set To Delete
    Delete Raid Set (Yes, No)
    Are you sure? (Yes, No)
  Expand Raid Set
    Select IDE Drives for Raid Set Expansion (Select Drives IDE Channel, Chxx ~ Ch12)
    Expand Raid Set (Yes, No)
  Activate Raid Set
    Select Raid Set To Activate
    Activate Raid Set
    Are You Sure? (Yes, No)
  Create Hot Spare Disk
    Select Drives for Hot Spare, max 3 hot spares supported (Chxx ~ Ch12)
    Create Hot Spare (Yes, No)
  Delete Hot Spare Disk
    Select The Hot Spare Device To Be Deleted
    Delete Hot Spare (Yes, No)
  Raid Set Information
    Select Raid Set To Display

Volume Set Function
  Create Volume Set
    Create Volume From Raid Set
    Volume Creation (Greater Two TB Volume Support, Volume Name, Raid Level, Capacity, Stripe Size, Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing)
    Create Volume (Yes, No)
    Initialization Mode (Foreground, Background)
  Delete Volume Set
    Delete Volume From Raid Set
    Select Volume To Delete
    Delete Volume Set (Yes, No)
    Are you sure? (Yes, No)
  Modify Volume Set
    Modify Volume From Raid Set
    Select Volume To Modify
    Volume Modification (Volume Name, Raid Level, Capacity, Stripe Size, Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing)
    Modify Volume (Yes, No)
    Are you sure? (Yes, No)
  Check Volume Set
    Check Volume From Raid Set
    Select Volume To Check
    Check The Volume (Yes, No)
  Stop Volume Check
    Stop All Volume Check
    Are you sure? (Yes, No)
  Display Volume Info.
    Display Volume Info in Raid
    Select Volume To Display

Physical Drives
  View Drive Information
    Select The Drives
  Create Pass Through Disk
    Select The Drives (Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing) (Yes, No)
  Modify Pass Through Disk
    Select The Drives (Fibre Host#, LUN Base, Fibre LUN, Cache Mode, Tag Queuing) (Yes, No)
  Delete Pass Through Disk
    Select The Drives
    Delete Pass Through (Yes, No)
    Are you sure? (Yes, No)
  Identify Selected Drive
    Select The Drives (Yes, No)

Raid System Function
  Mute The Alert Beeper (Yes, No)
  Alert Beeper Setting (Disabled, Enabled)
    Save The Settings (Yes, No)
  Change Password
    Enter New Password
    Re-Enter Password
    Save The Password (Yes, No)
  JBOD / RAID Function (RAID, JBOD)
    Configured As JBOD? Are you sure? (Yes, No)
  Background Task Priority (UltraLow (5%), Low (20%), Medium (50%), High (80%))
    Save The Settings (Yes, No)
  Maximum SATA Mode (SATA150, SATA150+NCQ, SATA300, SATA300+NCQ)
  HDD Read Ahead Cache (Enable, Disable Maxtor, Disable)
  Stagger Power On (0.4, 0.7, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0)
  Disk Write Cache Mode (Auto, Enabled, Disabled)
  Capacity Truncation (To Multiples of 10G, To Multiples of 1G, Disabled)
  Terminal Port Config
    Baud Rate (1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200)
    Stop Bits (1 bit, 2 bits)
  Update Firmware
  Restart Controller
    Are you sure? (Yes, No)

Fibre Channel Config
  Channel 0 Speed (Auto, 1Gb, 2Gb, 4Gb)
  Channel 0 Topology (Auto, Loop, Point-Point, Fabric)
  Channel 0 Hard Loop ID (0 ~ 125)
  Channel 1 Speed (Auto, 1Gb, 2Gb, 4Gb)
  Channel 1 Topology (Auto, Loop, Point-Point, Fabric)
  Channel 1 Hard Loop ID (0 ~ 125)

Ethernet Configuration
  DHCP Function (Disabled, Enabled)
  Local IP Address
  HTTP Port Number: 80
  Telnet Port Number: 23
  SMTP Port Number: 25

View System Events
  Show System Events

Clear Event Buffer
  Clear Event Buffer (Yes, No)

Hardware Monitor
  The Hardware Monitor Information

System Information
  The System Information
3.4 Web Browser-based Remote RAID Management via R-Link Ethernet Port
Configuration of the internal RAID subsystem with remote RAID management is a web browser-based application which utilizes the browser installed on your operating system. Web browser-based remote RAID management can be used to manage all RAID functions.
To configure the internal RAID subsystem on a remote machine, you need to know its IP address. Launch your web browser by entering http://[IP Address] in the address bar.

!
Important:
The Ethernet default IP is "192.168.001.100" and the DHCP function is "enabled". You can configure the correct IP address through the LCD panel or the terminal "Ethernet Configuration" menu.
Note that you must be logged in as an administrator with local admin rights on the remote machine to configure it remotely. The RAID subsystem controller's default User Name is "admin" and the Password is "00000000".
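The default IP shown above is printed with zero-padded octets ("192.168.001.100"), which some browsers and tools reject. The helper below is a generic sketch, not vendor software; it strips the padding and builds the management URL, and the function name is an assumption for illustration.

```python
import ipaddress

def normalize_ip(padded):
    # Convert each octet through int() to drop leading zeros, then
    # validate the result as a proper IP address.
    addr = ".".join(str(int(part)) for part in padded.split("."))
    ipaddress.ip_address(addr)  # raises ValueError if the address is invalid
    return addr

url = "http://" + normalize_ip("192.168.001.100")
print(url)  # http://192.168.1.100
```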
Main Menu
The main menu shows all the functions and enables the user to execute actions by clicking on the appropriate link.
Individual Category and Description:

Quick Create: Create a RAID configuration consisting of the number of physical disks installed; the volume set capacity, RAID level, and stripe size can be modified.

Raid Set Functions: Create a customized raid set.

Volume Set Functions: Create customized volume sets and modify the parameters of existing volume sets.

Physical Drive: Create pass-through disks and modify the parameters of existing pass-through drives. Also provides a function to identify the respective disk drive.

System Control: Set the raid system configurations.

Information: View the controller and hardware monitor information. The raid set hierarchy can also be viewed through the RaidSet Hierarchy item.
Configuration Procedures
Below are a few practical examples of concrete configuration procedures.
3.5 Quick Create
The number of physical drives in the RAID subsystem determines the RAID levels that can be implemented with the raid set. You can create a raid set associated with exactly one volume set. The user can change the RAID level, capacity, volume initialization mode and stripe size. A hot spare option is also created, depending upon the existing configuration.
If the volume size is over 2TB, the option "Greater Two TB Volume Support" is shown automatically in the menu above. There are three choices for this option: "No", "64bit LBA" and "For Windows".
Greater Two TB Volume Support:
No: keeps the volume size within the maximum 2TB limitation.
64bit LBA: maximum size 512TB; for Unix or Linux.
For Windows: maximum size 16TB; used only with the "basic disk manager" under Windows 2000, 2003 or XP. Note that it cannot be used with the dynamic disk manager.
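The 2TB boundary mentioned above falls out of 32-bit logical block addressing with the traditional 512-byte sector: 2^32 sectors of 512 bytes is exactly 2 TiB. The quick check below is generic arithmetic, not vendor firmware; the 512TB and 16TB ceilings for the other two options are the manual's stated limits and are not derived here.

```python
SECTOR_BYTES = 512   # traditional sector size (an assumption of this sketch)
LBA_BITS = 32        # classic 32-bit LBA field

# Largest addressable capacity with a 32-bit LBA and 512-byte sectors.
max_bytes = (2 ** LBA_BITS) * SECTOR_BYTES
print(max_bytes // 2 ** 40, "TiB")  # 2 TiB: the limit behind the "No" option
```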
Tick on the Confirm The Operation checkbox and click on the Submit button in the Quick Create screen; the raid set and volume set will start to initialize.
After you complete the Quick Create function, refer to section 3.9.2 Fibre Channel Configuration to complete the RAID configuration. You can then use the RaidSet Hierarchy feature to view the Fibre channel volume set host filter information (refer to section 3.10.1).
Note: In Quick Create your volume set is automatically configured based on the number of disks in your system. Use the Raid Set Function and Volume Set Function if you prefer to customize your system.
3.6 Raid Set Functions
Use the Raid Set Function and Volume Set Function if you prefer to customize your system. Manual configuration gives the user full control of the raid set settings, but it takes longer to complete than the Quick Volume/Raid Setup configuration. Select the Raid Set Function to manually configure the raid set for the first time, or to delete an existing raid set and reconfigure it. A raid set is a group of disks containing one or more volume sets.
3.6.1 Create Raid Set
To create a raid set, click on the Create Raid Set link. A "Select The IDE Drive For RAID Set" screen is displayed showing the IDE drives connected to the current controller. Click to select the physical drives to include in the raid set. Enter 1 to 15 alphanumeric characters to define a unique identifier for the raid set; the default raid set name will always appear as Raid Set #.
Tick on the Confirm The Operation checkbox and click on the Submit button; the raid set will start to initialize.
3.6.2 Delete Raid Set
To delete a raid set, click on the Delete Raid Set link. A "Select The RAID SET To Delete" screen is displayed showing all raid sets existing on the current controller. Click the raid set number you wish to delete in the select column.
Tick on the Confirm The Operation checkbox and click on the Submit button to delete it.
3.6.3 Expand Raid Set
Use this option to expand a raid set when a disk is added to your system. This function is active when at least one drive is available.
To expand a raid set, click on the Expand Raid Set link. Select the target raid set that you want to expand.
Tick on the available disks and the Confirm The Operation checkbox, and then click on the Submit button to add the disks to the raid set.

Note:
1. Once the Expand Raid Set process has started, the user cannot stop it. The process must be completed.
2. If a disk drive fails during raid set expansion and a hot spare is available, an auto-rebuild operation will occur after the raid set expansion completes.

Migration occurs when a disk is added to a raid set. Migration status is displayed in the raid status area of the Raid Set information when a disk is added to a raid set. Migration status is also displayed in the associated volume status area of the Volume Set information when a disk is added to a raid set.
3.6.4 Activate Incomplete Raid Set
When one of the disk drives is removed while the power is off, the raid set state changes to Incomplete State. If the user wants to continue working when the RAID subsystem is powered on, the Activate Raid Set option can be used to activate the raid set. After this function completes, the Raid State changes to Degraded Mode.
To activate the incomplete raid set, click on the Activate Raid Set link. A "Select The RAID SET To Activate" screen is displayed showing all raid sets existing on the current controller. Click the raid set number you wish to activate in the select column.
Click on the Submit button to activate the raid set that had a disk drive removed in the power-off state. The RAID subsystem will continue to work in degraded mode.
3.6.5 Create Hot Spare
When you choose the Create Hot Spare option in the Raid Set Function, all
unused physical devices connected to the current controller appear:
Select the target disk by clicking on the appropriate check box. Tick on the
Confirm The Operation, and click on the Submit button in the screen to
create the hot spares.
The Create Hot Spare option gives you the ability to define a global hot spare.
3.6.6 Delete Hot Spare
Select the target Hot Spare disk to delete by clicking on the
appropriate check box.
Tick on the Confirm The Operation, and click on the Submit button in the
screen to delete the hot spares.
3.6.7 Rescue Raid Set
If you need to rescue a missing raid set, please contact our engineers for assistance.
3.7 Volume Set Function
A volume set is seen by the host system as a single logical device. It is organized in a RAID level with one or more physical disks. RAID level refers to the level of data performance and protection of a volume set. A volume set's capacity can consume all or a portion of the disk capacity available in a raid set. Multiple volume sets can exist on a group of disks in a raid set. Additional volume sets created in a specified raid set will reside on all the physical disks in the raid set; thus each volume set on the raid set will have its data spread evenly across all the disks in the raid set.
3.7.1 Create Volume Set
The volume set features are as follows:
1. Volume sets of different RAID levels may coexist on the same raid set.
2. Up to 16 volume sets can be created by the RAID subsystem controller in a raid set.
To create a volume set from a raid set, move the cursor bar to the main menu and click on the Create Volume Set link. The Select The Raid Set To Create On It screen will show all raid set numbers. Tick on the raid set number on which you want to create the volume and then click on the Submit button.
The new volume set creation allows the user to select the volume name, capacity, RAID level, stripe size, Fibre channel/LUN, cache mode, and tag queuing.
Volume Name:
The default volume name will always appear as Volume Set #. You can rename the volume set, provided the name does not exceed the 15-character limit.
Raid Level:
Set the RAID level for the Volume Set. Highlight Raid Level and press Enter.
The available RAID levels for the current Volume Set are displayed. Select a
RAID level and press Enter to confirm.
Capacity:
The maximum volume size is the default in the first setting. Enter the appropriate volume size to fit your application.
Greater Two TB Volume Support: If volume size over 2TB, it will be provided one option “Creater TwoTB Volume Support” Automatically.
No: still keep the volume size with max. 2TB limitation.
64bit LBA: the max. size 512TB, for Unix or Linux.
For Windows: the max. size 16TB , just use with “ basic disk manager
“ under OS Window 2000, 2003 or XP. Noted that can’t be used by with
dynamic disk manager.
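The 2TB boundary comes from 32-bit LBA addressing of 512-byte sectors; the arithmetic can be checked with a short sketch (the function name is ours, purely for illustration):

```python
SECTOR_SIZE = 512  # bytes per sector, the traditional value

def max_capacity_tib(lba_bits: int) -> float:
    """Maximum addressable capacity, in TiB, for a given LBA width."""
    return (2 ** lba_bits) * SECTOR_SIZE / 2 ** 40

# 32-bit LBA gives the classic 2TB ceiling that the "No" option preserves.
print(max_capacity_tib(32))  # 2.0
# 64-bit LBA removes the addressing limit; the 512TB and 16TB figures
# quoted above are firmware and operating system limits, not LBA limits.
```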
Initialization Mode:
Set the initialization mode for the volume set. Foreground mode completes the
initialization faster; Background mode makes the volume available immediately.
Strip Size:
This parameter sets the size of the stripe written to each disk in a RAID 0, 1,
0+1, or 5 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32
KB, 64 KB, or 128 KB.
A larger stripe size produces better read performance, especially if your
computer does mostly sequential reads. However, if you are sure that your
computer does random reads more often, select a smaller stripe size.
Note: The strip size cannot be modified for RAID level 3.
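To see why stripe size matters, the mapping from a logical block to a physical disk in a plain RAID 0 layout can be sketched as follows (an illustrative model, not the controller's actual firmware logic):

```python
def locate(lba: int, stripe_blocks: int, n_disks: int):
    """Map a logical block address to (disk index, block on that disk)
    for a simple RAID 0 layout striped across n_disks drives."""
    stripe_no, offset = divmod(lba, stripe_blocks)
    disk = stripe_no % n_disks          # stripes rotate across the disks
    block_on_disk = (stripe_no // n_disks) * stripe_blocks + offset
    return disk, block_on_disk

# With 64 KB stripes (128 blocks of 512 bytes) on 4 disks, consecutive
# 64 KB chunks land on disks 0, 1, 2, 3, 0, 1, ...
assert locate(0, 128, 4) == (0, 0)
assert locate(128, 128, 4) == (1, 0)
```

A large sequential read touches all disks in turn, while many small random reads each stay within one stripe, which is why a smaller stripe size can help random workloads.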
Cache Mode:
The RAID subsystem supports Write-Through Cache and Write-Back Cache.
Tag Queuing:
The Enabled option is useful for enhancing overall system performance under
multi-tasking operating systems. The Command Tag (Drive Channel) function controls the Fibre command tag queuing support for each drive channel. This
function should normally remain enabled. Disable this function only when using
older Fibre drives that do not support command tag queuing
Fibre Channel/LUN Base/LUN:
Fibre Channel: Two 4Gbps Fibre channels can be connected to the internal RAID
subsystem. Choose the Fibre Host# option: 0, 1, or 0&1 cluster.
LUN Base: Each Fibre device attached to the Fibre card, as well as the card
itself, must be assigned a unique Fibre ID number. A Fibre channel can connect
up to 128 (0 to 127) devices. The RAID subsystem appears as one large Fibre
device. Assign a LUN base from the list of Fibre LUNs.
LUN: Each Fibre LUN base can support up to 8 LUNs. Most Fibre Channel host
adapters treat each LUN like a Fibre disk.
3.7.2
Delete Volume Set
To delete a volume set, move the cursor bar to the main menu and click on the
Delete Volume Set link. The Select The Volume Set To Delete screen will show
all raid set numbers. Tick on a raid set number and the Confirm The Operation,
and then click on the Submit button to show all volume set items in the
selected raid set. Tick on a volume set number and the Confirm The Operation,
and then click on the Submit button to delete the volume set.
3.7.3
Modify Volume Set
To modify a volume set from a raid set:
(1). Click on the Modify Volume Set link.
(2). Tick on the volume set from the list that you wish to modify. Click on the
Submit button.
The following screen appears.
Use this option to modify the volume set configuration. To modify volume set
attribute values, move the cursor bar to the volume set attribute menu and
click on it. The modify value screen appears. Move the cursor bar to an
attribute item, and then click on the attribute to modify the value. After you
complete the modification, tick on the Confirm The Operation and click on the
Submit button to complete the action. The user can modify all values except
the capacity.
3.7.3.1 Volume Expansion
Volume Capacity (Logical Volume Concatenation Plus Re-stripe)
Use the raid set expansion function to expand a raid set when a disk is added
to your system (refer to section 3.6.3).
The expanded capacity can be used to enlarge the volume set size or to create
another volume set. The Modify Volume Set function supports volume set
expansion. To expand the volume set capacity, move the cursor bar to the
Volume Capacity item and enter the new capacity size.
Tick on the Confirm The Operation and click on the Submit button to complete
the action. The volume set starts to expand.
3.7.4
Volume Set Migration
Migration occurs when a volume set migrates from one RAID level to another,
when a volume set's strip size changes, or when a disk is added to a raid set.
The migration status is displayed in the volume status area of the RaidSet
Hierarchy screen while the migration is in progress.
3.7.5
Check Volume Set
To check a volume set from a raid set:
(1). Click on the Check Volume Set link.
(2). Tick on the volume set from the list that you wish to check. Tick on Confirm The Operation and click on the Submit button.
Use this option to verify the correctness of the redundant data in a volume set.
For example, in a system with dedicated parity, volume set check means computing the parity of the data disk drives and comparing the results to the contents of the dedicated parity disk drive. The checking percentage can also be
viewed by clicking on RaidSet Hierarchy in the main menu.
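For a dedicated-parity layout, the check described above amounts to recomputing the byte-wise XOR of the data drives and comparing it with the parity drive. A minimal sketch (function names are ours, for illustration only):

```python
from functools import reduce

def parity_block(data_blocks):
    """Dedicated parity: byte-wise XOR of the corresponding data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_blocks))

def check_volume(data_blocks, stored_parity):
    """A volume set check recomputes the parity and compares it with the
    contents of the dedicated parity drive."""
    return parity_block(data_blocks) == stored_parity

data = [b"\x0f\x01", b"\xf0\x02", b"\x00\x04"]
assert parity_block(data) == b"\xff\x07"
assert check_volume(data, b"\xff\x07")
```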
3.7.6
Stop VolumeSet Check
Use this option to stop the Check Volume Set function.
3.7.7
Volume Set Host Filters
Use this option to View/Edit Host Filters. Refer to section 3.9.2.2 View/Edit
Volume Set Host Filters for more information. You should complete the Fibre
Channel Configuration first before you use this option.
3.8
Physical Drive
Choose this option from the Main Menu to select a physical disk and to perform the operations listed below.
3.8.1
Create Pass-Through Disk
To create a pass-through disk, move the mouse cursor to the main menu
and click on the Create Pass-Through link. The relevant setting function
screen appears.
A pass-through disk is not controlled by the internal RAID subsystem firmware
and thus cannot be part of a volume set. The disk is available to the operating
system as an individual disk. It is typically used on a system where the
operating system is on a disk not controlled by the RAID firmware. The user
can also select the cache mode, Tagged Command Queuing, and Fibre
channel/LUN Base/LUN for this volume.
3.8.2
Modify Pass-Through Disk
Use this option to modify the pass-through disk attributes. The user can modify
the cache mode, Tagged Command Queuing, and Fibre channel/LUN Base/LUN
on an existing pass-through disk.
To modify the pass-through drive attributes from the pass-through drive pool,
move the mouse cursor bar and click on the Modify Pass-Through link. The
Select The Pass-Through Disk For Modification screen appears. Tick on the
pass-through disk from the pass-through drive pool and click on the Submit
button to select the drive.
The Enter Pass-Through Disk Attribute screen appears; modify the drive
attribute values as you want.
3.8.3
Delete Pass-Through Disk
To delete a pass-through drive from the pass-through drive pool, move the
mouse cursor bar to the main menu and click on the Delete Pass-Through link.
After you complete the selection, tick on the Confirm The Operation and click
on the Submit button to complete the delete action.
3.8.4
Identify Selected Drive
To prevent removing the wrong drive, the selected disk's LED will light to
physically locate the selected disk when Identify Selected Drive is selected.
To identify the selected drive from the drive pool, move the mouse cursor bar
and click on the Identify Selected Drive link. The Select The IDE Device For
Identification screen appears. Tick on the IDE device from the drive pool and
the Flash method. After completing the selection, click on the Submit button
to identify the selected drive.
3.9
3.9.1
System Configuration
System Configuration
To set the raid system function, move the cursor bar to the main menu and
click on the Raid System Function link. The Raid System Function menu will
show all items. Select the desired function.
System Beeper Setting:
The Alert Beeper function item is used to disable or enable the RAID
subsystem controller's alarm tone generator.
RAID Rebuild Priority:
The Raid Rebuild Priority is a relative indication of how much time the
controller devotes to a rebuild operation. The RAID subsystem allows the user
to choose the rebuild priority (UltraLow, Low, Medium, High) to balance volume
set access and rebuild tasks appropriately. For high array performance, specify
a Low value.
Terminal Port Configuration:
Speed setting values are 1200, 2400, 4800, 9600, 19200, 38400, 57600, and
115200.
Stop Bits values are 1 bit and 2 bits.
Note: Parity value is fixed at None.
Data Bits value is fixed at 8 bits.
JBOD/RAID Configuration
The RAID subsystem supports JBOD and RAID configuration.
Maximum SATA Mode Supported:
The 12 SATA drive channels can support up to SATA II, which runs at up to
300MB/s.
NCQ is a command protocol in Serial ATA that can only be implemented on native
Serial ATA hard drives. It allows multiple commands to be outstanding within a
drive at the same time. Drives that support NCQ have an internal queue where
outstanding commands can be dynamically rescheduled or re-ordered, along with
the necessary tracking mechanisms for outstanding and completed portions of
the workload. The RAID subsystem allows the user to choose the SATA mode:
SATA150, SATA150+NCQ, SATA300, SATA300+NCQ.
HDD Read Ahead Cache:
This option allows the user to disable the read-ahead cache of the HDDs on
the RAID subsystem. For some HDD models, disabling the cache in the HDD is
necessary to ensure that the RAID subsystem functions correctly.
Stagger Power On Control:
This option allows the power supply to power up each HDD on the RAID
subsystem in sequence. Previously, all the HDDs on the RAID subsystem were
powered up at the same time. The power transfer time (lag time) from one
HDD to the next can be set within the range of 0.4 to 6.0 seconds.
(Default: 0.7)
Disk Write Cache Mode:
The RAID subsystem supports Auto, Enabled, and Disabled settings. When the
RAID subsystem has a BBM (battery backup module) installed, the Auto option
enables the disk write cache; otherwise, the Auto option disables it.
Disk Capacity Truncation Mode:
This RAID subsystem uses drive truncation so that drives from differing vendors
are more likely to be usable as spares for each other. Drive truncation slightly
decreases the usable capacity of a drive that is used in redundant units.
Multiples Of 10G: If you have 120 GB drives from different vendors, chances
are that the capacities vary slightly. For example, one drive might be 123.5 GB,
and the other 120 GB. The Multiples Of 10G truncation mode uses the same
capacity for both of these drives so that one could replace the other.
Multiples Of 1G: If you have 123 GB drives from different vendors, chances
are that the capacities vary slightly. For example, one drive might be 123.5 GB,
and the other 123.4 GB. The Multiples Of 1G truncation mode uses the same
capacity for both of these drives so that one could replace the other.
No Truncation: It does not truncate the capacity.
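The three truncation modes reduce a drive's usable capacity to a common boundary; the arithmetic can be sketched as follows (decimal gigabytes, as drive vendors use; the function is ours, for illustration):

```python
GB = 1000 ** 3  # drive vendors count capacity in decimal gigabytes

def truncate(capacity_bytes: int, mode: str) -> int:
    """Illustrative sketch of the three truncation modes described above."""
    if mode == "Multiples Of 10G":
        unit = 10 * GB
    elif mode == "Multiples Of 1G":
        unit = GB
    else:  # "No Truncation"
        return capacity_bytes
    return (capacity_bytes // unit) * unit  # round down to the boundary

# The 123.5 GB and 120 GB drives from the example both truncate to
# 120 GB under Multiples Of 10G, so either drive can replace the other.
assert truncate(int(123.5 * GB), "Multiples Of 10G") == 120 * GB
assert truncate(int(120.0 * GB), "Multiples Of 10G") == 120 * GB
```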
3.9.2
Fibre Channel Config
To set the Fibre Channel function, move the cursor bar to the main menu and
click on the Fibre Channel Config. The Raid System Fibre Channel Function
menu will show all items. Select the desired function.
WWNN (World Wide Node Name)
The WWNN of the FC RAID is shown at the top of the Config Frame. This is an
eight-byte unique address factory-assigned to the FC RAID, common to both FC
channels.
WWPN (World Wide Port Name)
Each FC channel has its unique WWPN, which is also factory assigned. Usually,
the WWNN:WWPN tuple is used to uniquely identify a port in the Fabric.
Channel Speed
Each FC channel can be configured as 1 Gbps, 2 Gbps, 4 Gbps, or use the
“Auto” option for automatic speed negotiation between 1G and 4G. The
controller default is “Auto”, which should be adequate under most conditions.
The Channel
Speed setting takes effect for the next connection. That means a link down or bus
reset should be applied for the change to take effect. The current connection
speed is shown at end of the row. You have to click the “Fibre Channel Config”
link again from the Menu Frame to refresh display of current speed.
Channel Topology
Each FC channel can be configured as Fabric, Point-to-Point, Loop, or Auto
topology. The controller default is “Auto” topology, which takes precedence
over Loop topology. A firmware restart is needed for any topology change to
take effect.
The current connection topology is shown at end of the row. You have to click the
“Fibre Channel Config” link again from the Menu Frame to refresh display of
current topology. Note that current topology is shown as “None” when no successful connection is made for the channel.
Hard Loop ID
This setting is effective only under Loop topology. When enabled, you can
manually set the Loop ID in the range from 0 to 125. Make sure this hard-assigned
ID does not conflict with any other device on the same loop; otherwise the
channel will be disabled. It is good practice to disable the hard loop ID and let
the loop itself auto-arrange the Loop ID.
3.9.2.1
View/Edit Host Name List
To set up LUN masking for each volume, a host list should be established first.
This is done by clicking the “View/Edit Host Name List” link at the bottom of
the “Fibre Channel Config” page (refer to section 3.9.2). Only hosts that will
be used as include/exclude filters need to be added.
The subsystem provides two ways to add a host to the list:
1. Select WWN From Detected Host.
2. Key in: first enter the WWPN (exactly 16 hex digits) of the host in the “Host
WWN” text field. An optional host nickname (up to 23 ASCII characters) can be
given for descriptive purposes.
Choose “Add” operation, then Confirm/Submit to complete the add operation.
The added host will be shown in the upper half of the Config Frame. Up to 20
hosts can be added.
To delete a host from the list, select the radio button in front of the host list.
Choose “Delete” operation, then Confirm/Submit to complete the delete operation.
Once volumes are created and the host name list is established, Volume Set
Host Filters can be specified by clicking the “View/Edit Volume Set Host
Filters” link at the bottom of the “Fibre Channel Config” page. Volume Set Host
Filters can also be specified by clicking “Volume Set Functions” → “Volume Set
Host Filters” from the Menu Frame. Select the volume for LUN masking and
then click the Submit button.
3.9.2.2 Volume Set Host Filters
Volume Set Host Filters can be specified by clicking the “View/Edit Volume Set
Host Filters” link at the bottom of the “Fibre Channel Config” page. Volume Set
Host Filters can also be specified by clicking “Volume Set Functions” →
“Volume Set Host Filters” from the Menu Frame.
To add a host filter entry, first select the host to be included/excluded from
the Host WWN list.
Adjust the Range Mask, Filter Type, and Access Mode fields. Choose the “Add”
operation, then Confirm/Submit to complete the add operation. The added host
filter entry will be shown in the upper half of the Config Frame. Up to 8 host
filter entries can be added.
To delete a host filter entry from the list, select the radio button in front of the host
entry.
Choose “Delete” operation, then Confirm/Submit to complete the delete operation.
Range Mask
Sometimes it is convenient to combine several correlated WWNs into one ID
range as a single filter entry. This ID range is obtained by AND'ing the host
WWN and the Range Mask (which are both 64-bit entities). For example, if hosts
0x210000e0_8b03dc84 and 0x210000e0_8b03dc85 have the same access control,
they can be combined as a single filter entry (using either WWN as the host
WWN) with the Range Mask set to 0xFFFFFFFF_FFFFFFFE. Note that under most
circumstances, the Range Mask is left as 0xFFFFFFFF_FFFFFFFF, which means
that only a single WWN is specified for that filter entry.
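The AND'ing rule above can be expressed directly in code; a node's WWN matches a filter entry when both agree on every bit the Range Mask keeps (a sketch, not the controller's firmware):

```python
MASK_ALL = 0xFFFFFFFFFFFFFFFF  # the default: the entry matches one exact WWN

def in_id_range(node_wwn: int, entry_wwn: int, range_mask: int) -> bool:
    """A WWN falls in an entry's ID range when AND'ing both with the
    64-bit Range Mask yields the same value."""
    return (node_wwn & range_mask) == (entry_wwn & range_mask)

# The two hosts from the example differ only in the lowest bit, so a
# mask that clears that bit combines them into one filter entry.
mask = 0xFFFFFFFFFFFFFFFE
assert in_id_range(0x210000E08B03DC84, 0x210000E08B03DC85, mask)
assert not in_id_range(0x210000E08B03DC86, 0x210000E08B03DC84, mask)
```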
Filter Type
Each filter entry can be set to include or exclude certain host(s) from data access.
If a node’s WWN falls in an ID range specified as Exclude, the related volume set
will be “invisible” to this node and no data access is possible.
If a node’s WWN falls in an ID range specified as Include and does not fall in any ID
range specified as Exclude, this node will be allowed to access the data of the
related volume set.
The access mode can be specified as normal “Read/Write” or restricted “Read
Only”.
If a node’s WWN falls in none of the ranges and there is at least one Include-type
entry specified, this node is considered as Excluded; otherwise, it is considered
as Included.
Note that when no Filter Entries are specified for a volume set, any node can
access the volume set as there is no LUN masking.
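Putting the Include/Exclude rules together, the access decision for one node can be sketched as follows (the data layout and function name are ours, for illustration only):

```python
def access_allowed(node_wwn, entries):
    """Evaluate the filter rules above.  Each entry is a tuple
    (entry_wwn, range_mask, filter_type) with filter_type either
    "Include" or "Exclude"."""
    def matches(entry):
        wwn, mask, _ = entry
        return (node_wwn & mask) == (wwn & mask)

    if not entries:
        return True                      # no filter entries: no LUN masking
    if any(matches(e) for e in entries if e[2] == "Exclude"):
        return False                     # falls in an Exclude range
    if any(matches(e) for e in entries if e[2] == "Include"):
        return True                      # falls in an Include range only
    # Falls in no range: excluded if any Include entry exists.
    return not any(e[2] == "Include" for e in entries)

rules = [(0x210000E08B03DC84, 0xFFFFFFFFFFFFFFFF, "Include")]
assert access_allowed(0x210000E08B03DC84, rules)
assert not access_allowed(0x210000E08B03DC99, rules)
```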
Access mode
For certain applications, it is desired to limit the data access as “Read Only” such
that the data on the volume won’t be accidentally modified. This can be done by
setting the Access Mode of the Included ID range as “Read Only”. However,
some Operating Systems (e.g. Linux) may ignore this “Write Protect” attribute
and still issue write commands to the protected volumes.
“Data Protected” errors will then be returned in response to these write
commands. It is suggested to mount the volumes as “Read Only” for consistent
behavior, if possible.
3.9.3
EtherNet Config
To set the EtherNet function, move the cursor bar to the main menu and click
on the EtherNet Config. The Raid System EtherNet Function menu will show
all items. Select the desired function.
3.9.4
Alert By Mail Config
To set the Event Notification function, move the cursor bar to the main menu
and click on the Alert By Mail Config. The Raid System Event Notification
Function menu will show all items. Select the desired function. When an
abnormal condition occurs, an error message will be emailed to the
administrator to report that a problem has occurred. Events are classified into
4 levels (urgent, serious, warning, message).
3.9.5
SNMP Configuration
The SNMP gives users independence from the proprietary network management
schemes of some manufacturers and SNMP is supported by many WAN and LAN
manufacturers enabling true LAN/ WAN management integration.
To set the SNMP function, move the cursor bar to the main menu and click on
the SNMP Configuration. The Raid System SNMP Function menu will show all
items. Select the desired function.
SNMP Trap Configurations: Type the SNMP Trap IP Address. The Port default is 162.
SNMP System Configuration:
Community: The default is Public.
(1) sysContact.0; (2) sysLocation.0; (3) sysName.0: SNMP parameters (31 bytes
max). If these 3 categories are selected during initial setup, then when an error
occurs SNMP will send out a message that includes the 3 categories. This
allows the user to easily identify which RAID unit is having a problem. Once
this setting is done, the alert-by-mail configuration will also work in the
same way.
SNMP Trap Notification Configurations: Select the desired function.
After you complete the addition, tick on the Confirm The Operation and click
on the Submit button to complete the action.
3.9.6
NTP Configuration
NTP stands for Network Time Protocol, an Internet-standard protocol used to
synchronize the clocks of computers to a time reference. You can directly type
your NTP server IP address so the RAID subsystem can work with it.
To set the NTP function, move the cursor bar to the main menu and click on
the NTP Configuration. The Raid System NTP Function menu will show all
items. Select the desired function.
Key in the NTP server IP, select the Time Zone, and get the NTP time. Set
Automatic Daylight Saving by region. “NTP Time Got At” shows when NTP time
was last obtained.
After you complete the addition, tick on the Confirm The Operation and click
on the Submit button to complete the action.
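For reference, NTP counts seconds from 1 January 1900, and a minimal SNTP client request is a 48-byte packet; the essentials can be sketched as follows (illustrative only, unrelated to the subsystem's internal implementation):

```python
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 (NTP) to 1970-01-01 (Unix)

def make_sntp_request() -> bytes:
    """Build a minimal 48-byte SNTP client request: LI=0, VN=3, Mode=3
    in the first byte, the remaining 47 bytes zero."""
    first_byte = (0 << 6) | (3 << 3) | 3
    return struct.pack("!B", first_byte) + bytes(47)

def ntp_to_unix(ntp_seconds: int) -> int:
    """The reply's Transmit Timestamp counts seconds since 1900."""
    return ntp_seconds - NTP_EPOCH_OFFSET

pkt = make_sntp_request()
assert len(pkt) == 48 and pkt[0] == 0x1B
assert ntp_to_unix(NTP_EPOCH_OFFSET) == 0   # NTP offset maps to the Unix epoch
```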
3.9.7
View Events
To view the RAID subsystem's event information, move the mouse cursor to
the main menu and click on the View Events link. The Raid Subsystem Events
Information screen appears.
Choose this option to view the system event information: Timer, Device, Event
Type, Elapsed Time, and Errors. The RAID system does not have a built-in
real-time clock; the Time information is the elapsed time since the RAID
subsystem powered on.
3.9.8
Generate Test Events
If you want to generate test events, move the cursor bar to the main menu and
click on the Generate Test Events. Tick on the Confirm The Operation, and
click on the Submit button in the screen to generate a test event. Then click
on the View Events/Mute Beeper to view the test event.
3.9.9
Clear Events Buffer
Use this feature to clear the entire events buffer information.
3.9.10 Modify Password
To set or change the RAID subsystem password, move the mouse cursor to Raid
System Function screen, and click on the Change Password link. The Modify
System Password screen appears.
The password option allows the user to set or clear the raid subsystem's
password protection feature. Once the password has been set, the user can
only monitor and configure the raid subsystem by providing the correct
password.
The password is used to protect the internal RAID subsystem from unauthorized
entry. The controller will check the password only when entering the Main
menu from the initial screen. The RAID subsystem will automatically go back
to the initial screen when it does not receive any command for ten seconds.
To disable the password, press the Enter key alone in both the Enter New
Password and Re-Enter New Password fields. Once the user confirms the
operation and clicks the Submit button, the existing password will be cleared.
No password checking will occur when entering the main menu from the
starting screen.
3.9.11 Upgrade Firmware
Please refer to section 4.2 for more information.
3.10 Information Menu
3.10.1 RaidSet Hierarchy
Use this feature to view the internal raid subsystem's current raid set, volume
set, and physical disk configuration. Click the volume set number you wish to
view in the Select column. You can then view the Volume Set Information and
Fibre Channel Volume Set Host Filters.
3.10.2
System Information
To view the RAID subsystem controller’s information, move the mouse cursor to
the main menu and click on the System Information link. The Raid Subsystem
Information screen appears.
Use this feature to view the raid subsystem controller’s information. The controller name, firmware version, serial number, main processor, CPU data/Instruction
cache size and system memory size/speed appear in this screen.
3.10.3 Hardware Monitor
To view the RAID subsystem controller’s hardware monitor information, move the
mouse cursor to the main menu and click the Hardware Monitor link. The Hardware Information screen appears.
The Hardware Monitor Information provides the temperature, fan speed
(chassis fan), and voltages of the internal RAID subsystem. All items are
read-only. Warnings are indicated through the LCD, LED, and alarm buzzer.
Item                            Warning Condition
Controller Board Temperature    > 60 Celsius
HDD Temperature                 > 55 Celsius
Controller Fan Speed            < 1900 RPM
Power Supply +12V               < 10.8V or > 13.2V
Power Supply +5V                < 4.5V or > 5.5V
Power Supply +3.3V              < 2.97V or > 3.63V
DDR Supply Voltage +2.5V        < 2.25V or > 2.75V
CPU Core Voltage +1.3V          < 1.17V or > 1.43V
DDR Termination Power +1.25V    < 1.125V or > 1.375V
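The buzzer/LED logic behind the table can be pictured as a simple window check; the limits below transcribe three voltage rows of the table (the names and data structure are ours, not a controller API):

```python
# (low, high) warning windows for a few monitored voltages, from the table
LIMITS = {
    "Power Supply +12V":  (10.8, 13.2),
    "Power Supply +5V":   (4.5, 5.5),
    "Power Supply +3.3V": (2.97, 3.63),
}

def warnings(readings):
    """Return the names of monitored items whose reading falls outside
    its (low, high) window, i.e. the items that would raise a warning."""
    return [name for name, value in readings.items()
            if not (LIMITS[name][0] <= value <= LIMITS[name][1])]

assert warnings({"Power Supply +12V": 12.1}) == []
assert warnings({"Power Supply +5V": 4.4}) == ["Power Supply +5V"]
```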
3.11 Creating a New RAID or Reconfiguring an Existing
RAID
You can configure raid sets and volume sets using Quick Create or Raid Set
Functions/Volume Set Functions configuration method. Each configuration
method requires a different level of user input. The general flow of operations
for raid set and volume set configuration is:
Step  Action
1     Designate hot spares/pass-through (optional).
2     Choose a configuration method.
3     Create raid set using the available physical drives.
4     Define volume set using the space in the raid set.
5     Initialize the volume set and use volume set in the HOST OS.
Chapter 4
Array Maintenance
This chapter provides more information about your disk array. The following
items are described in detail:
• Memory Upgrades
• Updating Firmware
• Hot Swap Components
4.1 Memory Upgrades
The subsystem is equipped with one DDR SDRAM socket. By default, your disk
array comes with 128MB of memory, expandable to a maximum of 1024MB.
These expansion memory modules can be purchased from your dealer.
Memory Type: 2.5V PC2100/1600 DDR266 SDRAM, 184-pin, ECC, unbuffered.
Memory Size: Supports 184-pin DDR modules of 64MB, 128MB, 256MB, 512MB
or 1GB.
Height: 1.15 inches (29.2 mm).
4.1.1 Installing Memory Module:
1. Unscrew and pull out the controller module.
2. Unscrew and take off the cover of the controller module.
3. Remove the DIMM memory from the RAM socket. Then press the new
memory module firmly into the socket, making sure that all the contacts are
aligned with the socket. Push the memory module forward to a horizontal
position.
4.2
Upgrading the Firmware
Upgrading Flash Firmware Programming Utility
Since the RAID subsystem controller features flash firmware, it is not necessary
to change the hardware flash chip in order to upgrade the RAID firmware. The
user can simply re-program the old firmware through the RS-232 port. New releases of the firmware are available in the form of a DOS file at OEM’s FTP. The
file available at the FTP site is usually a self-extracting file that contains the
following:
XXXXVVV.BIN Firmware Binary (where “XXXX” refers to the model name and
“VVV” refers to the firmware version)
README.TXT It contains the history information of the firmware change. Read
this file first before upgrading the firmware.
These files must be extracted from the compressed file and copied to one directory in drive A or C.
Establishing the Connection for the RS-232
The firmware can be downloaded to the RAID subsystem controller by using an
ANSI/VT-100 compatible terminal emulation program or Remote web browser
management. You must complete the appropriate installation procedure before
proceeding with this firmware upgrade. Whichever terminal emulation program
is used must support the ZMODEM file transfer protocol.
Web browser-based RAID management can also be used to update the
firmware. You must complete the appropriate installation procedure before
proceeding with this firmware upgrade.
Upgrading Firmware Through ANSI/VT-100 Terminal Emulation
Get the new firmware version for your RAID subsystem controller. For example,
download the BIN file from your OEM's web site onto the C: drive.
1. From the Main Menu, scroll down to “Raid System Function”
2. Choose the “Update Firmware”. The Update The Raid Firmware dialog box
appears.
3. Go to the tool bar and select Transfer. Open Send File.
4. Select ZMODEM under Protocol as the file transfer protocol of your terminal
emulation software.
5. Click Browse. Look in the location where the Firmware upgrade software is
located. Select the File name:
“6160FIRM.BIN” and click open.
6. Click Send to send the firmware binary to the controller.
7. When the firmware finishes downloading, the confirmation screen appears.
Press Yes to start programming the flash ROM.
8. When the Flash programming starts, a bar indicator will show “ Start Updating
Firmware. Please Wait:”.
9. The Firmware upgrade will take approximately thirty seconds to complete.
10. After the Firmware upgrade is complete, a bar indicator will show “ Firmware
Has Been Updated Successfully”.
NOTE:
The user has to reconfigure all of the settings after the firmware upgrade is
complete, because all settings will revert to the original default values.
Upgrading Firmware Through Web Browser Management
Get the new version firmware for your RAID subsystem controller.
1. To upgrade the RAID subsystem firmware, move the cursor to Upgrade Firmware link. The Upgrade The Raid System Firmware screen appears.
2. Click Browse. Look in the location where the Firmware upgrade software is
located. Select the File name:
“6160FIRM.BIN” and click open.
3. Tick on the Confirm The Operation and press the Submit button.
4. The Web Browser begins to download the firmware binary to the controller
and start to update the flash ROM.
5. After the firmware upgrade is complete, a bar indicator will show “ Firmware
Has Been Updated Successfully”
4.3 Hot Swap Components
The disk array supports hot-swappable disk trays, power supply modules and
cooling fan unit. The following sections describe how to remove and install the
“Hot-Swap” parts without interrupting the data access while the disk array is on.
4.3.1 Replacing a disk
To replace a disk, perform the following steps (Refer to 2.5 Installing hard
disks)
1. Open the tray lever by sliding the latch and wait for the drive to spin down.
The disk LED on the front panel will turn from green to red to indicate the
disk is powered down.
2. Lift the lever to disengage the disk tray from the slot.
3. Gently pull the disk tray out of the slot.
4. Replace the HDD.
5. Slide the tray into a slot until it clicks into place. The HDD status LED on
the front panel will turn green.
6. Press the lever in until you hear the latch click into place.
4.3.2 Replacing a Power Supply
1. Remove the screws located at the corners of the power supply. Place the
screws in a safe place, as you will need them later when you install a new
power supply.
2. Use the handle to pull out the defective power supply.
3. Replace it with a 350W power supply.
4. Slide the new power supply in until it clicks into place.
5. Replace the screws you removed in step 1.
6. After installing the new power supply unit, push the power supply reset
switch to stop the buzzer alarm. The new power supply unit will link with
the other unit immediately and will start working after you press the power
supply reset switch, and the buzzer warning will stop.
4.3.3 Replacing a Fan
1. Unscrew the fan holder.
2. Disconnect the fan cable that connects the backplane and the fan.
3. The fans are attached to the fan holder. Remove the screws on the corners
of the defective fan. Place the screws in a safe place, as you will need them
later when you install a new fan.
Note: We recommend that you remove the fan holder from the subsystem.
This allows easy installation and unlimited workspace when replacing the fan.
4. Install a new fan using the screws you removed in step 3.
5. Replace the fan holder.
6. Reconnect the fan cable.
Appendix A
Technical Specification
RAID processor                 Intel 80321 RISC 64-bit
RAID level                     0, 1, 3, 5, 6, 0+1 and JBOD
Cache memory                   Up to 1024MB DDR SDRAM ECC unbuffered
No. of channels (host+disk)    2+12
Host bus interface             FC-AL (4Gb/s * 2)
Data transfer                  Up to 400MB/sec
Back Plane Board               S-ATA II
Hot swap disk bays             12
Hot swap power supply          350W * 2 w/PFC
Cooling fan                    2
On-line expansion              Yes
Multiple RAID selection        Yes
Failed disk auto rebuild       Yes
Array Roaming                  Yes
Bad block auto-remapping       Yes
Online RAID level migration    Yes
Audible alarm                  Yes
Host Independent               Yes
Failed drive indicators        Yes