SATA RAID Cards
ARC-1110/1120/1130/1160/1170
( 4/8/12/16/24-port PCI-X SATA RAID Controllers )
ARC-1110ML/1120ML/1130ML/1160ML
( 4/8-port InfiniBand connector and 12/16-port Multi-lane connector PCI-X SATA RAID Controllers )
ARC-1210/1220/1210ML/1220ML/1230/1260/1280
( 4/8/12/16/24-port PCI-Express SATA RAID Controllers )
ARC-1231ML/1261ML/1280ML
(12/16/24-port PCI-Express SATA RAID Controllers)
USER Manual
Version: 3.5
Issue Date: February, 2008
Microsoft WHQL Windows Hardware Compatibility Test
ARECA is committed to submitting products to the Microsoft Windows
Hardware Quality Labs (WHQL), which is required for participation in the Windows Logo Program. Successful passage of the WHQL tests results in both the “Designed for Windows” logo for qualifying ARECA
PCI-X and PCI-Express SATA RAID controllers and a listing on the Microsoft Hardware Compatibility List (HCL).
Copyright and Trademarks
The information of the products in this manual is subject to change without prior notice and does not represent a commitment on the part of the vendor, who assumes no liability or responsibility for any errors that may appear in this manual. All brands and trademarks are the properties of their respective owners. This manual contains materials protected under International Copyright Conventions. All rights reserved. No part of this manual may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without the written permission of the manufacturer and the author. All inquiries should be addressed to Areca Technology Corporation.
FCC STATEMENT
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC Rules.
These limits are designed to provide reasonable protection against interference in a residential installation. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation.
Contents
1. Introduction .............................................................. 10
1.1 Overview ....................................................................... 10
1.2 Features ........................................................................ 12
2. Hardware Installation ............................................... 16
2.1 Before You Begin Installation ........................................... 16
2.2 Board Layout .................................................................. 17
2.3 Installation ..................................................................... 23
3. McBIOS RAID Manager .............................................. 42
3.1 Starting the McBIOS RAID Manager ................................... 42
3.2 McBIOS RAID manager .................................................... 43
3.3 Configuring Raid Sets and Volume Sets .............................. 44
3.4 Designating Drives as Hot Spares ...................................... 44
3.5 Using Quick Volume/Raid Setup Configuration ..................... 45
3.6 Using RAID Set/Volume Set Function Method ...................... 46
3.7 Main Menu .................................................................... 48
3.7.1 Quick Volume/RAID Setup ........................................... 49
3.7.2 Raid Set Function ....................................................... 52
3.7.2.1 Create Raid Set .................................................... 53
3.7.2.2 Delete Raid Set ..................................................... 54
3.7.2.3 Expand Raid Set .................................................... 55
• Migrating ...................................................................... 56
3.7.2.4 Activate Incomplete Raid Set ................................... 56
3.7.2.5 Create Hot Spare ................................................... 57
3.7.2.6 Delete Hot Spare ................................................... 58
3.7.2.7 Raid Set Information .............................................. 58
3.7.3 Volume Set Function ................................................... 59
3.7.3.1 Create Volume Set ................................................. 59
• Volume Name ................................................................ 61
• Raid Level ..................................................................... 61
• Capacity ....................................................................... 62
• Stripe Size .................................................................... 64
• SCSI Channel ................................................................ 64
• SCSI ID ........................................................................ 65
• Cache Mode .................................................................. 66
• Tag Queuing .................................................................. 66
3.7.3.2 Delete Volume Set ................................................. 67
3.7.3.3 Modify Volume Set ................................................. 67
3.7.3.4 Check Volume Set .................................................. 69
3.7.3.5 Stop Volume Set Check .......................................... 70
3.7.3.6 Display Volume Set Info. ........................................ 70
3.7.4 Physical Drives ........................................................... 71
3.7.4.1 View Drive Information .......................................... 71
3.7.4.2 Create Pass-Through Disk ....................................... 72
3.7.4.3 Modify a Pass-Through Disk ..................................... 72
3.7.4.4 Delete Pass-Through Disk ....................................... 73
3.7.4.5 Identify Selected Drive ........................................... 73
3.7.5 Raid System Function ................................................. 74
3.7.5.1 Mute The Alert Beeper ........................................... 74
3.7.5.2 Alert Beeper Setting ............................................... 75
3.7.5.3 Change Password .................................................. 75
3.7.5.4 JBOD/RAID Function .............................................. 76
3.7.5.5 Background Task Priority ........................................ 77
3.7.5.6 Maximum SATA Mode ............................................. 77
3.7.5.7 HDD Read Ahead Cache ......................................... 78
3.7.5.8 Stagger Power On .................................................. 78
3.7.5.9 Empty HDD Slot LED ............................................. 79
3.7.5.10 HDD SMART Status Polling .................................... 80
3.7.5.11 Controller Fan Detection ....................................... 80
3.7.5.12 Disk Write Cache Mode ......................................... 81
3.7.5.13 Capacity Truncation .............................................. 81
3.7.6 Ethernet Configuration (12/16/24-port) ......................... 82
3.7.6.1 DHCP Function ...................................................... 83
3.7.6.2 Local IP address .................................................... 84
3.7.6.3 Ethernet Address ................................................... 85
3.7.7 View System Events ................................................... 85
3.7.8 Clear Events Buffer ..................................................... 86
3.7.9 Hardware Monitor ....................................................... 86
3.7.10 System Information .................................................. 86
4. Driver Installation ..................................................... 88
4.1 Creating the Driver Diskettes ............................................ 88
4.2 Driver Installation for Windows ......................................... 90
4.2.1 New Storage Device Drivers in Windows 2003/XP-64/Vista ...... 90
4.2.2 Install Windows 2000/XP/2003/Vista on a SATA RAID Volume .................................................................................. 90
4.2.2.1 Installation Procedures ........................................... 90
4.2.2.2 Making Volume Sets Available to Windows System ..... 92
4.2.3 Installing Controller into an Existing Windows 2000/
XP/2003/Vista Installation ................................................... 92
4.2.3.1 Making Volume Sets Available to Windows System ..... 94
4.2.4 Uninstall controller from Windows 2000/XP/2003/Vista .... 94
4.3 Driver Installation for Linux .............................................. 95
4.4 Driver Installation for FreeBSD .......................................... 96
4.5 Driver Installation for Solaris 10 ........................................ 96
4.6 Driver Installation for Mac 10.x ......................................... 96
4.7 Driver Installation for UnixWare 7.1.4 ................................ 97
4.8 Driver Installation for NetWare 6.5 .................................... 98
5. ArcHttp Proxy Server Installation ............................. 99
5.1 For Windows................................................................. 100
5.2 For Linux ..................................................................... 101
5.3 For FreeBSD ................................................................. 103
5.4 For Solaris 10 x86 ......................................................... 103
5.5 For Mac OS 10.x ........................................................... 103
5.6 ArcHttp Configuration .................................................... 104
6. Web Browser-based Configuration ......................... 107
6.1 Start-up McRAID Storage Manager ................................. 107
• Start-up McRAID Storage Manager from Windows Local Administration ........................................................................ 108
• Start-up McRAID Storage Manager from Linux/FreeBSD/Solaris/Mac Local Administration .......................................... 109
• Start-up McRAID Storage Manager Through Ethernet port
(Out-of-Band) ............................................................... 109
6.2 McRAID Storage Manager ............................................... 110
6.3 Main Menu .................................................................. 111
6.4 Quick Function .............................................................. 111
6.5 RaidSet Functions ......................................................... 112
6.5.1 Create Raid Set ....................................................... 112
6.5.2 Delete Raid Set ........................................................ 113
6.5.3 Expand Raid Set ....................................................... 113
6.5.4 Activate Incomplete Raid Set ..................................... 114
6.5.5 Create Hot Spare ..................................................... 115
6.5.6 Delete Hot Spare ...................................................... 115
6.5.7 Rescue Raid Set ....................................................... 115
6.5.8 Offline Raid Set ........................................................ 116
6.6 Volume Set Functions .................................................... 116
6.6.1 Create Volume Set .................................................... 116
• Volume Name .............................................................. 117
• Raid Level .................................................................. 117
• Capacity ..................................................................... 117
• Greater Two TB Volume Support ..................................... 117
• Initialization Mode ........................................................ 118
• Stripe Size .................................................................. 118
• Cache Mode ................................................................ 118
• Tag Queuing ................................................................ 119
6.6.2 Delete Volume Set .................................................... 119
6.6.3 Modify Volume Set .................................................... 119
6.6.3.1 Volume Growth ................................................... 120
6.6.3.2 Volume Set Migration ........................................... 121
6.6.4 Check Volume Set .................................................... 121
6.6.5 Stop VolumeSet Check .............................................. 122
6.7 Physical Drive .............................................................. 122
6.7.1 Create Pass Through Disk .......................................... 122
6.7.2 Modify Pass Through Disk .......................................... 123
6.7.3 Delete Pass Through Disk .......................................... 123
6.7.4 Identify Selected Drive .............................................. 124
6.8 System Controls ........................................................... 124
6.8.1 System Config ......................................................... 124
• System Beeper Setting ................................................. 124
• Background Task Priority ............................................... 124
• JBOD/RAID Configuration .............................................. 125
• Maximum SATA Supported ............................................. 125
• HDD Read Ahead Cache ................................................ 125
• Stagger Power on ........................................................ 125
• Empty HDD Slot LED .................................................... 126
• Disk Write Cache Mode ................................................. 127
• Disk Capacity Truncation Mode ....................................... 128
6.8.2 Ethernet Configuration (12/16/24-port) ....................... 129
6.8.3 Alert by Mail Configuration (12/16/24-port) ................ 130
6.8.4 SNMP Configuration (12/16/24-port) ........................... 130
• SNMP Trap Configurations ............................................. 131
• SNMP System Configurations ......................................... 131
• SNMP Trap Notification Configurations ............................. 131
6.8.5 NTP Configuration (12/16/24-port) ............................. 131
• NTP Server Address ....................................................... 132
• Time Zone ................................................................... 132
• Automatic Daylight Saving............................................. 132
6.8.6 View Events/Mute Beeper .......................................... 133
6.8.7 Generate Test Event ................................................. 133
6.8.8 Clear Events Buffer ................................................... 134
6.8.9 Modify Password ...................................................... 134
6.8.10 Update Firmware ................................................... 135
6.9 Information .................................................................. 135
6.9.1 RaidSet Hierarchy ..................................................... 135
6.9.2 System Information .................................................. 136
6.9.3 Hardware Monitor ..................................................... 136
Appendix A ................................................................. 138
Upgrading Flash ROM Update Process .................................... 138
Upgrading Firmware Through McRAID Storage Manager ........... 138
Upgrading Firmware Through nflash DOS Utility ...................... 140
Appendix B .................................................................. 142
Battery Backup Module (ARC6120-BAT-Txx) ........................... 142
BBM Components ........................................................... 142
Status of BBM ................................................................ 142
Installation .................................................................... 142
Battery Backup Capacity .................................................. 143
Operation ...................................................................... 143
Changing the Battery Backup Module ................................ 143
BBM Specifications .......................................................... 144
Appendix C .................................................................. 145
SNMP Operation & Definition ................................................ 145
Appendix D .................................................................. 152
Event Notification Configurations ........................................ 152
A. Device Event .............................................................. 152
B. Volume Event ............................................................. 153
C. RAID Set Event .......................................................... 154
D. Hardware Monitor Event .............................................. 154
Appendix E .................................................................. 156
RAID Concept ......................................................... 156
RAID Set ......................................................................... 156
Volume Set ...................................................................... 156
Ease of Use Features ......................................................... 157
• Foreground Availability/Background Initialization .............. 157
• Online Array Roaming ................................................... 157
• Online Capacity Expansion ............................................. 157
• Online RAID Level and Stripe Size Migration .................... 159
• Online Volume Expansion .............................................. 160
High availability .................................................................. 160
• Global Hot Spares .......................................................... 160
• Hot-Swap Disk Drive Support .......................................... 161
• Auto Declare Hot-Spare ................................................. 161
• Auto Rebuilding ............................................................ 162
• Adjustable Rebuild Priority .............................................. 162
High Reliability ................................................................... 163
• Hard Drive Failure Prediction ........................................... 163
• Auto Reassign Sector...................................................... 163
• Consistency Check ......................................................... 164
Data Protection .................................................................. 164
• Battery Backup ............................................................. 164
• Recovery ROM ............................................................... 165
Appendix F .................................................................. 165
Understanding RAID ........................................................... 165
• RAID 0 ......................................................................... 166
• RAID 1 ......................................................................... 166
• RAID 10 ....................................................................... 167
• RAID 3 ......................................................................... 167
• RAID 5 ......................................................................... 168
• RAID 6 ......................................................................... 169
Appendix G .................................................................. 172
Technical Support ............................................................... 172
1. Introduction
This section presents a brief overview of the SATA RAID Series controller, ARC-1110/1110ML/1120/1120ML/1130/1130ML/1160/
1160ML/1170 (4/8/12/16/24-port PCI-X SATA RAID Controllers) and
ARC-1210/1220/1210ML/1220ML/1230/1231ML/1260/1261ML/
1280/1280ML (4/8/12/16/24-port PCIe SATA RAID Controllers).
1.1 Overview
The ARC-11xx and ARC-12xx Series of high-performance Serial ATA
RAID controllers support a maximum of 4, 8, 12, 16, or 24 SATA
II peripheral devices (depending on model) on a single controller.
The ARC-11xx series is designed for the PCI-X bus and the ARC-12xx series for the PCI-Express bus. When properly configured, these SATA controllers provide non-stop service with a high degree of fault tolerance through the use of RAID technology and can also provide advanced array management features.
The 4- and 8-port SATA RAID controllers are low-profile PCI cards, ideal for 1U and 2U rack-mount systems. These controllers utilize the same RAID kernel that has been field-proven in Areca's existing external RAID controllers, allowing Areca to quickly bring stable and reliable RAID controllers to the market.
Unparalleled Performance
The SATA RAID controllers provide reliable data protection for desktops, workstations, and servers. These cards set the standard with enhancements that include a high-performance Intel I/O
Processor, a new memory architecture, and a high performance PCI bus interconnection. The 8/12/16/24-port controllers with the RAID
6 engine built-in can offer extreme-availability RAID 6 functionality.
This engine can concurrently compute two parity blocks with performance very similar to RAID 5. The controllers by default support
256MB of ECC SDRAM memory. The 12/16/24-port controllers support one DDR333 SODIMM socket that allows upgrading to 1GB of memory. The ARC-1231ML/1261ML/1280ML and ARC-1280 controllers provide one DDR2-533 DIMM socket that allows upgrading to 2GB of memory. The controllers use Marvell 4/8-channel SATA PCI-X controller chips, which can simultaneously communicate with the I/O processor and read or write data on multiple drives.
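As background for the dual-parity claim above, the sketch below shows the generic RAID 6 arithmetic (a plain XOR for the P block and a Galois-field weighted XOR for the Q block). This is an illustrative example of the general technique only, not Areca's firmware or hardware implementation.

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the reduction polynomial 0x11D."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a = (a << 1) ^ (0x11D if a & 0x80 else 0)
        b >>= 1
    return result & 0xFF

def raid6_parity(data_blocks):
    """Compute the two RAID 6 parity blocks for equally sized data blocks.
    P is the plain XOR used by RAID 5; Q weights block i by 2^i in GF(2^8),
    giving a second independent equation so any two lost blocks can be rebuilt."""
    p = bytearray(len(data_blocks[0]))
    q = bytearray(len(data_blocks[0]))
    coeff = 1
    for block in data_blocks:
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(coeff, byte)
        coeff = gf_mul(coeff, 2)      # next generator power, 2^i
    return bytes(p), bytes(q)

# Two data blocks plus P and Q: a 4-drive stripe that survives any two failures.
p, q = raid6_parity([b"\x10\x20\x30\x40", b"\x0a\x0b\x0c\x0d"])
```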
Unsurpassed Data Availability
As storage capacity requirements continue to rapidly increase, users require greater levels of disk drive fault tolerance, which can be implemented without doubling the investment in disk drives. RAID
1 (mirroring) provides high fault tolerance. However, half of the drive capacity of the array is lost to mirroring, making it too costly for most users to implement on large volume sets due to doubling the number of drives required. Users want the protection of RAID 1 or better with an implementation cost comparable to RAID 5. RAID
6 can offer fault tolerance greater than RAID 1 or RAID 5 but only consumes the capacity of 2 disk drives for distributed parity data.
The 8/12/16/24-port RAID controllers provide RAID 6 functionality to meet these demanding requirements.
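To make the cost comparison concrete, here is a small, hedged calculation using a hypothetical eight-drive array of 500 GB disks (illustrative arithmetic only; it ignores metadata overhead and capacity truncation settings):

```python
def usable_capacity_gb(level, drives, size_gb):
    """Approximate usable capacity for common RAID levels (illustration only)."""
    if level in ("RAID 1", "RAID 10"):
        return drives * size_gb // 2      # half the drives hold mirror copies
    if level == "RAID 5":
        return (drives - 1) * size_gb     # one drive's worth of parity
    if level == "RAID 6":
        return (drives - 2) * size_gb     # two drives' worth of parity
    return drives * size_gb               # RAID 0 / JBOD

for level in ("RAID 10", "RAID 5", "RAID 6"):
    print(level, usable_capacity_gb(level, drives=8, size_gb=500), "GB usable")
# RAID 10 -> 2000 GB, RAID 5 -> 3500 GB, RAID 6 -> 3000 GB
```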
The SATA RAID controllers also provide RAID levels 0, 1, 10, 3, 5,
6, Single Disk or JBOD configurations. Its high data availability and protection is derived from the following capabilities: Online RAID
Capacity Expansion, Array Roaming, Online RAID Level / Stripe
Size Migration, Dynamic Volume Set Expansion, Global Online
Spare, Automatic Drive Failure Detection, Automatic Failed Drive
Rebuilding, Disk Hot-Swap, Online Background Rebuilding and
Instant Availability/Background Initialization. During the controller firmware flash upgrade process, it is possible that an error results in corruption of the controller firmware. This could result in the device becoming non-functional. However, with our Redundant Flash image feature, the controller will revert back to the last known version of firmware and continue operating. This reduces the risk of system failure due to firmware crashes.
Easy RAID Management
The SATA RAID controller utilizes built-in firmware with an embedded terminal emulation that can be accessed via a hot key at the motherboard BIOS boot-up screen. This pre-boot manager utility can be used to simplify the setup and management of the RAID controller. The controller firmware also contains a web browser-based program that
can be accessed through the ArcHttp proxy server function in Windows, Linux, FreeBSD and other environments. This web browser-based McRAID storage manager utility allows both local and remote creation and modification of RAID sets and volume sets, and monitoring of RAID status, from standard web browsers.
1.2 Features
Adapter Architecture
• Intel IOP 331 I/O processor (ARC-11xx series)
• Intel IOP 332/IOP 333 I/O processor (ARC-12xx series)
• Intel IOP341 I/O processor (ARC-12x1ML/ARC-1280ML/1280)
• 64-bit/133MHz PCI-X Bus compatible
• PCI Express X8 compatible
• 256MB on-board DDR333 SDRAM with ECC protection (4/8-port)
• One SODIMM Socket with default 256 MB of DDR333 SDRAM
with ECC protection, upgrade to 1GB (12, 16 and 24-port cards
only)
• One DIMM Socket with default 256 MB of DDR2-533 SDRAM
with ECC protection, upgrade to 2GB(ARC-12xxML, ARC-1280)
• An ECC or non-ECC SDRAM module using X8 or X16 chip organization
• Supports up to 4/8/12/16/24 SATA II drives
• Write-through or write-back cache support
• Multi-adapter support for large storage requirements
• BIOS boot support for greater fault tolerance
• BIOS PnP (plug and play) and BBS (BIOS boot specification)
support
• Supports extreme performance Intel RAID 6 functionality
• NVRAM for RAID event & transaction log
• Battery backup module (BBM) ready (depends on motherboard)
RAID Features
• RAID level 0, 1, 10, 3, 5, 6, Single Disk and JBOD
• Multiple RAID selection
• Online array roaming
• Online RAID level/stripe size migration
• Online capacity expansion & RAID level migration simultaneously
• Online volume set growth
• Instant availability and background initialization
• Automatic drive insertion/removal detection and rebuilding
• Greater than 2TB per volume set with 64-bit LBA support (see the note following this list)
• Redundant flash image for adapter availability
• Supports SMART, NCQ and OOB staggered spin-up capable drives
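Note on the 2TB figure above: it follows from 32-bit LBA addressing with 512-byte sectors, which is why 64-bit LBA is needed for larger volume sets. A quick check of the arithmetic (illustrative only):

```python
SECTOR_BYTES = 512
limit_32bit = 2**32 * SECTOR_BYTES     # 2,199,023,255,552 bytes
print(limit_32bit / 2**40)             # 2.0 TiB -- the classic 2TB ceiling
# With 64-bit LBA the addressable range (2**64 sectors) is far beyond any
# capacity reachable with 24 SATA drives.
```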
Monitors/Notification
• System status indication through LED/LCD connector, HDD
activity/fault connector, and alarm buzzer
• SMTP support for email notification
• SNMP agent support for remote SNMP manager
• I2C Enclosure Management Ready (IOP331/332/333)
• I2C & SGPIO Enclosure Management Ready (IOP341)
RAID Management
• Field-upgradeable firmware in flash ROM
• Ethernet port support on 12/16/24-port
In-Band Manager
• Hot key boot-up McBIOS RAID manager via M/B BIOS
• Supports the controller API library, allowing customers to write their own applications (AP)
• Support Command Line Interface (CLI)
• Browser-based management utility via ArcHttp proxy server
• Single Admin Portal (SAP) monitor utility
• Disk Stress Test (DST) utility for production in Windows
Out-of-Band Manager
• Firmware-embedded browser-based McRAID storage manager,
SMTP manager, SNMP agent and Telnet function via Ethernet
port (for 12/16/24-port adapters)
• Supports the controller API library for customers to write their own
applications (AP) (for 12/16/24-port adapters)
• Push button and LCD display panel (optional)
Operating System
• Windows 2000/XP/Server 2003/Vista
• Red Hat Linux
• SuSE Linux
• FreeBSD
• Novell Netware 6.5
• Solaris 10 X86/X86_64
• SCO Unixware 7.1.4
• Mac OS 10.X (EFI BIOS support)
(For latest supported OS listing visit http://www.areca.com.tw)
Internal PCI-X RAID Card Comparison (ARC-11XX)

                  ARC-1110       ARC-1120       ARC-1130       ARC-1160       ARC-1170
RAID processor    IOP331         IOP331         IOP331         IOP331         IOP331
Host Bus Type     PCI-X 133MHz   PCI-X 133MHz   PCI-X 133MHz   PCI-X 133MHz   PCI-X 133MHz
RAID 6 support    YES            YES            YES            YES            YES
Cache Memory      256MB          256MB          One SODIMM     One SODIMM     One SODIMM
Drive Support     4 * SATA II    8 * SATA II    12 * SATA II   16 * SATA II   24 * SATA II
Disk Connector    SATA           SATA           SATA           SATA           SATA
PCI-X RAID Card Comparison (ARC-11XXML)

                  ARC-1110ML     ARC-1120ML     ARC-1130ML     ARC-1160ML/1160ML2
RAID processor    IOP331         IOP331         IOP331         IOP331
Host Bus Type     PCI-X 133MHz   PCI-X 133MHz   PCI-X 133MHz   PCI-X 133MHz
RAID 6 support    YES            YES            YES            YES
Cache Memory      256MB          256MB          One SODIMM     One SODIMM
Drive Support     4 * SATA II    8 * SATA II    12 * SATA II   16 * SATA II
Disk Connector    InfiniBand     InfiniBand     Multi-lane     Multi-lane/4*SFF-8087
Internal PCI-Express RAID Card Comparison (ARC-12XX)

                  ARC-1210/1210ML   ARC-1220/1220ML   ARC-1230         ARC-1260
RAID processor    IOP332            IOP332            IOP333           IOP333
Host Bus Type     PCI-Express X8    PCI-Express X8    PCI-Express X8   PCI-Express X8
RAID 6 support    N/A               YES               YES              YES
Cache Memory      256MB             256MB             One SODIMM       One SODIMM
Drive Support     4 * SATA II       8 * SATA II       12 * SATA II     16 * SATA II
Disk Connector    SATA/SFF-8088     SATA/2*SFF-8088   SATA             SATA
Internal PCI-Express RAID Card Comparison (ARC-12X1ML/1280)

                  ARC-1231ML       ARC-1261ML       ARC-1280ML       ARC-1280
RAID processor    IOP341           IOP341           IOP341           IOP341
Host Bus Type     PCI-Express X8   PCI-Express X8   PCI-Express X8   PCI-Express X8
RAID 6 support    YES              YES              YES              YES
Cache Memory      One DDR2 DIMM (default 256MB, upgradable to 2GB) on all four models
Drive Support     12 * SATA II     16 * SATA II     24 * SATA II     24 * SATA II
Disk Connector    3*SFF-8087       4*SFF-8087       6*SFF-8087       24*SATA
2. Hardware Installation
This section describes the procedure for installing the SATA RAID controllers.
2.1 Before You Begin Installation
Thank you for purchasing the SATA RAID controller as your RAID data storage and management system. This user guide gives you simple step-by-step instructions for installing and configuring the SATA RAID controller. To ensure personal safety and to protect your equipment and data, please read the package contents list carefully before you begin installing.
Package Contents
If any items listed for your package are missing, please contact your local dealer before proceeding with installation (disk drives and disk mounting brackets are not included):
ARC-11xx Series SATA RAID Controller
• 1 x PCI-X SATA RAID Controller in an ESD-protective bag
• 4/8/12/16/24 x SATA interface cables (one per port)
• 1 x Installation CD
• 1 x User Manual
ARC-11xxML/12xxML Series SATA RAID Controller
• 1 x PCI-X SATA RAID Controller in an ESD-protective bag
• 1 x Installation CD
• 1 x User Manual
ARC-12xx Series SATA RAID Controller
• 1 x PCI-Express SATA RAID Controller in an ESD-protective bag
• 4/8/12/16 x SATA interface cables (one per port)
• 1 x Installation CD
• 1 x User Manual
2.2 Board Layout
Follow the instructions below to install a PCI RAID Card into your
PC / Server.
Figure 2-1, ARC-1110/1120 (4/8-port PCI-X SATA RAID Controller)
Figure 2-2, ARC-1210/1220 (4/8-port PCI-Express SATA RAID Controller)
Figure 2-3, ARC-1110ML/1120ML (4/8-port PCI-X SATA RAID Controller)
Figure 2-4, ARC-1210ML/1220ML (4-port PCI-Express SATA RAID
Controller)
Figure 2-5, ARC-1130/1160 (12/16-port PCI-X SATA RAID Controller)
Figure 2-6, ARC-1130ML/1160ML (12/16-port PCI-X SATA RAID
Controller)
Figure 2-7, ARC-1230/1260 (12/16-port PCI-Express SATA RAID
Controller)
Figure 2-8, ARC-1170 (24-port PCI-X SATA RAID Controller)
Figure 2-9, ARC-1280 (24-port PCI-Express SATA RAID Controller)
Figure 2-10, ARC-1231ML/1261ML/1280ML (12/16/24-port PCI-Express SATA RAID Controller)
Tools Required
An ESD grounding strap or mat is required. Also required are standard hand tools to open your system’s case.
System Requirement
The controller can be installed in a universal PCI slot and requires a motherboard that:
ARC-11xx series requires one of the following:
• Complies with the PCI Revision 2.3 specification, 32/64-bit, 33/66MHz, 3.3V.
• Complies with the PCI-X specification, 32/64-bit, 66/100/133MHz, 3.3V.
ARC-12xx series requires the following:
• Complies with the PCI-Express x8 specification.
The ARC-12xx series can also operate on a x1, x4, x8, or x16 PCIe signal in a motherboard's x8 or x16 slot.
The SATA RAID controller may be connected to up to 4, 8, 12, 16, or 24 SATA II hard drives using the supplied cables.
Optional cables are required to connect any drive activity LEDs and fault LEDs on the enclosure to the SATA RAID controller.
Installation Tools
The following items may be needed to assist with installing the
SATA RAID controller into an available PCI expansion slot.
• Small screwdriver
• Host system hardware manuals and manuals for the disk or enclosure being installed.
Personal Safety Information
To ensure personal safety as well as the safety of the equipment:
• Always wear a grounding strap or work on an ESD-protective mat.
• Before opening the system cabinet, turn off power switches and unplug the power cords. Do not reconnect the power cords until you have replaced the covers.
Warning:
High voltages may be found inside computer equipment. Before installing any of the hardware in this package or removing the protective covers of any computer equipment, turn off power switches and disconnect power cords. Do not reconnect the power cords until you have replaced the covers.
Electrostatic Discharge
Static electricity can cause serious damage to the electronic components on this SATA RAID controller. To avoid damage caused by electrostatic discharge, observe the following precautions:
• Do not remove the SATA RAID controller from its anti-static packaging until you are ready to install it into a computer case.
• Handle the SATA RAID controller by its edges or by the metal mounting bracket at each end.
• Before you handle the SATA RAID controller in any way, touch a grounded, anti-static surface, such as an unpainted portion of the system chassis, for a few seconds to discharge any built-up static electricity.
2.3 Installation
Follow the instructions below to install a SATA RAID controller into your PC / Server.
Step 1. Unpack
Unpack and remove the SATA RAID controller from the package.
Inspect it carefully, if anything is missing or damaged, contact your local dealer.
Step 2. Power PC/Server Off
Turn off the computer and remove the AC power cord. Remove the system's cover. See the computer system documentation for instructions.
Step 3. Install the PCI RAID Cards
To install the SATA RAID controller, remove the mounting screw and existing bracket from the rear panel behind the selected PCI slot. Align the gold-fingered edge on the card with the selected PCI expansion slot. Press gently but firmly down to ensure that the card is properly seated in the slot, as shown in Figure 2-11. Then screw the bracket into the computer chassis. ARC-11xx controllers fit in both PCI (32-bit/3.3V) and PCI-X slots, but achieve the best performance when installed in a 64-bit/133MHz PCI-X slot. ARC-12xx controllers require a PCI-Express x8 slot.
Figure 2-11, Insert SATA RAID controller into a PCI-X slot
Step 4. Mount the Cages or Drives
Remove the front bezel from the computer chassis and install the cages or SATA drives in the chassis. Load the drives into the drive trays if cages are installed. Be sure that power is connected to either the cage backplane or the individual drives.
Figure 2-12, Mount Cages & Drives
Step 5. Connect the SATA Cable
Model ARC-11XX and ARC-12XX controllers have dual-layer SATA internal connectors. If you have not yet connected your SATA cables, use the cables included with your kit to connect the controller to the SATA hard drives.
The cable connectors are all identical, so it does not matter which end you connect to your controller, SATA hard drive, or cage backplane SATA connector.
Figure 2-13, SATA Cable
Note:
The SATA cable connectors must match your HDD cage.
For example: channel 1 of the RAID card connects to channel 1 of the HDD cage, channel 2 of the RAID card connects to channel 2 of the HDD cage, and so on.
Step 5-2. Connect the Multi-lane Cable
Model ARC-11XXML controllers have Multi-lane internal connectors, each of which can support up to four SATA drives. These adapters can be installed in a server RAID enclosure with a Multi-lane connector (SFF-8470) backplane. Multi-lane cables are not included in the ARC-11XXML package.
If you have not yet connected your Multi-lane cables, use the cables included with your enclosure to connect your controller to the Multi-lane connector backplane. The type of cable will depend on what enclosure you have. The following diagram shows one example of a Multi-lane cable.
Figure 2-14, Multi-lane Cable
Step 5-3. Connect the Mini SAS 4i to 4*SATA Cable
Model ARC-1231ML/1261ML/1280ML controllers have Mini SAS 4i (SFF-8087) internal connectors, each of which can support up to four SATA drives. These adapters can be installed in a server RAID enclosure with a standard SATA connector backplane. Mini SAS 4i to SATA cables are included in the ARC-1231ML/1261ML/1280ML package.
The following diagram shows the Mini SAS 4i to 4*SATA cables.
Figure 2-15, Mini SAS 4i to 4*SATA Cable
For the sideband cable signals, please refer to page 51 for the SGPIO bus.
Step 5-4. Connect the Mini SAS 4i to Multi-lane Cable
Model ARC-1231ML/1261ML/1280ML controllers have Mini SAS 4i internal connectors, each of which can support up to four SATA drives. These controllers can be installed in a server RAID enclosure with a Multi-lane connector (SFF-8470) backplane. Multi-lane cables are not included in the ARC-12XXML package.
If you have not yet connected your Mini SAS 4i to Multi-lane cables, purchase Mini SAS 4i to Multi-lane cables that fit your enclosure, and connect your controller to the Multi-lane connector backplane. The type of cable will depend on what enclosure you have. The following diagram shows one example of a Mini SAS 4i to Multi-lane cable.
Figure 2-16, Mini SAS 4i to Multi-lane Cable
Step 5-5. Connect the Mini SAS 4i to Mini SAS 4i Cable
Model ARC-1231ML/1261ML/1280ML controllers have Mini SAS 4i (SFF-8087) internal connectors, each of which can support up to four SATA drives and SGPIO (Serial General Purpose Input/Output) sideband signals. These adapters can be installed in a server RAID enclosure with a Mini SAS 4i internal connector backplane. Mini SAS 4i cables are not included in the ARC-12XXML package.
This Mini SAS 4i cable has eight signal pins to support four SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) sideband signals. The SGPIO bus is used for efficient LED management and for sensing drive locate status. Please see page 52 for details of the SGPIO bus.
Figure 2-17, Mini SAS 4i to Mini SAS 4i Cable
The SGPIO signals can carry the fault/activity information without needing any individual LED cables. The SGPIO signals are included in the SFF-8087 connector.
• Mini SAS 4i Connector (SFF-8087) Signals
Figure 2-18, Mini SAS 4i (SFF-8087) Connector
Table-1 Mini SAS 4i cable (SFF-8087) pin assignment

Pin                          Name          Pin                          Name
A2                           HDD R0+       B2                           HDD T0+
A3                           HDD R0-       B3                           HDD T0-
A5                           HDD R1+       B5                           HDD T1+
A6                           HDD R1-       B6                           HDD T1-
A8                           Sideband 0    B8                           Sideband 7
A9                           Sideband 1    B9                           Sideband 3
A10                          Sideband 2    B10                          Sideband 4
A11                          Sideband 6    B11                          Sideband 5
A13                          HDD R2+       B13                          HDD T2+
A14                          HDD R2-       B14                          HDD T2-
A16                          HDD R3+       B16                          HDD T3+
A17                          HDD R3-       B17                          HDD T3-
A1, A4, A7, A12, A15, A18    GND           B1, B4, B7, B12, B15, B18    GND
Step 5-6. Connect the Mini SAS 4x to Mini SAS 4x Cable
Model ARC-12X0ML/12X1ML controllers have external Mini SAS 4x (SFF-8088) connectors, each of which can support up to four SATA drives. These adapters can be installed in a server that works with an external RAID enclosure with a Mini SAS 4x connector. External Mini SAS 4x cables are not included in the ARC-12X0ML/12X1ML package.
Figure 2-19, Mini SAS 4x to Mini SAS 4x Cable
If you have not yet connected your Mini SAS 4x cables, use the cables included with your enclosure to connect your controller to the Mini SAS 4x connector. The type of cable will depend on what enclosure you have. The above diagram shows one example of a Mini SAS 4x cable.
Step 6. Install the LED Cable (optional)
ARC-1XXX Series Fault/Activity Header Intelligent Electronics
Schematic.
The intelligent LED controller outputs a low-level pulse to determine whether status LEDs are attached to pin sets 1 and 2. This allows automatic configuration of the LED output. If the logic level differs between the first two sets of the HDD LED header (an LED attached to set 1 but not to set 2), the controller will assign the first HDD LED header as the global indicator connector. Otherwise, each LED output will show only individual drive status.
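The detection rule above can be summarized as a small decision sketch (illustrative pseudologic only; the actual sensing is done by the controller's LED circuitry, and the function name here is hypothetical):

```python
def led_output_mode(set1_led_attached: bool, set2_led_attached: bool) -> str:
    """Mirror the rule described above: an LED sensed on header set 1 but not
    on set 2 switches the first header to a single global indicator; any other
    combination leaves every header showing its own drive's status."""
    if set1_led_attached and not set2_led_attached:
        return "global indicator on HDD LED header 1"
    return "individual drive status on each LED header"

print(led_output_mode(set1_led_attached=True, set2_led_attached=False))
```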
The SATA RAID controller provides four kinds of LED status connectors.
A: Global indicator connector, which light up when any drive is active.
B: Individual LED indicator connector, for each drive channel.
C: I2C connector, for SATA proprietary backplane enclosure.
D: SGPIO connector for SAS backplane enclosure
HARDWARE INSTALLATION
The following diagrams and descriptions describe each type of connector.
Note:
A cable for the global indicator comes with your computer system. Cables for the individual drive LEDs may come with a drive cage or you may need to purchase them.
A: Global indicator connector
If the system uses only a single global indicator, attach the global indicator cable to the two-pin HDD LED connector. The following diagrams show the connector and pin locations.
Figure 2-20, ARC-1110/1120/1210/1220 global LED connection for computer case.
Figure 2-21, ARC-1130/1160/1230/1260 global LED connection for computer case.
Figure 2-22, ARC-1170 global LED connection for computer case.
Figure 2-23, ARC-1280 global LED connection for computer case.
Figure 2-24, ARC-1231ML/1261ML/1280ML global LED connection for computer case.
B: Individual LED indicator connector
Connect the cables for the drive activity LEDs and fault LEDs between the backplane of the cage and the respective connector on the SATA RAID controller. The following describes the fault/activity
LED.
Activity LED
  Normal status: When the activity LED is illuminated, there is I/O activity on that disk drive. When the LED is dark, there is no activity on that disk drive.
  Problem indication: N/A

Fault LED
  Normal status: When the fault LED is solidly illuminated, there is no disk present. When the fault LED is off, the disk is present and its status is normal. When "Identify Drive" is selected, the selected drive's fault LED will blink.
  Problem indication: When the fault LED is slowly blinking (2 times/sec), the disk drive has failed and should be hot-swapped immediately. When the activity LED is illuminated and the fault LED is fast blinking (10 times/sec), there is rebuilding activity on that disk drive.
Figure 2-25, ARC-1110/1120/1210/1220 individual LED indicators connector, for each channel drive.
Figure 2-26, ARC-1130/1160/1230/1260 individual LED indicators connector, for each channel drive.
Figure 2-27, ARC-1170 individual LED indicators connector, for each channel drive.
Figure 2-28, ARC-1280 individual LED indicators connector, for each channel drive.
Figure 2-29, ARC-1231ML/1261ML/1280ML individual LED indicators connector, for each channel drive.
C: I2C Connector
You can also connect the I2C interface to a SATA backplane enclosure that includes an Areca CPLD decoder controller on the backplane. This can reduce the number of activity LED and/or fault LED cables. The I2C interface can also be cascaded to another SATA backplane enclosure for additional channel status display.
Figure 2-30, Activity/Fault LED I2C connector connected between the SATA RAID controller & SATA HDD cage backplane.
Figure 2-31, Activity/Fault LED I2C connector connected between the SATA RAID controller & 4 SATA HDD backplane.
Note:
Ci-Design supports this feature in its 4-port 12-6336-05A SATA II backplane.
The following is the I2C signal name description for the LCD & fault/activity LED.
Pin    Description                    Pin    Description
1      Power (+5V)                    2      GND
3      LCD Module Interrupt           4      Protect Key
5      LCD Module Serial Data         6      Fault/Activity Clock
7      Fault/Activity Serial Data     8      LCD Module Clock

D: SGPIO bus
The preferred I/O connector for server backplanes is the Mini SAS 4i (SFF-8087) internal serial-attachment connector. This connector has eight signal pins to support four SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) sideband signals, which are used to replace the individual LED cables.
The SGPIO bus is used for efficient LED management and for sensing drive locate status. See SFF-8485 for the specification of the SGPIO bus. The number of drives supported can be increased, by a factor of four, by adding similar backplanes, up to a maximum of 24 drives (6 backplanes).
LED Management: The backplane may contain LEDs to indicate drive status. Light from the LEDs could be transmitted to the outside of the server by using light pipes mounted on the SATA drive tray. A small CPLD on the backplane, connected via the SGPIO bus to an ARC-1231ML/1261ML/1280ML SATA RAID controller, could control the LEDs. Activity LED: blinking on controller access. Fault LED: solidly illuminated.
Drive Locate Circuitry: The locate status of a drive may be detected by sensing the voltage level of one of the pre-charge pins before and after a drive is installed; locate is indicated by the fault LED blinking 2 times/second.
The following table defines the SGPIO assignments for the Mini SAS 4i connector (SFF-8087) on the ARC-1231ML/1261ML/1280ML.
Pin          Description                                  Pin          Description
SideBand0    SClock (clock signal)                        SideBand1    SLoad (last clock of a bit stream)
SideBand2    Ground                                       SideBand3    Ground
SideBand4    SDataOut (serial data output bit stream)     SideBand5    SDataIn (serial data input bit stream)
SideBand6    Reserved                                     SideBand7    Reserved
The following defines the sideband connector, which works with the Areca sideband cable on its SFF-8087 to 4*SATA cable. The sideband header is located on the backplane. For SGPIO to work properly, please connect the Areca 8-pin sideband cable to the sideband header as shown above. See the table above for pin definitions.
Step 7. Re-check the SATA HDD LED and Fault LED Cable
Connections
Be sure that the proper failed-drive channel information is displayed by the fault and HDD activity LEDs. An improper connection will lead the user to hot-swap the wrong drive, removing a disk that is functioning properly from the controller. This can result in array failure and loss of system data.
Step 8. Power up the System
Check the installation thoroughly, reinstall the computer cover, and reconnect the power cord cables. Turn on the power switch at the rear of the computer (if equipped) and then press the power button at the front of the host computer.
Step 9. Configure Volume Set
The SATA RAID controller configures RAID functionality through the
McBIOS RAID manager. Please refer to Chapter 3, McBIOS RAID manager, for the detail regarding configuration. The RAID controller can also be configured through the McRAID storage manager software utility with ArcHttp proxy server installed through on-board
LAN port or LCD module. For this option, please refer to Chapter 6,
Web Browser-Based Configuration or LCD Configuration Menu.
Step 10. Install the Controller Driver
For a new system:
• Driver installation usually takes place as part of operating system installation. Please refer to Chapter 4, Driver Installation, for the detailed installation procedure.
In an existing system:
• Install the controller driver into the existing operating system.
Please refer to the Chapter 4, Driver Installation, for the detailed installation procedure.
Note:
For the latest driver releases, please download them from http://www.areca.com.tw
Step 11. Install ArcHttp Proxy Server
The SATA RAID controller firmware has an embedded web browser-based McRAID storage manager; the ArcHttp proxy server enables access to it. The browser-based McRAID storage manager provides creation, management, and monitoring of the SATA RAID controller status.
Please refer to Chapter 5 for details of the ArcHttp proxy server installation. For the SNMP agent function, please refer to Appendix C.
Step 12. Determining the Boot Sequences
The SATA RAID controller is a bootable controller. If your system already contains a bootable device with an installed operating system, you can set up your system to boot a second operating system from the new controller. To add a second bootable controller, you may need to enter setup of M/B BIOS and change the device boot sequence so that the SATA RAID controller heads the list. If the system BIOS setup does not allow this change, your system may not be configurable to allow the SATA RAID controller to act as a second boot device.
Summary of the Installation
The flow chart below describes the installation procedures for the SATA RAID controller. These procedures include hardware installation, the creation and configuration of a RAID volume through McBIOS/McRAID, OS installation, and installation of the SATA RAID controller software.
The software components configure and monitor the SATA RAID controller via ArcHttp proxy server.
Configuration Utility                                               Operating System Supported
McBIOS RAID Manager                                                 OS-Independent
McRAID Storage Manager (via ArcHttp proxy server)                   Windows 2000/XP/2003, Linux, FreeBSD, Solaris and Mac
SAP Monitor (Single Admin Portal to scan for multiple RAID units
in the network, via ArcHttp proxy server)                           Windows 2000/XP/2003
SNMP Manager Console Integration                                    Windows 2000/XP/2003, Linux, FreeBSD
McRAID Storage Manager
Before launching the firmware-embedded web server (McRAID storage manager), you need to install the ArcHttp proxy server on your server system, or access the manager through the on-board LAN port (if equipped). If you need additional information about installation and start-up of this function, see the McRAID Storage Manager section in Chapter 6.
SNMP Manager Console Integration
• Out-of-Band - Using Ethernet Port (12/16/24-port Controller)
Before launching the firmware-embedded SNMP agent in the server, you need first to enable the firmware-embedded SNMP agent function on your SATA RAID controller. If you need additional information about installation and start-up of this function, see section 6.8.4, SNMP Configuration (12/16/24-port).
• In-Band - Using PCI-X/PCIe Bus (4/8/12/16/24-port Controller)
Before launching the SNMP agent in the server, you need to enable the firmware-embedded SNMP community configuration first and install the Areca SNMP extension agent on your server system. If you need additional information about installation and start-up of this function, see the SNMP Operation & Installation section in Appendix C.
Single Admin Portal (SAP) Monitor
This utility can scan for multiple RAID units on the network and monitor the controller set status. It also includes a disk stress test utility to identify marginal spec disks before putting the RAID unit into a production environment.
For additional information, see the utility manual on the packaged software CD or download it from the web site http://www.areca.com.tw
3. McBIOS RAID Manager
The system mainboard BIOS automatically configures the following
SATA RAID controller parameters at power-up:
• I/O Port Address
• Interrupt Channel (IRQ)
• Adapter ROM Base Address
Use McBIOS RAID manager to further configure the SATA RAID controller to suit your server hardware and operating system.
3.1 Starting the McBIOS RAID Manager
This section explains how to use the McBIOS RAID manager to configure your RAID system. The McBIOS RAID manager is designed to be user-friendly. It is a menu-driven program, residing in the firmware, which allows you to scroll through various menus and sub-menus and select among the predetermined configuration options.
When starting a system with a SATA RAID controller installed, the following message is displayed on the monitor during the start-up sequence (after the system BIOS start-up screen but before the operating system boots):
ARC-1xxx RAID Ctrl - DRAM: 128(MB) / #Channels: 8
BIOS: V1.00 / Date: 2004-5-13 - F/W: V1.31 / Date: 2004-5-31
I/O-Port=F3000000h, IRQ=11, BIOS ROM mapped at D000:0h
No BIOS disk Found, RAID Controller BIOS not installed!
Press <Tab/F6> to enter SETUP menu. 9 second(s) left <ESC to Skip>..
The McBIOS RAID manager message remains on your screen for about nine seconds, giving you time to start the configuration menu by pressing Tab or F6. If you do not wish to enter configuration menu, press ESC to skip configuration immediately. When activated, the McBIOS RAID manager appears showing a selection dialog box listing the SATA RAID controllers that are installed in the system.
The legend at the bottom of the screen shows you what keys are enabled for the screens.
Areca Technology Corporation RAID Controller Setup <V1.0, 2004/05/20>
Select An Adapter To Configure
( 3/14/ 0)I/O=DD200000h, IRQ = 9
ArrowKey Or AZ:Move Cursor, Enter: Select, ** Select & Press F10 to Reboot**
Use the Up and Down arrow keys to select the adapter you want to configure. While the desired adapter is highlighted, press the
Enter key to enter the main menu of the McBIOS RAID manager.
Controller I/O Port:F3000000h, F2: Select Controller, F10: Reboot System
Areca Technology Corporation RAID Controller
Main Menu
Quick Volume/Raid Setup
Raid Set Function
Volume Set Function
Physical Drives
Raid System Function
Ethernet Configuration
View System Events
Clear Event Buffer
Hardware Monitor
System Information
Verify Password
ArrowKey Or AZ:Move Cursor, Enter: Select, ESC: Escape, L:Line Draw, X: Redraw
Note:
The manufacturer default password is set to 0000; this password can be modified by selecting Change Password in the Raid System Function section.
3.2 McBIOS RAID manager
The McBIOS RAID manager is firmware-based and is used to configure RAID sets and volume sets. Because the utility resides in the
SATA RAID controller firmware, operation is independent of any operating systems on your computer. This utility can be used to:
• Create RAID sets,
• Expand RAID sets,
• Add physical drives,
• Define volume sets,
• Modify volume sets,
• Modify RAID level/stripe size,
• Define pass-through disk drives,
• Modify system functions, and
• Designate drives as hot spares.
3.3 Configuring Raid Sets and Volume Sets
You can configure RAID sets and volume sets with McBIOS RAID manager automatically using “Quick Volume/Raid Setup” or manually using “Raid Set/Volume Set Function”. Each configuration method requires a different level of user input. The general flow of operations for RAID set and volume set configuration is:
Step  Action
1     Designate hot spares/pass-through drives (optional).
2     Choose a configuration method.
3     Create RAID sets using the available physical drives.
4     Define volume sets using the space available in the RAID set.
5     Initialize the volume sets and use volume sets (as logical drives) in the host OS.
3.4 Designating Drives as Hot Spares
Any unused disk drive that is not part of a RAID set can be designated as a hot spare. The “Quick Volume/Raid Setup” configuration will add the spare disk drive and automatically display the appropriate RAID level from which the user can select. For the “Raid Set
Function” configuration option, the user can use the “Create Hot
Spare” option to define the hot spare disk drive.
When a hot spare disk drive is being created using the “Create Hot
Spare” option (in the “Raid Set Function”), all unused physical devices connected to the current controller appear:
Choose the target disk by selecting the appropriate check box.
Press the Enter key to select a disk drive, and press Yes in the "Create Hot Spare" dialog to designate it as a hot spare.
3.5 Using Quick Volume/Raid Setup Configuration
“Quick Volume/Raid Setup” configuration collects all available drives and includes them in a RAID set. The RAID set you create is associated with exactly one volume set. You will only be able to modify the default RAID level, the stripe size, and the capacity of the new volume set. Designating drives as hot spares is also possible in the RAID level selection option. The volume set default settings will be:
Parameter                          Setting
Volume Name                        Volume Set # 00
SCSI Channel/SCSI ID/SCSI LUN      0/0/0
Cache Mode                         Write Back
Tag Queuing                        Yes
The default setting values can be changed after configuration is completed. Follow the steps below to create arrays using the
“Quick Volume/ Raid Setup” method:
Step  Action
1     Choose "Quick Volume/Raid Setup" from the main menu. The available RAID levels with hot spare for the current volume set drive are displayed.
2     It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be set to the capacity of the smallest drive in the RAID set.
      The number of physical drives in a specific array determines which RAID levels can be implemented in the array.
      RAID 0 requires 1 or more physical drives.
      RAID 1 requires at least 2 physical drives.
      RAID 1 + Spare requires at least 3 physical drives.
      RAID 10 requires at least 4 physical drives.
      RAID 3 requires at least 3 physical drives.
      RAID 5 requires at least 3 physical drives.
      RAID 3 + Spare requires at least 4 physical drives.
      RAID 5 + Spare requires at least 4 physical drives.
      RAID 6 requires at least 4 physical drives.
      RAID 6 + Spare requires at least 5 physical drives.
      Highlight the desired RAID level for the volume set and press the Enter key to confirm.
Step 3: The capacity for the current volume set is entered after highlighting the desired RAID level and pressing the Enter key. The capacity for the current volume set is displayed. Use the UP and DOWN arrow keys to set the capacity of the volume set and press the Enter key to confirm. The available stripe sizes for the current volume set are then displayed.
Step 4: Use the UP and DOWN arrow keys to select the current volume set stripe size and press the Enter key to confirm. This parameter specifies the size of the stripes written to each disk in a RAID 0, 1, 10, 5 or 6 volume set. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB. A larger stripe size provides better read performance, especially when the computer performs mostly sequential reads. However, if the computer performs random read requests more often, choose a smaller stripe size.
Step 5: When you are finished defining the volume set, press the Enter key to confirm the “Quick Volume/Raid Setup” function.
Step 6: Press the Enter key to select Foreground (Fast Completion) initialization, or select Background (Instant Available) or No Init. With Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot or waiting for initialization to complete. With Foreground Initialization, the initialization must be completed before the volume set is ready for system access. With No Init, there is no initialization on this volume.
Step 7: Initialize the volume set you have just configured.
Step 8: If you need to add an additional volume set, use the main menu “Create Volume Set” function.
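The drive-count rules in step 2 also determine how much usable space a volume set can offer. The sketch below is an informal illustration only (not firmware code and not taken from the manual; the function name and example drive sizes are made up): every member contributes only the capacity of the smallest drive, and the RAID level then decides how much of that raw space remains after redundancy.

# Informal sketch of the capacity rules described in step 2.
# All members are treated as having the capacity of the smallest drive.
def usable_capacity_gb(drive_sizes_gb, level):
    n = len(drive_sizes_gb)
    member = min(drive_sizes_gb)          # smallest drive sets the member size
    if level == "RAID 0":
        return n * member                 # striping only, no redundancy
    if level == "RAID 1":
        return member                     # mirrored set
    if level == "RAID 10":
        return (n // 2) * member          # striped mirrors
    if level in ("RAID 3", "RAID 5"):
        return (n - 1) * member           # one drive's worth of parity
    if level == "RAID 6":
        return (n - 2) * member           # two drives' worth of parity
    raise ValueError("unknown level: " + level)

# Four drives, one slightly smaller: 250, 250, 250, 240 GB
drives = [250.0, 250.0, 250.0, 240.0]
for level in ("RAID 0", "RAID 10", "RAID 5", "RAID 6"):
    print(level, usable_capacity_gb(drives, level))
# RAID 0 960.0, RAID 10 480.0, RAID 5 720.0, RAID 6 480.0

For example, with drives of 250, 250, 250, and 240 GB, a RAID 5 volume set yields 3 x 240 = 720 GB of usable space, which is why the manual recommends using drives of the same capacity.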
3.6 Using Raid Set/Volume Set Function Method
In “Raid Set Function”, you can use the “Create Raid Set” function to generate a new RAID set. In “Volume Set Function”, you can use the “Create Volume Set” function to generate an associated volume set and its configuration parameters.
If the current controller has unused physical devices connected, you can choose the “Create Hot Spare” option in the “Raid Set Function” to define a global hot spare. Select this method to configure new RAID sets and volume sets. The “Raid Set/Volume Set Function” configuration option allows you to associate volume sets with partial and full RAID sets.
Step 1: To set up a hot spare (optional), choose “Raid Set Function” from the main menu. Select “Create Hot Spare” and press the Enter key to define the hot spare.
Step 2: Choose “Raid Set Function” from the main menu. Select “Create Raid Set” and press the Enter key.
Step 3: The “Select a Drive For Raid Set” window is displayed showing the SATA drives connected to the SATA RAID controller.
Step 4: Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. It is recommended that you use drives of the same capacity in a specific array. If you use drives with different capacities in an array, all drives in the RAID set will be set to the capacity of the smallest drive in the RAID set. The number of physical drives in a specific array determines which RAID levels can be implemented in the array:
RAID 0 requires 1 or more physical drives.
RAID 1 requires at least 2 physical drives.
RAID 10 requires at least 4 physical drives.
RAID 3 requires at least 3 physical drives.
RAID 5 requires at least 3 physical drives.
RAID 6 requires at least 4 physical drives.
Step 5: After adding the desired physical drives to the current RAID set, press Yes to confirm the “Create Raid Set” function.
Step 6: An “Edit The Raid Set Name” dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for this new RAID set. The default RAID set name will always appear as Raid Set #. Press Enter to finish editing the name.
Step 7: Press the Enter key when you are finished creating the current RAID set. To continue defining another RAID set, repeat step 3. To begin volume set configuration, go to step 8.
Step 8: Choose the “Volume Set Function” from the main menu. Select “Create Volume Set” and press the Enter key.
Step 9: Choose a RAID set from the “Create Volume From Raid Set” window. Press the Enter key to confirm the selection.
Step 10: Choose Foreground (Fast Completion), Background (Instant Available), or No Init (To Rescue Volume) initialization. During Background Initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot or waiting for initialization to complete. In Foreground Initialization, the initialization must be completed before the volume set is ready for system access, but it completes more quickly; volume access by the operating system is delayed until it finishes. With No Init, there is no initialization on this volume.
Step 11: If space remains in the RAID set, the next volume set can be configured. Repeat steps 8 to 10 to configure another volume set.
Note:
The “Modify Volume Set” method provides the same functions as the “Create Volume Set” configuration method. In the
“Volume Set function”, you can use “Modify Volume Set” to change all volume set parameters except for capacity (size).
3.7 Main Menu
The main menu shows all functions that are available; an action is executed by selecting the appropriate menu item and pressing the Enter key.
[Screen: McBIOS RAID manager main menu]
Note:
The manufacturer default password is set to 0000; this password can be modified by selecting “Change Password” in the “Raid System Function” section.
Option: Description
Quick Volume/Raid Setup: Create a default configuration based on the number of physical disks installed
Raid Set Function: Create a customized RAID set
Volume Set Function: Create a customized volume set
Physical Drives: View individual disk information
Raid System Function: Setup the RAID system configuration
Ethernet Configuration: Ethernet LAN setting (12/16/24 ports only)
View System Events: Record all system events in the buffer
Clear Event Buffer: Clear all information in the event buffer
Hardware Monitor: Show the hardware system environment status
System Information: View the controller system information
This password option allows the user to set or clear the RAID controller's password protection feature. Once the password has been set, the user can only monitor and configure the RAID controller by providing the correct password. The password is used to protect the internal RAID controller from unauthorized entry. The controller will only prompt for the password when entering the main menu from the initial screen. The SATA RAID controller will automatically return to the initial screen when it does not receive any command in twenty seconds.
3.7.1 Quick Volume/RAID Setup
“Quick Volume/RAID Setup” is the fastest way to prepare a RAID set and volume set. It requires only a few keystrokes to complete. Although disk drives of different capacity may be used in the RAID set, it will use the capacity of the smallest disk drive as the capacity of all disk drives in the RAID set. The “Quick Volume/RAID Setup” option creates a RAID set with the following properties:
1. All of the physical drives are contained in one RAID set.
2. The RAID level, hot spare, capacity, and stripe size options are selected during the configuration process.
3. When a single volume set is created, it can consume all or a portion of the available disk capacity in this RAID set.
4. If you need to add an additional volume set, use the main menu “Create Volume Set” function.
The total number of physical drives in a specific RAID set determines the RAID levels that can be implemented within the RAID set.
[Screen: main menu with “Quick Volume/Raid Setup” selected, showing “Total 4 Drives” and the available RAID levels: Raid 0, Raid 1 + 0, Raid 1 + 0 + Spare, Raid 3, Raid 5, Raid 3 + Spare, Raid 5 + Spare, Raid 6]
Select “Quick Volume/RAID Setup” from the main menu; all possible RAID levels will be displayed on the screen.
If the volume capacity will exceed 2TB, the controller will show the “Greater Two TB Volume Support” sub-menu.
[Screen: “Greater Two TB Volume Support” sub-menu with options “No” and “Use 64bit LBA”]
• No
Keeps the volume size within the 2TB limitation.
• LBA 64
This option uses a 16-byte CDB instead of a 10-byte CDB. The maximum volume capacity supported is up to 512TB.
This option works on operating systems that support 16-byte CDB, such as:
Windows 2003 with SP1
Linux kernel 2.6.x or later
• For Windows
This option changes the sector size from the default 512 bytes to 4K bytes, raising the maximum volume capacity up to 16TB.
This option works under the Windows platform only. The volume cannot be converted to a “Dynamic Disk”, because the 4K sector size is not a standard format.
For more details, please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
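As an informal illustration of where these limits come from (an assumption-based sketch, not text from the manual): a 10-byte CDB carries a 32-bit LBA, so with 512-byte sectors the addressable ceiling is 2^32 x 512 bytes = 2 TiB; keeping 32-bit addressing but using 4K sectors raises the ceiling to 16 TiB; a 16-byte CDB provides a 64-bit LBA, so the practical bound becomes the controller's stated 512TB maximum rather than the addressing scheme.

# Informal sketch: where the 2TB / 16TB limits in the sub-menu come from.
# Assumes the usual SCSI CDB addressing rules; values are illustrative only.
def max_volume_bytes(lba_bits, sector_bytes):
    """Largest addressable volume for a given LBA width and sector size."""
    return (2 ** lba_bits) * sector_bytes

TIB = 1024 ** 4

# "No": 10-byte CDB, 32-bit LBA, 512-byte sectors -> 2 TiB ceiling
print(max_volume_bytes(32, 512) / TIB)       # 2.0

# "For Windows": still 32-bit addressing, but 4K sectors -> 16 TiB ceiling
print(max_volume_bytes(32, 4096) / TIB)      # 16.0

# "Use 64bit LBA": 16-byte CDB, 64-bit LBA -> 8589934592.0 TiB,
# far beyond the controller's own 512TB maximum, so the controller is the bound.
print(max_volume_bytes(64, 512) / TIB)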
A single volume set is created and consumes all or a portion of the disk capacity available in this RAID set. Define the capacity of volume set in the “Available Capacity” popup. The default value for the volume set, which is 100% of the available capacity, is displayed in the selected capacity. Use the UP and DOWN keys to select the capacity, press the Enter key to accept this value. If the volume set uses only part of the RAID set capacity, you can use the “Create Volume Set” option in the main menu to define additional volume sets.
[Screen: “Available Capacity : 160.1GB / Selected Capacity : 160.1GB” pop-up for setting the volume set capacity]
Stripe Size: This parameter sets the size of the stripe written to each disk in a RAID 0, 1, 10, 5, or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB. A larger stripe size produces better read performance, especially if your computer does mostly sequential reads.
[Screen: “Select Strip Size” pop-up with options 4K, 8K, 16K, 32K, 64K, and 128K]
However, if you are certain that your computer performs random reads more often, select a smaller stripe size.
Press the Yes option in the “Create Vol/Raid Set” dialog box; the RAID set and volume set will then start to initialize.
[Screen: “Create Vol/Raid Set” confirmation pop-up (Yes/No) with the selected capacity and stripe size]
Select “Foreground (Faster Completion)” or “Background (Instant Available)” for initialization, or “No Init (To Rescue Volume)” to recover a missing RAID set configuration.
[Screen: “Initialization Mode” pop-up with options “Foreground (Faster Completion)” and “Background (Instant Available)”]
3.7.2 Raid Set Function
Manual configuration gives complete control of the RAID set setting, but it will take longer to configure than “Quick Volume/Raid
Setup” configuration. Select “Raid Set Function” to manually configure the RAID set for the first time or delete existing RAID sets and reconfigure the RAID set.
[Screen: main menu with “Raid Set Function” selected]
3.7.2.1 Create Raid Set
To define a RAID set, follow the procedure below:
1. Select “Raid Set Function” from the main menu.
2. Select “Create Raid Set” from the “Raid Set Function” dialog box.
3. A “Select SATA Drive For Raid Set” window is displayed showing the SATA drives connected to the current controller. Press the UP and DOWN arrow keys to select specific physical drives. Press the Enter key to associate the selected physical drive with the current RAID set. Repeat this step; as many available disk drives as desired can be added to a single RAID set. When finished selecting SATA drives for the RAID set, press the Esc key. A “Create Raid Set Confirmation” screen appears; select the Yes option to confirm it.
[Screen: “Select IDE Drives For Raid Set” window listing the connected drives, e.g. Ch01| 80.0GB|ST380013AS]
4. An “Edit The Raid Set Name” dialog box appears. Enter 1 to 15 alphanumeric characters to define a unique identifier for the RAID set. The default RAID set name will always appear as Raid Set #.
[Screen: “Raid Set Function” menu with the “Edit The Raid Set Name” dialog box]
3.7.2.2 Delete Raid Set
To completely erase and reconfigure a RAID set, you must first delete it and then re-create it. To delete a RAID set, select the RAID set number that you want to delete in the “Select Raid Set To Delete” screen. The “Delete Raid Set” dialog box appears; press the Yes option to delete it. Please note that all data on the RAID set will be lost if this option is used.
[Screen: “Delete Raid Set” selection (Raid Set # 00, Raid Set # 01) with the “Are you Sure?” Yes/No confirmation]
3.7.2.3 Expand Raid Set
[Screen: “Expand Raid Set” option with the “Select Drives For Raid Set Expansion” window and the “Are you Sure?” Yes/No confirmation]
Instead of deleting a RAID set and recreating it with additional disk drives, the “Expand Raid Set” function allows the user to add disk drives to a RAID set that has already been created.
To expand a RAID set:
Select the “Expand Raid Set” option. If there is an available disk, then the “Select SATA Drives For Raid Set Expansion” screen appears.
Select the target RAID set by clicking on the appropriate radio button. Select the target disk by clicking on the appropriate check box.
Press the Yes option to start the expansion on the RAID set.
The new additional capacity can be utilized by one or more volume sets. The volume sets associated with this RAID set appear so that you have the chance to modify the RAID level or stripe size. Follow the instructions presented in the “Modify Volume Set” section to modify the volume sets; operating system-specific utilities may be required to expand operating system partitions.
Note:
1. Once the “Expand Raid Set” process has started, the user cannot stop it; the process must be completed.
2. If a disk drive fails during raid set expansion and a hot spare is available, an auto rebuild operation will occur after the RAID set expansion completes.
• Migrating
[Screen: “The Raid Set Information” window showing Free Capacity, Min Member Disk Size, and Member Disk Channels]
Migration occurs when a disk is added to a RAID set. The migrating state is displayed in the RAID state area of “The Raid Set Information” screen when a disk is being added to a RAID set. The migrating state is also displayed in the associated volume state area of the “Volume Set Information” screen for the volume sets that belong to this RAID set.
3.7.2.4 Activate Incomplete Raid Set
The following screen is used to activate a RAID set after one of its disk drives was removed while the power was off.
When one of the disk drives is removed while the power is off, the RAID set state will change to “Incomplete State”. If a user wants to continue to work while the SATA RAID controller is powered on, the user can use the “Activate Raid Set” option to activate the RAID set. After the user selects this function, the RAID state will change to “Degraded Mode”.
[Screen: “Raid Set Function” menu with the “Raid Set Information” window (Total Capacity, Free Capacity, Min Member Disk Size, Member Disk Channels)]
3.7.2.5 Create Hot Spare
When you choose the “Create Hot Spare” option in the “Raid Set Function”, all unused physical devices connected to the current controller appear:
Select the target disk by clicking on the appropriate check box.
Press the Enter key to select a disk drive and press the Yes option in the “Create Hot Spare” screen to designate it as a hot spare.
The “Create Hot Spare” option gives you the ability to define a global hot spare.
[Screen: “Create Hot Spare” drive selection (e.g. Ch05| 80.0GB|ST380013AS) with the “Are you Sure?” Yes/No confirmation]
3.7.2.6 Delete Hot Spare
Select the target hot spare disk to delete by clicking on the appropriate check box.
Press the Enter key to select a hot spare disk drive, and press Yes in the “Delete Hot Spare” screen to delete the hot spare.
[Screen: “Delete Hot Spare” drive selection with the “Are you Sure?” Yes/No confirmation]
3.7.2.7 Raid Set Information
To display RAID set information, move the cursor bar to the desired RAID set number, then press the Enter key. The “Raid Set
Information” will display.
You can only view information for the RAID set in this screen.
[Screen: “The Raid Set Information” window showing Min Member Disk Size and Member Disk Channels]
3.7.3 Volume Set Function
A volume set is seen by the host system as a single logical device; it is organized in a RAID level within the controller utilizing one or more physical disks. RAID level refers to the level of data performance and protection of a volume set. A volume set can consume all of the capacity or a portion of the available disk capacity of a RAID set. Multiple volume sets can exist on a RAID set. If multiple volume sets reside on a specified RAID set, all volume sets will reside on all physical disks in the RAID set. Thus each volume set on the RAID set will have its data spread evenly across all the disks in the RAID set, rather than having one volume set use some of the disks and another volume set use the others.
[Screen: “Volume Set Function” menu (Create Volume Set, Delete Volume Set, Check Volume Set, Stop Volume Check)]
3.7.3.1 Create Volume Set
1. Volume sets of different RAID levels may coexist on the same
RAID set.
2. Up to 16 volume sets can be created by the SATA RAID
controller.
3. The maximum addressable size of a single volume set is not
limited to 2TB because the controller is capable of 64-bit
mode. However, the operating system itself may not be
capable of addressing more than 2TB.
To create a volume set, follow these steps:
1. Select the “Volume Set Function” from the main menu.
2. Choose the “Create Volume Set” from “Volume Set Functions” dialog box screen.
[Screen: “Create Volume From Raid Set” window listing Raid Set # 00 and Raid Set # 01]
3. The “Create Volume From Raid Set” dialog box will appear. This screen displays the existing arranged RAID sets. Select the RAID set number and press the Enter key. The “Volume Creation” dialog is displayed on the screen.
4. A window with a summary of the current volume set's settings is displayed. The “Volume Creation” option allows the user to select the Volume Name, Capacity, RAID Level, Strip Size, SCSI Channel/SCSI ID/SCSI LUN, Cache Mode, and Tag Queuing. The user can modify the default values in this screen; the modification procedures are in section 3.7.3.3.
[Screen: “Volume Creation” window showing Volume Name, Raid Level, Capacity, SCSI Channel/ID/LUN, Cache Mode, and Tag Queuing]
5. After completing the modification of the volume set, press the
Esc key to confirm it. An “Initialization” screen is presented.
• Select “Foreground (Faster Completion)” for faster initialization of the selected volume set.
• Select “Background (Instant Available)” for normal initialization of the selected volume set.
[Screen: initialization selection pop-up (“Background (Instant Available)”, “No Init (To Rescue Volume)”) over the “Volume Creation” window]
• Select "No Init (To Rescue Volume)" for no initialization of the selected volume set.
6. Repeat steps 3 to 5 to create additional volume sets.
7. The initialization percentage of the volume set will be displayed at the bottom line.
• Volume Name
The default volume name will always appear as Volume Set #.
You can rename the volume set, provided it does not exceed the 15-character limit.
[Screen: “Edit The Volume Name” dialog box over the “Volume Creation” window]
• Raid Level
Set the RAID level for the volume set. Highlight RAID Level and press Enter.
The available RAID levels for the current volume set are displayed. Select a RAID level and press the Enter key to confirm.
[Screen: “Select Raid Level” pop-up with options 0, 0 + 1, 3, 5, and 6]
• Capacity
The maximum available volume size is the default value for the first setting. Enter the appropriate volume size to fit your application. The capacity value can be increased or decreased by the UP and DOWN arrow keys. The capacity of each volume set must be less than or equal to the total capacity of the RAID set on which it resides.
If the volume capacity will exceed 2TB, the controller will show the “Greater Two TB Volume Support” sub-menu.
[Screen: “Available Capacity : 160.1GB / Selected Capacity : 160.1GB” pop-up in the “Volume Creation” window]
• No
Keeps the volume size within the 2TB limitation.
• LBA 64
This option uses a 16-byte CDB instead of a 10-byte CDB. The maximum volume capacity supported is up to 512TB.
This option works on operating systems that support 16-byte CDB, such as:
Windows 2003 with SP1
Linux kernel 2.6.x or later
• For Windows
This option changes the sector size from the default 512 bytes to 4K bytes, raising the maximum volume capacity up to 16TB.
This option works under the Windows platform only. The volume cannot be converted to a Dynamic Disk, because the 4K sector size is not a standard format.
[Screen: “Greater Two TB Volume Support” sub-menu with options “No” and “Use 64bit LBA”]
For more details, please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
• Stripe Size
This parameter sets the size of segment written to each disk in a RAID 0, 1, 10, 5, or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB.
[Screen: “Volume Creation” window with the Stripe Size field (64K) highlighted]
• SCSI Channel
The SATA RAID controller function simulates a SCSI RAID controller. The host bus represents the SCSI channel. Choose the “SCSI Channel”. A “Select SCSI Channel” dialog box will appear; select the channel number and press the Enter key to confirm it.
[Screen: “Volume Creation” window with the SCSI Channel field highlighted]
• SCSI ID
Each device attached to the SATA card, as well as the card itself, must be assigned a unique SCSI ID number. A SCSI channel can connect up to 15 devices. It is necessary to assign a
SCSI ID to each device from a list of available SCSI IDs.
[Screen: “Volume Creation” window with the SCSI ID field highlighted]
• SCSI LUN
Each SCSI ID can support up to 8 LUNs. Most SATA controllers treat each LUN as if it were a SATA disk.
[Screen: “Volume Creation” window with the SCSI LUN field highlighted]
• Cache Mode
The user can set the cache mode to either “Write Through” or “Write Back”.
[Screen: “Volume Creation” window with the Cache Mode field highlighted]
• Tag Queuing
This option, when enabled, can enhance overall system performance under multi-tasking operating systems. The Command
Tag (Drive Channel) function controls the SCSI command tag queuing support for each drive channel. This function should normally remain enabled. Disable this function only when using older drives that do not support command tag queuing.
[Screen: “Volume Creation” window with the Tag Queuing field highlighted]
3.7.3.2 Delete Volume Set
To delete volume set from a RAID set, move the cursor bar to the “Delete Volume Set” item, then press the Enter key. The
“Volume Set Functions” menu will show all Raid Set # items.
Move the cursor bar to a RAID set number, then press the Enter key to show all volume sets within that RAID set. Move the cursor to the volume set number that is to be deleted and press
Enter key to delete it.
[Screen: “Select Volume To Delete” list with the “Delete Volume Set” Yes/No confirmation]
3.7.3.3 Modify Volume Set
Use this option to modify volume set configuration. To modify volume set values from RAID set system function, move the cursor bar to the “Modify Volume Set” item, then press the Enter key. The “Volume Set Functions” menu will show all RAID set items. Move the cursor bar to a RAID set number item, then press the Enter key to show all volume set items. Select the volume set from the list to be changed, press the Enter key to modify it.
[Screen: “Select Volume To Modify” list and the volume parameters to be modified]
As shown, volume information can be modified on this screen. Choose this option to display the properties of the selected volume set; all values can be modified, with the exception that the capacity can only be changed for the last volume set of an expanded RAID set.
3.7.3.3.1 Volume Growth
Use the “Expand Raid Set” function to add disks to a RAID set. The additional capacity can be used to enlarge the last volume set size or to create another volume set. The “Modify Volume Set” function supports this volume modification. To expand the last volume set capacity, move the cursor bar to the “Capacity” item and enter the capacity size. When the above action is finished, press the ESC key and select the Yes option to complete the action. The last volume set then starts to expand its capacity.
Note the following when expanding an existing volume:
• Only the last volume can expand capacity.
• When expanding volume capacity, you cannot modify the stripe size or the RAID level at the same time.
• You can expand volume capacity, but you cannot reduce it.
• After volume expansion, the volume capacity cannot be decreased.
For expansion greater than 2TB:
• If your operating system is installed on the volume, do not expand the volume capacity beyond 2TB; current operating systems cannot boot from a device with a capacity greater than 2TB.
• Expansion over 2TB uses LBA64 mode. Please make sure your OS supports LBA64 before expanding.
3.7.3.3.2 Volume Set Migration
Migrating occurs when a volume set is migrating from one RAID level to another, when a volume set strip size changes, or when a disk is added to a RAID set. Migration status is displayed in the volume state area of the “Volume Set Information” screen.
[Screen: “Volume Set Information” showing the volume state (RAID Level, Cache Attribute, Tag Queuing)]
3.7.3.4 Check Volume Set
Use this option to verify the correctness of the redundant data in a volume set. For example, in a system with a dedicated parity disk drive, a volume set check entails computing the parity of the data disk drives and comparing those results to the contents of the dedicated parity disk drive. To check volume set, move the cursor bar to the “Check Volume Set” item, then press the
Enter key. The “Volume Set Functions” menu will show all RAID set number items. Move the cursor bar to a RAID set number item and then press the Enter key to show all volume set items.
Select the volume set to be checked from the list and press the Enter key to select it. After completing the selection, the confirmation screen appears; press the Yes option to start the check.
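To make the parity comparison concrete, here is an informal sketch (illustrative Python, not Areca firmware) of the kind of consistency check described above for an XOR-parity (RAID 3/5 style) stripe: the stored parity block should equal the XOR of the corresponding data blocks.

# Informal sketch of a redundancy check for an XOR-parity volume (RAID 3/5 style).
# Illustrative only; the controller firmware performs this on real stripes.
from functools import reduce

def parity_block(data_blocks):
    """XOR all data blocks of a stripe together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*data_blocks))

def check_stripe(data_blocks, stored_parity):
    """A stripe is consistent when the recomputed parity matches the stored parity."""
    return parity_block(data_blocks) == stored_parity

# Example: three 4-byte data blocks and their stored parity.
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xff\x00\xff\x00"]
parity = parity_block(data)              # what the parity drive should hold
print(check_stripe(data, parity))        # True  -> redundant data is correct
print(check_stripe(data, b"\x00" * 4))   # False -> inconsistency found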
[Screen: “Select Volume To Check” list with the “Check The Volume ?” Yes/No confirmation]
3.7.3.5 Stop Volume Set Check
Use this option to stop all of the “Check Volume Set” operations.
3.7.3.6 Display Volume Set Info.
To display volume set information, move the cursor bar to the desired volume set number and then press the Enter key. The
“Volume Set Information” screen will be shown. You can only view the information of this volume set in this screen.
[Screen: “Volume Set Information” window (RAID Level, Cache Attribute, Tag Queuing)]
3.7.4 Physical Drives
[Screen: “Physical Drives” menu (Create Pass-Through Disk, Modify Pass-Through Disk, Delete Pass-Through Disk, Identify Selected Drive)]
Choose this option from the main menu to select a physical disk and perform the operations listed above.
3.7.4.1 View Drive Information
[Screen: drive information for a selected channel (Ch01), including Disk Capacity and SMART attributes: Read Error Rate, Spinup Time, Reallocation Count, Seek Error Rate, Spinup Retries, Calibration Retries]
When you choose this option, the physical disks connected to the SATA RAID controller are listed. Move the cursor to the desired drive and press Enter to view drive information.
3.7.4.2 Create Pass-Through Disk
[Screen: “Create Pass-Through Disk” settings (SCSI LUN, Cache Mode) with Yes/No confirmation]
A pass-through disk is not controlled by the SATA RAID controller firmware and thus can not be a part of a volume set. The disk is available directly to the operating system as an individual disk. It is typically used on a system where the operating system is on a disk not controlled by the SATA RAID controller firmware. The SCSI Channel, SCSI ID, SCSI LUN, Cache Mode, and
Tag Queuing must be specified to create a pass-through disk.
3.7.4.3 Modify a Pass-Through Disk
Use this option to modify pass-through disk attributes. To select and modify a pass-through disk from the pool of pass-through disks, move the cursor bar to the “Modify Pass-Through Drive” option and then press the Enter key. The “Physical Drive Function” menu will show all pass-through drive number options.
Move the cursor bar to the desired item and then press the
Enter key to show all pass-through disk attributes. Select the parameter from the list to be changed and then press the Enter key to modify it.
3.7.4.4 Delete Pass-Through Disk
[Screen: “Delete Pass-Through Disk” selection with Yes/No confirmation]
To delete a pass-through drive from the pass-through drive pool, move the cursor bar to the “Delete Pass-Through Drive” item, then press the Enter key. The “Delete Pass-Through confirmation” screen will appear; select Yes option to delete it.
3.7.4.5 Identify Selected Drive
[Screen: “Identify Selected Drive” with the “Select The Drive” list]
You can use the “Identify Selected Drive” feature to prevent removing the wrong drive; the fault LED of the selected drive will blink while “Identify Selected Drive” is selected.
3.7.5 Raid System Function
To set the RAID system function, move the cursor bar to the main menu and select the “Raid System Function” item and then press
Enter key. The “Raid System Function” menu will show multiple items. Move the cursor bar to an item, then press Enter key to select the desired function.
[Screen: main menu with “Raid System Function” selected]
3.7.5.1 Mute The Alert Beeper
[Screen: “Raid System Function” menu with “Mute The Alert Beeper” selected (Yes/No)]
The “Mute The Alert Beeper” function item is used to control the
SATA RAID controller beeper. Select Yes option and press the
Enter key in the dialog box to turn the beeper off temporarily.
The beeper will still activate on the next event.
3.7.5.2 Alert Beeper Setting
[Screen: “Alert Beeper Setting” option (Disabled/Enabled)]
The “Alert Beeper Setting” item is used to “Disable” or “Enable” the SATA RAID controller alarm tone generator. Select “Disabled” and press the Enter key in the dialog box to turn the beeper off.
3.7.5.3 Change Password
[Screen: “Change Password” option with the “Enter New Password” dialog]
The manufacturer default password is set to 0000. The password option allows the user to set or clear the password protection feature. Once the password has been set, the user can monitor and configure the controller only by providing the correct password. This feature is used to protect the SATA RAID system from unauthorized access. The controller will check the password only when entering the main menu from the initial screen. The system will automatically go back to the initial screen if it does not receive any command in 20 seconds.
To set or change the password, move the cursor to the “Raid System Function” screen and select the “Change Password” item. The “Enter New Password” screen will appear.
To disable the password, press only the Enter key in both the “Enter New Password” and “Re-Enter New Password” columns. The existing password will be cleared. No password checking will occur when entering the main menu.
3.7.5.4 JBOD/RAID Function
JBOD is an acronym for “Just a Bunch Of Disks”. In JBOD mode, a group of hard disks attached to the RAID controller is not set up in any type of RAID configuration. All drives are available to the operating system as individual disks. JBOD does not provide data redundancy. The user needs to delete the RAID set when changing the option from the RAID to the JBOD function.
[Screen: “JBOD/RAID Function” option with JBOD selected]
3.7.5.5 Background Task Priority
[Screen: “Background Task Priority” (Raid Rebuild Priority) pop-up with options UltraLow (5%), Low (20%), Medium (50%)]
The “Background Task Priority” is a relative indication of how much time the controller devotes to a rebuild operation. The
SATA RAID controller allows the user to choose the rebuild priority (Ultralow, Low, Normal, High) to balance volume set access and rebuild tasks appropriately.
3.7.5.6 Maximum SATA Mode
[Screen: “Maximum SATA Mode” option in the “Raid System Function” menu]
The SATA RAID controller can support up to SATA II, which runs up to 300MB/s, twice as fast as SATA150. NCQ is a command protocol in Serial ATA that can only be implemented on native Serial ATA hard drives. It allows multiple commands to be outstanding within a drive at the same time. Drives that support NCQ have an internal queue where outstanding commands can be dynamically rescheduled or re-ordered, along with the necessary tracking mechanisms for outstanding and completed portions of the workload. The SATA RAID controller allows the user to choose the SATA mode: SATA150, SATA150+NCQ, SATA300, or SATA300+NCQ.
3.7.5.7 HDD Read Ahead Cache
Allow Read Ahead (Default: Enabled): when enabled, the drive's read-ahead cache algorithm is used, providing maximum performance under most circumstances.
[Screen: “HDD Read Ahead Cache” option in the “Raid System Function” menu]
3.7.5.8 Stagger Power On
In a PC system with only one or two drives, the power supply can provide enough power to spin up both drives simultaneously. But in systems with more than two drives, the startup current from spinning up all the drives at once can overload the power supply, causing damage to the power supply, disk drives, and other system components. This damage can be avoided by allowing the host to stagger the spin-up of the drives. Newer SATA drives support staggered spin-up capabilities to boost reliability.
Stagger spin-up is a very useful feature for managing multiple disk drives in a storage subsystem. It gives the host the ability to spin up the disk drives sequentially or in groups, allowing the drives to come ready at the optimum time without straining the system power supply. Staggering drive spin-up in a multiple drive environment also avoids the extra cost of a power supply designed to meet short-term startup power demand as well as steady state conditions.
The Areca RAID controller includes an option that lets the customer select the stagger power-up value used when spinning up the disk drives sequentially. The value can be selected from 0.4s to 6s per step, where each step powers up one drive.
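As a rough, back-of-the-envelope illustration (not taken from the manual; the 24-drive figure is just an example), the chosen step trades startup time against peak inrush current:

# Informal sketch: how the stagger step spreads drive spin-up over time.
# Values are illustrative; the controller applies the chosen step per drive.
def last_drive_delay(num_drives, step_seconds):
    """Seconds after power-on at which the last drive begins to spin up."""
    return (num_drives - 1) * step_seconds

for step in (0.4, 1.0, 6.0):
    print(f"24 drives, {step}s step -> last drive starts at {last_drive_delay(24, step):.1f}s")
# 0.4s step: about 9.2s; 6s step: about 138s, but with the lowest peak startup current.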
[Screen: “Stagger Power On” pop-up listing the selectable step values (0.4s to 6s)]
3.7.5.9 Empty HDD Slot LED
The firmware has added the “Empty HDD Slot LED” option to set the fault LED light “ON” or “OFF” when there is no HDD installed in a slot. If each slot has a power LED to identify an installed HDD, the user can set this option to “OFF”. If the option is set to “ON”, the RAID controller will light the fault LED when no HDD is installed in the slot.
[Screen: “Empty HDD slot LED” option (On/OFF)]
3.7.5.10 HDD SMART Status Polling
An external RAID enclosure has a hardware monitor in the dedicated backplane that can report HDD temperature status to the controller. However, PCI cards do not use backplanes when the drives are internal to the main server chassis, and this type of installation cannot report the HDD temperature to the controller. For this reason, “HDD SMART Status Polling” was added to enable scanning of the HDD temperature. It is necessary to enable the “HDD SMART Status Polling” function before SMART information is accessible. This function is disabled by default.
[Screen shot: McBIOS RAID manager, Raid System Function > HDD SMART Status Polling, options Disabled / Enabled]
The above screen shot shows how to change the McBIOS RAID manager setting to enable the polling function.
3.7.5.11 Controller Fan Detection
Included in the product box is a field replaceable passive heatsink to be used only if there is enough airflow to adequately cool the passive heatsink.
The "Controller Fan Detection" function is available in firmware version 1.36 (2005-05-19) and later to prevent the buzzer warning. When using the passive heatsink, disable the "Controller Fan Detection" function through this McBIOS RAID manager setting.
The following screen shot shows how to change the McBIOS
RAID manager setting to disable the beeper function. (This function is not available in the web browser setting.)
[Screen shot: McBIOS RAID manager, Raid System Function > Controller Fan Detection, options Disabled / Enabled]
3.7.5.12 Disk Write Cache Mode
User can set the “Disk Write Cache Mode” to Auto, Enabled, or
Disabled. “Enabled” increases speed, “Disabled” increases reliability.
[Screen shot: McBIOS RAID manager, Raid System Function > Disk Write Cache Mode]
3.7.5.13 Capacity Truncation
The SATA RAID controller uses drive truncation so that drives from different vendors are more likely to be usable as spares for each other. Drive truncation slightly decreases the usable capacity of a drive that is used in redundant units.
The controller provides three truncation modes in the system configuration: “Multiples Of 10G”, “Multiples Of 1G”, and “No
Truncation”.
Multiples Of 10G: If you have 120 GB drives from different vendors, chances are that the capacities vary slightly. For example, one drive might be 123.5 GB and the other 120 GB. "Multiples Of 10G" truncates the capacity below the tens digit, giving both drives the same usable capacity so that one can replace the other.
Multiples Of 1G: If you have 123 GB drives from different vendors, chances are that the capacities vary slightly. For example, one drive might be 123.5 GB and the other 123.4 GB. "Multiples Of 1G" truncates the fractional part, giving both drives the same usable capacity so that one can replace the other.
No Truncation: The capacity is not truncated.
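The following minimal sketch (illustration only; the controller performs this internally) reproduces the truncation arithmetic for the example capacities above:

# Illustration only: the controller applies this truncation internally.
def truncate_gb(capacity_gb, mode):
    if mode == "Multiples Of 10G":
        return (int(capacity_gb) // 10) * 10     # drop everything below the tens digit
    if mode == "Multiples Of 1G":
        return int(capacity_gb)                  # drop the fractional part
    return capacity_gb                           # "No Truncation"

for cap in (123.5, 123.4, 120.0):
    print(cap, truncate_gb(cap, "Multiples Of 10G"), truncate_gb(cap, "Multiples Of 1G"))
# 123.5 -> 120 / 123, 123.4 -> 120 / 123, 120.0 -> 120 / 120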
[Screen shot: McBIOS RAID manager, Raid System Function > Capacity Truncation, options To Multiples of 10G / To Multiples of 1G / No Truncation]
3.7.6 Ethernet Configuration (12/16/24-port)
Use this feature to set the controller Ethernet port configuration.
It is not necessary to create reserved disk space on any hard disk for the Ethernet port and HTTP service to function; these functions are built into the controller firmware. To choose the
"Ethernet Configuration" of the controller, move the cursor bar to the main menu “Ethernet Configuration” function item and then press the Enter key.
[Screen shot: McBIOS RAID manager, Main Menu with the Ethernet Configuration item selected]
3.7.6.1 DHCP Function
DHCP (Dynamic Host Configuration Protocol) allows network administrators to centrally manage and automate the assignment of IP (Internet Protocol) addresses on a computer network. When using the TCP/IP protocol (Internet protocol), it is necessary for a computer to have a unique IP address in order to communicate with other computer systems. Without DHCP, the IP address must be entered manually at each computer system. DHCP lets a network administrator supervise and distribute IP addresses from a central point. The purpose of DHCP is to provide the automatic (dynamic) allocation of IP client configurations for a specific time period (called a lease period) and to minimize the work necessary to administer a large IP network.
[Screen shot: McBIOS RAID manager, Ethernet Configuration > DHCP Function, options Disabled / Enabled]
To configure the DHCP function of the controller, move the cursor bar to the "DHCP Function" item, then press the Enter key to show the DHCP setting. Select the "Disabled" or "Enabled" option to enable or disable the DHCP function. If DHCP is disabled, it will be necessary to manually enter a static IP address that does not conflict with other devices on the network.
3.7.6.2 Local IP address
If you intend to set up your client computers manually (no DHCP), make sure that the assigned IP address is in the same range as the default router address and that it is unique to your private network. However, it is highly recommended to use DHCP if that option is available on your network. An IP address allocation scheme will reduce the time it takes to set up client computers and eliminate the possibility of administrative errors and duplicate addresses. To manually configure the local IP address of the controller, move the cursor bar to the "Local IP Address" item, then press the Enter key to show the default address setting in the SATA RAID controller. You can then reassign the static IP address of the controller.
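As a rough aid (not part of the controller firmware; the router address and subnet below are hypothetical), a sanity check of a manually chosen static address might look like this:

# Hypothetical subnet and router; the controller address follows the manual's default.
import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/24")        # assumed private network
router = ipaddress.ip_address("192.168.1.1")           # assumed default router
controller = ipaddress.ip_address("192.168.1.100")     # candidate static address

if controller in subnet and controller != router:
    print("address is in the router's range and does not collide with it")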
[Screen shot: McBIOS RAID manager, Ethernet Configuration > Local IP Address, default 192.168.001.100]
3.7.6.3 Ethernet Address
MAC stands for "Media Access Control", and a MAC address is unique to every Ethernet device. On an Ethernet LAN, it is the same as your Ethernet address. When you are connected to a local network from the SATA RAID controller Ethernet port, a correspondence table relates your IP address to the SATA RAID controller's physical (MAC) address on the LAN.
[Screen shot: McBIOS RAID manager, Ethernet Configuration > Ethernet Address display, e.g. 00.04.D9.7F.FF.FF]
3.7.7 View System Events
To view the SATA RAID controller’s system events information, move the cursor bar to the main menu and select the “View
System Events” link, then press the Enter key. The SATA RAID controller’s events screen will appear.
[Screen shot: McBIOS RAID manager, View System Events, columns Time, Device, Event Type, ElapseTime, Errors; e.g. 2004-1-1 12:00:00, H/W Monitor, Raid Powered On]
Choose this option to view the system events information: Time, Device, Event Type, Elapsed Time, and Errors. The RAID system does not have a real-time clock; the time information is the relative time since the SATA RAID controller was powered on.
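A small sketch (hypothetical power-on time and elapsed value, not an Areca tool) of how an administrator can convert the relative event time back into wall-clock time:

# Hypothetical values: add the event's elapsed time to the known power-on moment.
from datetime import datetime, timedelta

powered_on_at = datetime(2008, 2, 1, 9, 30)        # when the controller was powered on (assumed)
event_elapsed = timedelta(hours=5, minutes=12)     # "ElapseTime" shown for the event
print(powered_on_at + event_elapsed)               # 2008-02-01 14:42:00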
3.7.8 Clear Events Buffer
Use this feature to clear the entire events buffer.
3.7.9 Hardware Monitor
To view the RAID controller’s hardware monitor information, move the cursor bar to the main menu and click the “Hardware Monitor” link. The hardware information screen appears.
The "Hardware Monitor" provides the temperature and fan speed (I/O processor fan) of the SATA RAID controller.
[Screen shot: McBIOS RAID manager, Hardware Monitor, showing battery status and per-HDD temperatures]
3.7.10 System Information
Choose this option to display main processor, CPU instruction
Cache and data cache size, firmware version, serial number, system memory/speed and controller model name. To check the system information, move the cursor bar to “System Information” item, then press Enter key. All relevant controller information will be displayed.
[Screen shot: McBIOS RAID manager, The System Information: Main Processor 500MHz IOP331, CPU ICache 32KB, CPU DCache 32KB/Write Back, System Memory 128MB/333MHz, Firmware Version V1.31 2004-5-31, BOOT ROM Version V1.34 2004-9-29]
4. Driver Installation
This chapter describes how to install the SATA RAID controller driver to your operating system. The installation procedures use the following terminology:
Installing the operating system on a SATA volume
You have a new drive configuration without an operating system and want to install the operating system on a disk drive managed by the SATA RAID controller. In this case, the driver installation is part of the operating system installation.
Installing SATA RAID controller into an existing operating system
The computer has an existing operating system installed and the
SATA RAID controller is being installed as a secondary controller.
Have all required system hardware and software components on hand before proceeding with the setup and installation.
Materials required:
• Microsoft Windows 2000/XP/2003, Linux, FreeBSD or other supported OS installation CD
• SATA RAID Controller Device Drivers Software CD
• SATA RAID controller
4.1 Creating the Driver Diskettes
The software CD disc shipped with the SATA RAID controller is a self-booting CD. In order to create driver diskettes for Windows, Linux, FreeBSD or other installation drivers, your system must support booting from the CD-ROM.
If you do not have the software CD disc with the package, contact your local dealer or you can also download the latest version drivers for Windows 2000/XP/2003/Vista, Linux, FreeBSD and more from the Areca web site at http://www.areca.com.tw
These driver diskettes are intended for use with new operating system installations. Determine the correct kernel version and identify which diskette images contain drivers for that kernel. If the driver file ends in .img, create the appropriate driver diskette using the "dd" utility. The following steps are required to create the driver diskettes:
1. The computer system BIOS must be set to boot up from the CD-ROM.
2. Insert the SATA controller driver software CD disc into the CD-ROM drive.
3. The system will boot up from the CD-ROM drive.
Note:
It will take about 5 minutes to boot up the Knoppix GNU/Linux Live CD.
4. Create the driver diskette; for example, making the CentOS 5 driver diskette:
4a. Execute xterm by clicking the XTerm icon on the left-bottom toolbar.
4b. Change the path to the specific driver image:
cd /cdrom/PACKAGES/Linux/DRIVER/CentOS_5
4c. Dump the driver image onto a floppy diskette using the "dd" utility. Command format: dd if=<image file> of=<destination>
dd if=driver.img of=/dev/fd0
4d. When the operation is complete, the following messages are shown:
2880+0 records in
2880+0 records out
1474560 bytes (1.5 MB) copied, 97.5903 seconds, 15.1 kB/s
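As a quick sanity check (background arithmetic, not part of the manual's procedure), the byte count reported by dd corresponds exactly to a 1.44 MB floppy image:

# 2880 records of 512 bytes each is exactly a 1.44 MB floppy image.
records, sector_bytes = 2880, 512
print(records * sector_bytes)      # 1474560, matching the byte count reported by dd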
The driver diskette is made now. Proceed to the following instruction for installation procedures.
4.2 Driver Installation for Windows
The SATA RAID controller can be used with Microsoft Windows
2000, Windows XP, Windows Server 2003, and Vista. The SATA
RAID controllers support SCSI Miniport and StorPort device driver for Windows Server 2003/Vista.
4.2.1 New Storage Device Drivers in Windows
2003/XP-64/Vista
The Storport driver is new to Windows 2003/XP-64/Vista. Storport implements a new architecture designed for better performance with RAID systems and in Storage Area Network (SAN) environments. Storport delivers higher I/O throughput, enhanced manageability, and an improved miniport interface. Storport better utilizes faster adapters through the use of reduced Delay
Procedure Call (DPC) and improved queue management.
4.2.2 Install Windows 2000/XP/2003/Vista on a
SATA RAID Volume
The following instructions explain how to install the SATA RAID controller driver. For complete details on installing Windows, see the Windows User's Manual.
4.2.2.1 Installation Procedures
The following procedure details installing the SATA RAID controller driver while installing Windows 2000/XP/2003/Vista. Have your bootable Microsoft Windows 2000/XP/2003/Vista CD ready and follow the procedure below to install the SATA RAID controller:
1. Make sure you follow the instructions in Chapter 2 “Hardware
Installation” to install the controller and connect the disk drives or enclosure.
2. Start the system and then press Tab+F6 to access the McBIOS RAID manager. Use the McBIOS RAID manager to create the RAID set and volume set to which you will install Windows.
For details, see Chapter 3 “McBIOS RAID manager”. Once a volume set is created and configured, continue with next step to install the operating system.
3. Insert the Windows setup CD and reboot the system to begin the Windows installation.
Note:
The computer system BIOS must support booting from CD-ROM.
4. Press the F6 key as soon as the Windows screen shows "Setup is Inspecting your Computer's Hardware Configuration". A message stating "Press F6 to Specify Third-party RAID Controller" will display during this time. This must be done or else the Windows installer will not prompt for the driver from the SATA RAID controller and the driver diskette will not be recognized.
5. The next screen will show: "Setup could not determine the type of one or more mass storage devices installed in your system". Select "Specify Additional SCSI Adapter" by pressing S.
6. Windows will prompt you to place the "Manufacturer-supplied hardware support disk" into floppy drive A:. Insert the SATA RAID series driver diskette in drive A: and press the Enter key.
7. Windows will check the floppy. Select the correct card and CPU type for your hardware from the listing and press the Enter key to install it.
8. After Windows scans the hardware and finds the controller, it will display:
“Setup will load support for the following Mass Storage devices:”
“ARECA [Windows X86-64 Storport] SATA/SAS PCI RAID Controller (RAID6-Engine inside)”. Press Enter key to continue and copy the driver files. From this point on, simply follow the Microsoft Windows installation procedure. Follow the on-screen instructions, responding as needed, to complete the installation.
9. After the installation is completed, reboot the system to load the new drivers/operating system.
10. See Chapter 5 in this manual to customize your RAID volume sets using McRAID storage manager.
4.2.2.2 Making Volume Sets Available to Windows
System
When you reboot the system, log in as a system administrator.
Continue with the following steps to make any additional volume sets or pass-through disks accessible to Windows. This procedure assumes that the SATA RAID controller hardware, driver, and Windows are installed and operational in your system.
1. Partition and format the new volume set or disks using Disk
Administrator: a. Choose “Administrative Tools” from the “Start” menu.
b. Choose “Computer Management” from the “Administrative
Tools” menu.
c. Select “Storage”.
d. Select “Disk Management”.
2. Follow the on-screen prompts to write a signature to the drive.
3. Right click on the disk drive and select “Create Volume” from the menu.
4. Follow the on-screen prompts to create a volume set and to give a disk drive letter.
4.2.3 Installing Controller into an Existing Windows 2000/XP/2003/Vista Installation
In this scenario, you are installing the controller in an existing
Windows system. To install the driver:
1. Follow the instructions in Chapter 2, the Hardware Installation
Chapter, to install the controller and connect the disk drives or enclosure.
2. Start the system and then press Tab+F6 to enter the McBIOS RAID manager utility. Use the configuration utility to create the RAID set and volume set. For details, see Chapter 3, McBIOS RAID Manager. Once a volume set is created and configured, continue with installation of the driver.
3. Re-Boot Windows and the OS will recognize the SATA RAID
Controller and launch the “Found New Hardware Wizard”, which guides you in installing the SATA RAID driver.
4. The “Upgrade Device Driver Wizard” will pop-up and provide a choice of how to proceed. Choose “Display a list of known drivers for this device, so that you can choose a specific driver.” and click on “Next”.
5. When the next screen queries the user about utilizing the currently installed driver, click on the “Have Disk” button.
6. When the “Install From Disk” dialog appears, insert the SATA
RAID controller driver diskette or the shipping CD-ROM and type-in or browse to the correct path for the “Copy manufacturer’s files from:” dialog box.
7. After specifying the driver location, the previous dialog box will appear showing the selected driver to be installed. Click the
“Next” button.
8. The “Digital Signature Not Found” screen will appear. Click on
“Yes” to continue the installation.
9. Windows automatically copies the appropriate driver files and rebuilds its driver database.
10. The “Found New Hardware Wizard” summary screen appears; click the “Finish”button.
11. The “System Settings Change” dialog box appears. Remove the diskette from the drive and click Yes option to restart the computer to load the new drivers.
12. See Chapter 5 in this manual for information on customizing your RAID volumes using McRAID storage manager.
4.2.3.1 Making Volume Sets Available to Windows
System
When you reboot the system, log in as a system administrator.
The following steps show how to make any new disk arrays or independent disks accessible to Windows 2000/XP/2003/Vista.
This procedure assumes that the SATA RAID controller hardware, driver, and Windows are installed and operational in your system.
1. Partition and format the new arrays or disks using Disk Administrator: a. Choose “Administrative Tools” from the “Start” menu.
b. Choose “Computer Management” from the “Administrative
Tools” menu.
c. Select “Storage”.
d. Select “Disk Management”.
2. Follow the on-screen prompts to write a signature to the drive.
3. Right click on the drive and select “Create Volume” from the menu.
4. Follow the on-screen prompts to create a volume set and to assign a disk drive letter.
4.2.4 Uninstall controller from Windows 2000/
XP/2003/Vista
To remove the SATA RAID controller driver from the Windows system, follow the instructions below.
1. Ensure that you have closed all applications and are logged in with administrative rights.
2. Open "Control Panel", start the "Add/Remove Programs" applet, and uninstall any software for the SATA RAID controller.
3. Go to “Control Panel” and select “System”. Select the “Hardware” tab and then click the “Device Manager” button. In “Device Manager”, expand the “SCSI and RAID Controllers” section.
Right click on the “Areca SATA RAID Adapter” and select “uninstall”.
4. Click “Yes” to confirm removing the SATA RAID driver. The prompt to restart the system will then be displayed.
4.3 Driver Installation for Linux
This chapter describes how to install the SATA RAID controller driver to Red Hat Linux, and SuSE Linux. Before installing the SATA
RAID driver to the Linux, complete the following actions:
1. Install and configure the controller and hard disk drives according to the instructions in Chapter 2 Hardware Installation.
2. Start the system and then press Tab+F6 to enter the McBIOS
RAID manager configuration utility. Use the McBIOS RAID manager utility to create the RAID set and volume set. For details, see
Chapter 3, McBIOS RAID Manager.
If you are using a Linux distribution for which there is not a compiled driver available from Areca, you can copy the source from the
SATA software CD or download the source from the Areca website and compile a new driver.
Compiled and tested drivers for Red Hat and SuSE Linux are included on the shipped CD. You can download updated versions of compiled and tested drivers for Red Hat or SuSE Linux from the Areca web site at http://www.areca.com.tw. Included in these downloads is the Linux driver source, which can be used to compile the updated version driver for RedHat, SuSE and other versions of Linux.
Please refer to the “readme.txt” file on the included Areca software
CD or website to make driver diskette and to install driver to the system.
4.4 Driver Installation for FreeBSD
This chapter describes how to install the SATA RAID controller driver to FreeBSD. Before installing the SATA RAID driver to Free-
BSD, complete following actions:
1. Install and configure the controller and hard disk drives according to the instructions in Chapter 2, Hardware Installation.
2. Start the system and then press Tab+F6 to enter the McBIOS
RAID Manager configuration utility. Use the McBIOS RAID manager utility to create the raid set and volume set. For details, see Chapter 3, McBIOS RAID Manager.
The supplied software CD that came with the SATA RAID controller includes compiled and tested drivers for FreeBSD 4.x (4.2 and onwards) and 5.x (5.2 and onwards). To check if a more current version driver is available, please see the Areca web site at http://www.areca.com.tw.
Please refer to the “readme.txt” file on the SATA RAID controller software CD or website to make driver diskette and to install driver to the system.
4.5 Driver Installation for Solaris 10
Please refer to the “readme.txt” file on the software CD or a manual from website: http://www.areca.com.tw
4.6 Driver Installation for Mac 10.x
After hardware installation, the SATA disk drives connected to the
SATA RAID Adapter must be configured and the volume set units initialized by the controller before they are ready to use by the system.
You must have administrative level permissions to install Areca Mac driver & software. You can install driver& software on your Power
Mac G5 or Mac Pro as below:
1. Insert the Areca Mac driver & software CD that came with your Areca SATA RAID Adapter.
2. Double-click on the following file that resides at <CD-ROM>\ packages\MacOS to add the installer on the Finder.
a). install_mraid_mac.zip (For Power Mac G5) b). install_mraid_macpro.zip (For Mac Pro)
3. Launch the installer by double-clicking the install_mraid_mac or install_mraid_macpro on the Finder.
4. Follow the installer steps to install Areca driver, MRAID (archttp64 and arc_cli utility) at the same time.
5. Reboot your Power Mac G5 or Mac Pro system.
Normally archttp64 and arc_cli are installed at the same time for the Areca SATA RAID adapter. Once archttp64 and arc_cli have been installed, the background task starts automatically each time you start your computer. An MRAID icon appears on your desktop; this icon is used to start up the McRAID storage manager (through archttp64) and the arc_cli utility. You can also upgrade only the driver, archttp64 or arc_cli individually from the items that reside at <CD-ROM>\packages\MacOS. Arc_cli performs many tasks at the command line. You can download the arc_cli manual from the Areca website or from the software CD <CD-ROM>\DOCS directory.
4.7 Driver Installation for UnixWare 7.1.4
Please refer to the “readme.txt” file on the software CD or a manual from website: http://www.areca.com.tw
4.8 Driver Installation for NetWare 6.5
Please refer to the "readme.txt" file on the software CD or a manual from the website: http://www.areca.com.tw
5. ArcHttp Proxy Server Installation
Overview
After hardware installation, the SATA disk drives connected to the SATA
RAID controller must be configured and the volume set units initialized before they are ready to use.
The user interface for these tasks can be accessed through the builtin configuration that resides in the controller’s firmware. It provides complete control and management of the controller and disk arrays, eliminating the need for additional hardware or software.
In addition, a software utility to configure the SATA RAID is provided on the CD delivered with SATA controller. This software CD contains the software utility that can monitor, test, and support the SATA RAID controller. The software utility and McRAID storage manager can configure and monitor the SATA RAID controller via ArcHttp proxy server.
The following table outlines their functions:
Configuration Utility                                      Operating System Supported
McBIOS RAID Manager                                        OS-Independent
McRAID Storage Manager (via ArcHttp proxy server)          Windows 2000/XP/2003, Linux, FreeBSD, Solaris and Mac
SAP Monitor (Single Admin Portal to scan for multiple
RAID units in the network, via ArcHttp proxy server)       Windows 2000/XP/2003
From version 1.6 onward, the HTTP management software (ArcHttp) runs as a service or daemon and automatically starts the proxy for all controllers found. This way the controllers can be managed remotely without having to sign in to the server. The HTTP management software (ArcHttp) also integrates the General Configuration, Mail Configuration and SNMP Configuration; these can be configured locally or remotely in a standard web browser.
Note:
If your controller has an onboard LAN port, you do not need to install the ArcHttp proxy server; you can use the McRAID storage manager directly.
5.1 For Windows
You must have administrative level permissions to install SATA
RAID software. This procedure assumes that the SATA RAID hardware and Windows are installed and operational in your system.
Screen captures in this section are taken from a Windows XP installation. If you are running another version of Windows, your installation screens may look different, but the ArcHttp proxy server installation is essentially the same.
1. Insert the RAID controller software CD in the CD-ROM drive.
2. Run the setup.exe file that resides at: <CD-ROM>\PACKAGES\
Windows\http\setup.exe on the CD-ROM.
3. The screen shows “Preparing to Install”.
Follow the on-screen prompts to complete ArcHttp proxy server software installation.
A progress bar appears that measures the progress of the ArcHttp setup. When this completes, you have finished the ArcHttp proxy server software setup.
4. After a successful installation, the “Setup Complete” dialog box is displayed.
Click the “Finish” button to complete the installation.
Click on the "Start" button in the Windows task bar, then click "Programs", select "McRAID" and run "ArcHttp proxy server". The ArcHttp proxy server dialog box appears.
1. When you select "Controller#01(PCI)" and then click the "Start" button, the web browser McRAID storage manager appears.
2. If you select “Cfg Assistant” then click “Start” button. The
“ArcHttp Configuration” appears. (Please refer to section 5.6
ArcHttp Configuration)
5.2 For Linux
You should have administrative level permissions to install SATA
RAID software. This procedure assumes that the SATA RAID hardware and Linux are installed and operational in your system.
The following details the Linux installation procedure of the SATA
RAID controller software.
The ArcHttp proxy server is provided on the software CD delivered with the SATA card, or it can be downloaded from www.areca.com.tw. The firmware-embedded McRAID storage manager can configure and monitor the SATA RAID controller via the ArcHttp proxy server.
1. Login as root. Copy the ArcHttp file to a local directory.
(1). Insert the SATA RAID controller CD in the CD-ROM drive.
(2). Copy the <CD-ROM>\PACKAGES\Linux\http directory to a local directory (e.g. /usr/local/sbin).
Or
(1). Download from the www.areca.com.tw or from the email attachment.
2. You must have administrative level permissions to install SATA
RAID controller ArcHttp proxy server software. This procedure assumes that the SATA RAID hardware and driver are installed and operational in your system.
The following details are the installation procedure of the SATA
RAID controller for Linux ArcHttp proxy server software.
(1). Run the ArcHttp proxy server using the following command:
Usage: ./archttp32 (TCP_PORT) or ./archttp64 (TCP_PORT), depending on whether your OS is 32-bit or 64-bit.
Parameters: TCP_PORT value = 1~65535. If TCP_PORT is assigned, ArcHttp will start from this port; otherwise it will use the setting in archttpsrv.conf or the default of 81. This is the port address assigned to the first adapter.
For example: archttp64 1553
(2). The ArcHttp server console starts and detects the controller card; the ArcHttp proxy server screen then appears.
Copyright (c) 2004 Areca, Inc. All Rights Reserved.
Areca HTTP proxy server V1.80.240 for Areca RAID controllers.
Controller(s) list
--------------------------------------------
Controller[1](PCI) : Listen to port[1553].
Cfg Assistant : Listen to port[1554].
Binding IP:[0.0.0.0]
Note: IP[0.0.0.0] stands for any ip bound to this host.
--------------------------------------------
##############################
Press CTRL-C to exit program!!
##############################
Controller [1] Http: New client [9] accepted
Controller [1] Http: New Recv 243 bytes
Controller [1] Http: Send [174] bytes back to the client
(3). If you need the “Cfg Assistant”, please refer to section 5.6
ArcHttp Configuration.
(4). See the next chapter detailing the McRAID storage manager to customize your RAID volume set.
5.3 For FreeBSD
You should have administrative level permissions to install SATA
RAID software. This procedure assumes that the SATA RAID hardware and FreeBSD are installed and operational in your system.
The following details FreeBSD installation procedure of the SATA
RAID controller software.
1. Insert the RAID controller CD in the CD-ROM drive.
2. Copy the <CD-ROM>\PACKAGES\FreeBSD\http directory to a local directory.
The following steps are the same as for Linux; please see section 5.2 For Linux.
5.4 For Solaris 10 x86
You must have administrative level permissions to install SATA RAID software. This procedure assumes that the SATA RAID hardware and Solaris are installed and operational in your system.
The following details Solaris installation procedure of the SATA
RAID controller software.
1. Insert the RAID subsystem CD in the CD-ROM drive.
2. Copy the <CD-ROM>\PACKAGES\Solaris\http directory to a local directory. The following steps are the same as for Linux; please see section 5.2 For Linux.
5.5 For Mac OS 10.x
The ArcHttp proxy server is provided on the software CD delivered with the SATA card, or it can be downloaded from www.areca.com.tw. The
firmware-embedded McRAID storage manager can configure and monitor the SATA RAID controller via the ArcHttp proxy server. For the ArcHttp proxy server on Mac, please refer to Chapter 4.6, Driver Installation for Mac 10.x, or to the Mac_manual_xxxx.pdf that resides in the <CD-ROM>\DOCS directory on the CD. You can install the driver, archttp64 and arc-cli from the software CD <CD-ROM>\packages\MacOS directory at the same time.
5.6 ArcHttp Configuration
The ArcHttp proxy server automatically assigns one additional port for its own configuration. To change the ArcHttp configuration stored in "archttpsrv.conf" (for example the General Configuration, Mail Configuration, and SNMP Configuration), start a web browser and enter http://[computer IP address]:[cfg port number]. For instance, with the console output shown earlier, the "Cfg Assistant" listens on port 1554.
The ArcHttp configuration starts.
• General Configuration
Binding IP 0.0.0.0: You can choose either local administration or remote administration to connect with a web browser.
Binding IP 127.0.0.1: Use local administration to connect with a web browser.
Binding IP 192.168.0.44: Use remote administration to connect with a web browser.
HTTP Port#: Value 1~65535
Display HTTP Connection Information To Console: Select Yes to show Http send bytes and receive bytes information in the console.
Scanning PCI Device: Select “Yes” for ARC-1XXX series adapter
Scanning RS-232 Device: No
Scanning Inband Device: No
• Mail Configuration
When you open the mail configuration page, you will see the following settings:
SMTP server IP Address: Enter the SMTP server IP address, which is not the McRAID storage manager IP. Ex: 192.168.0.2
Sender Name: Enter the sender name that will be shown in the outgoing mail. Ex: RaidController_1
Mail address: Enter the sender email address that will be shown in the outgoing mail; do not use an IP address in place of the domain name.
Account: Enter the valid account if your SMTP mail server needs authentication.
Password: Enter the valid password if your SMTP mail server needs authentication.
MailTo Name: Enter the alert receiver name that will be shown in the outgoing mail.
Mail Address: Enter the alert receiver mail address.
Note:
Please make sure you have completed the mail address fields before you submit the mail configuration.
• SNMP Trap Configuration
Please refer to section 6.8.4, SNMP Configuration (12/16/24-port).
Enter the configuration and submit. After the ArcHttp configuration has been successfully submitted, the ArcHttp console restarts.
Note:
For the Event Notification Table, refer to Appendix D.
After you confirm and submit the configuration, you can use the "Generate Test Event" feature to make sure these settings are correct.
6. Web Browser-based Configuration
Before using the firmware-based browser McRAID storage manager utility, do the initial setup and installation of this product. If you need to boot up the operating system from a volume set, you must first create a RAID volume by using McBIOS RAID Manager. Please refer to section 3.3 Using Quick Volume /Raid Setup Configuration for information on creating this initial volume set.
The McRAID storage manager is a firmware-based utility, which is accessible via the browser installed on your operating system. The web browser-based McRAID storage manager is an HTML-based application, which utilizes the browser (IE, Netscape, Mozilla, etc.) installed on your monitor station.
It can be accessed through the in-band PCI-X/PCIe bus or the out-of-band Ethernet port. The in-band method uses the ArcHttp proxy server to launch the web browser-based McRAID storage manager. The firmware-embedded web browser-based McRAID storage manager allows local or remote access from any standard internet browser via a LAN or WAN, with no software or patches required. The firmware contains an SMTP manager that monitors all system events, and the user can select single or multiple notifications to be sent via LAN with "plain English" e-mails. The firmware-embedded SNMP agent allows remote monitoring of events via LAN with no additional SNMP agent required. The McRAID storage manager can:
• Create RAID set,
• Expand RAID set,
• Define volume set,
• Add physical drive ,
• Modify volume set,
• Modify RAID level/stripe size,
• Define pass-through Disk drives,
• Modify system function,
• Update firmware, and
• Designate drives as hot spares.
6.1 Start-up McRAID Storage Manager
With the McRAID storage manager, you can locally manage a system containing a SATA RAID controller that has Windows,
Linux, or another supported operating system, and a supported browser. A locally managed system requires all of the following components:
• A supported web browser, which should already be installed on
the system.
• Install ArcHttp proxy server on the SATA RAID system. (Refer to
Chapter 5, Archttp Proxy Server Installation)
• Remote and managed systems must have a TCP/IP connection.
• Start-up McRAID Storage Manager from Windows
Local Administration
Screen captures in this section are taken from a Windows XP installation. If you are running another version of Windows, your screens may look different, but the ArcHttp proxy server installation is essentially the same.
1. To start the McRAID storage manager for browser-based management, select "Controller#01(PCI)" and then click the "Start" button.
The “Enter Network Password” dialog screen appears, type the user name and password. The SATA RAID controller default user name is “admin” and the password is “0000”. After entering the user name and password, press “Enter” to access the McRAID storage manager.
• Start-up McRAID Storage Manager from Linux/
FreeBSD/Solaris/Mac Local Administration
To configure the SATA RAID controller, you need to know its IP address. You can find the IP address assigned by the ArcHttp proxy server installation: Binding IP:[X.X.X.X] gives the [Computer IP Address], and the controller listens to port [Port Number].
(1). Launch your McRAID storage manager by entering http://[Computer IP Address]:[Port Number] in the web browser.
(2). When the connection is established, the "System Login" screen appears. The SATA RAID controller default User Name is "admin" and the Password is "0000".
• Start-up McRAID Storage Manager Through Ethernet port (Out-of-Band)
Areca offers an alternative means of communication for the PCIe RAID controller: the web browser-based McRAID storage manager program. The user can access the built-in configuration without the system needing to be up and running the ArcHttp proxy server. The web browser-based McRAID storage manager program is an HTML-based application, which utilizes the browser installed on your remote system.
To ensure proper communication between the PCIe RAID controller and the web browser-based McRAID storage manager, connect the RAID controller LAN port to any LAN switch port.
The controller has the TCP/IP and web browser-based RAID manager embedded in the firmware. The user can remotely manage the RAID controller without adding any user-specific software (platform independent) via standard web browsers directly connected to the 10/100 RJ45 LAN port.
To configure the RAID controller on a remote machine, you need to know its IP address. The IP address is shown by default in the McBIOS RAID manager under the "Ethernet Configuration" or "System Information" option. Launch your firmware-embedded TCP/IP and web browser-based McRAID storage manager by entering http://[IP Address] in the web browser.
Note:
You can find controller Ethernet port IP address in McBIOS
RAID manager “System Information” option.
6.2 McRAID Storage Manager
The McRAID storage manager start-up configuration screen displays the current configuration of your SATA RAID controller. It displays the Raid Set List, Volume Set List, and Physical Disk List.
The RAID set information, volume set information, and drive information can also be viewed by clicking on the “Raid Set Hierarchy” screen. The current configuration can also be viewed by clicking on
“Raid Set Hierarchy” in the main menu.
To display RAID set information, move the mouse cursor to the desired RAID set number, then click it. The RAID set information will display. To display volume set information, move the mouse cursor to the desired volume set number, then click it. The volume set information will display. To display drive information, move the mouse cursor to the desired physical drive number, then click it.
The drive information will display.
6.3 Main Menu
The main menu shows all available functions, accessible by clicking on the appropriate link.
Individual Category        Description
Quick Function             Create a default configuration, which is based on the number of physical disks installed; it can modify the volume set Capacity, Raid Level, and Stripe Size.
RaidSet Functions          Create a customized RAID set.
VolumeSet Functions        Create customized volume sets and modify the existing volume sets' parameters.
Physical Drives            Create pass-through disks and modify the existing pass-through drives' parameters. Also provides the function to identify disk drives (blinking fault LED).
System Controls            Setting the RAID system configuration.
Information                Viewing the controller information. The "RaidSet Hierarchy" can be viewed through the "RaidSet Hierarchy" item.
6.4 Quick Function
Note:
In Quick Create, your volume set is automatically configured based on the number of disks in your system. Use the “Raid
Set Functions” and “Volume Set Functions” if you prefer to customize your system.
The number of physical drives in the SATA RAID controller determines the RAID levels that can be implemented with the RAID set.
You can create a RAID set associated with exactly one volume set.
The user can change the RAID level, stripe size, and capacity. A hot spare option is also created depending upon the existing configuration.
Click the “Confirm The Operation” check box and click on the “Submit” button in the “Quick Create” screen, the RAID set and volume set will start to initialize.
Note:
If the volume capacity exceeds 2TB, the controller will show the "Greater Two TB Volume Support" sub-menu, with the options: No, 64bit LBA, and For Windows.
For more details please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
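The 2TB boundary itself comes from 32-bit LBA addressing with 512-byte sectors; a minimal sketch of the arithmetic (general background, not Areca-specific):

# 32-bit LBA with 512-byte sectors addresses at most 2 TiB.
sector_bytes = 512
max_sectors_32bit = 2 ** 32
limit = sector_bytes * max_sectors_32bit
print(limit, "bytes =", limit / 2 ** 40, "TiB")    # 2199023255552 bytes = 2.0 TiB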
6.5 RaidSet Functions
Use the “Raid Set Functions” and “Volume Set Functions” if you prefer to customize your system. Manual configuration can provide full control of the RAID set settings, but it will take longer to complete than the “Quick Volume/Raid Setup” configuration. Select the
“Raid Set Functions” to manually configure the RAID set for the first time or delete and reconfigure existing RAID sets. (A RAID set is a group of disks containing one or more volume sets.)
6.5.1 Create Raid Set
To create a RAID set, click on the “Create Raid Set” link. A “Select The Drive For RAID Set” screen will be displayed showing the drive(s) connected to the current controller. Click on the selected physical drives within the current RAID set. The default RAID set name will always appear as “Raid Set. #”.
Click the “Confirm The Operation” check box and click on the
“Submit” button on the screen; the RAID set will start to initialize.
6.5.2 Delete Raid Set
To delete a RAID set, click on the "Delete Raid Set" link. The "Select The Raid Set To Delete" screen is displayed showing all existing RAID sets in the current controller. Click the RAID set number you wish to delete in the select column on the delete screen. Click the "Confirm The Operation" check box and click on the "Submit" button in the screen to delete it.
6.5.3 Expand Raid Set
Instead of deleting a RAID set and recreating it with additional disk drives, the “Expand Raid Set” function allows the users to add disk drives to the RAID set that have already been created.
To expand a RAID set:
Select the “Expand Raid Set” option. If there is an available disk, then the “Select SATA Drives For Raid Set Expansion” screen appears.
Select the target RAID set by clicking on the appropriate radio button. Select the target disk by clicking on the appropriate check box.
Press the Yes key to start the expansion on the RAID set.
The new additional capacity can be utilized by one or more volume sets. The volume sets associated with this RAID set appear so that you have the chance to modify the RAID level or stripe size. Follow the instructions in "Modify Volume Set" to modify the volume sets; operating system specific utilities may be required to expand operating system partitions.
Note:
1. Once the “Expand Raid Set” process has started, user can not stop it. The process must be completed.
2. If a disk drive fails during raid set expansion and a hot spare is available, an auto rebuild operation will occur after the RAID set expansion completes.
6.5.4 Activate Incomplete Raid Set
If one of the disk drives is removed while the power is off, the RAID set state will change to "Incomplete State". If the user wants to continue to use the SATA RAID controller, the user can use the "Activate Raid Set" option to activate the RAID set. After the user completes this function, the RAID set state will change to "Degraded Mode".
To activate the incomplete the RAID set, click on the “Activate
Raid Set” link. A “Select The RAID SET To Activate” screen is displayed showing all RAID sets existing on the current controller.
Click the RAID set number to activate in the select column.
Click on the “Submit” button on the screen to activate the RAID set that had a disk removed (or failed) in the power off state. The
SATA RAID controller will continue to work in degraded mode.
6.5.5 Create Hot Spare
When you choose the “Create Hot Spare” option in the “Raid Set
Functions”, all unused physical devices connected to the current controller appear. Select the target disk by clicking on the appropriate check box. Click the “Confirm The Operation” check box and click the “Submit” button in the screen to create the hot spares.
The “Create Hot Spare” option gives you the ability to define a global hot spare.
6.5.6 Delete Hot Spare
Select the target Hot Spare disk to be deleted by clicking on the appropriate check box.
Click the “Confirm The Operation” check box and click the “Submit” button on the screen to delete the hot spares.
6.5.7 Rescue Raid Set
If the system is powered off during the RAID set update/creation period, the RAID set configuration could be lost due to this abnormal condition. The "RESCUE" function can recover the missing RAID set information. The RAID controller uses the time as the RAID set signature, so the RAID set may show a different time after it is recovered. The "SIGNAT" function can regenerate the signature for the RAID set.
6.5.8 Offline Raid Set
This function allows the user to unmount and remount a multi-disk volume. All HDDs of the selected RAID set will be put into the offline state, spun down, and their fault LEDs set to fast blinking mode. The user can remove those HDDs and insert new HDDs into those empty slots without needing to power down the controller.
6.6 Volume Set Functions
A volume set is seen by the host system as a single logical device.
It is organized in a RAID level with one or more physical disks.
RAID level refers to the level of data performance and protection of a volume set. A volume set capacity can consume all or a portion of the disk capacity available in a RAID set. Multiple volume sets can exist on a group of disks in a RAID set. Additional volume sets created in a specified RAID set will reside on all the physical disks in the RAID set. Thus each volume set on the RAID set will have its data spread evenly across all the disks in the RAID set.
6.6.1 Create Volume Set
1. Volume sets of different RAID levels may coexist on the same
RAID set.
2. Up to 16 volume sets can be created by the SATA RAID controller.
WEB BROWSER-BASED CONFIGURATION
3. The maximum addressable size of a single volume set is not limited to 2 TB because the controller is capable of 64-bit mode.
However, the operating system itself may not be capable of addressing more than 2 TB. See the Areca website for details.
To create a volume set on a RAID set, move the cursor bar to the main menu and click on the “Create Volume Set” link. This “Select
The Raid Set To Create On It” screen will show all RAID set numbers. Click the RAID set number that is to be used and then click the "Submit" button.
The "Enter Volume Attribute" screen allows users to select the volume name, capacity, RAID level, stripe size, SCSI ID/LUN, cache mode, and tag queuing.
•
Volume Name
The default volume name will always appear as “Volume Set.
#”. You can rename the volume set providing it does not exceed the 15 characters limit.
•
Raid Level
Set the RAID level for the volume set. Highlight the desired
RAID Level and press Enter key. The available RAID levels for the current volume set are displayed. Select a RAID level and press "Enter" to confirm.
•
Capacity
The maximum volume size is the default initial setting. Enter the appropriate volume size to fit your application.
•
Greater Two TB Volume Support
If volume capacity exceeds 2TB, controller will show the
"Greater Two TB Volume Support" sub-menu, with the options: No, 64bit LBA, and For Windows.
For more details please download the PDF file from ftp://ftp.areca.com.tw/RaidCards/Documents/Manual_Spec/Over2TB_050721.zip
•
Initialization Mode
Press Enter key to define “Background Initialization”,
“Foreground Initialization” or “No Init (To Rescue Volume)”.
With "Background Initialization", the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes; the operating system can instantly access the newly created arrays without requiring a reboot or waiting for the initialization to complete. With "Foreground Initialization", the initialization must be completed before the volume set is ready for system access. No initialization takes place when you select the "No Init" option; "No Init" is for rescuing a volume without losing the data on the disks.
•
Stripe Size
This parameter sets the size of the stripe written to each disk in a RAID level 0, 1, 10, 5 or 6 logical drive. You can set the stripe size to 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB.
A larger stripe size produces better read performance, especially if your computer does mostly sequential reads. However, if you are sure that your computer does random reads more often, select a smaller stripe size; a small sketch after this list illustrates the effect.
Note: The cache stripe size cannot be modified for RAID level 3.
•
Cache Mode
The SATA RAID controller supports “Write Through” and “Write
Back” cache.
•
SCSI Channel/SCSI ID/SCSI Lun
SCSI Channel: The SATA RAID controller function is simulated as a SCSI RAID controller. The host bus is represented as a
SCSI channel. Choose the SCSI Channel.
SCSI ID: Each SCSI device attached to the SCSI card, as well as the card itself, must be assigned a unique SCSI ID number. A SCSI channel can connect up to 15 devices. The SATA RAID controller is a large SCSI device. Assign an ID from the list of SCSI IDs.
SCSI LUN: Each SCSI ID can support up to 8 LUNs. Most SATA
RAID controllers treat each LUN like a SATA disk.
•
Tag Queuing
The Enabled option is useful for enhancing overall system performance under multi-tasking operating systems. The Command Tag (Drive Channel) function controls the SCSI command tag queuing support for each drive channel. This function should normally remain enabled. Disable this function only when using older SATA drives that do not support command tag queuing.
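To make the stripe-size trade-off concrete, here is a minimal sketch (a simplified layout that ignores parity rotation; all numbers are hypothetical) of which member disks a logical I/O touches for a given stripe size:

# Simplified striping (parity rotation ignored); all numbers are hypothetical.
def disks_touched(offset_kb, length_kb, stripe_kb, disks):
    first = offset_kb // stripe_kb
    last = (offset_kb + length_kb - 1) // stripe_kb
    return {stripe % disks for stripe in range(first, last + 1)}

print(disks_touched(0, 256, stripe_kb=64, disks=4))   # {0, 1, 2, 3}: a sequential read spans all disks
print(disks_touched(0, 16, stripe_kb=64, disks=4))    # {0}: a small random read stays on one disk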
6.6.2 Delete Volume Set
To delete a volume set from RAID set, move the cursor bar to the main menu and click on the “Delete Volume Set” link. The “Select
The Raid Set To Delete” screen will show all RAID set numbers.
Click a RAID set number and the “Confirm The Operation” check box and then click the “Submit” button to show all volume set items in the selected RAID set. Click a volume set number and the “Confirm The Operation” check box and then click the “Submit” button to delete the volume set.
6.6.3 Modify Volume Set
To modify a volume set from a RAID set:
(1). Click on the “Modify Volume Set” link.
(2). Click the volume set check box from the list that you wish to modify, then click the "Submit" button. The following screen appears.
Use this option to modify the volume set configuration. To modify volume set attributes, move the cursor bar to the volume set attribute menu and click it. The “Enter The Volume Attribute” screen appears. Move the cursor to an attribute item and then click the attribute to modify the value. After you complete the modification, click the “Confirm The Operation” check box and click the
“Submit” button to complete the action. The user can modify all values except capacity.
6.6.3.1 Volume Growth
Use the "Expand Raid Set" function to add disks to a RAID set. The additional capacity can be used to enlarge the last volume set size or to create another volume set. The "Modify Volume Set" function supports this volume modification. To expand the last volume set capacity, move the cursor bar to the "Capacity" item and enter the capacity size. When the above action is finished, press the ESC key and select the Yes option to complete the action. The last volume set starts to expand its capacity.
Note the following when expanding an existing volume:
• Only the last volume set can expand capacity.
• When expanding volume capacity, you cannot modify the stripe size or the RAID level at the same time.
• You can expand volume capacity, but cannot reduce it.
• After volume expansion, the volume capacity cannot be decreased.
For expansion beyond 2TB:
• If your operating system is installed on the volume, do not expand the volume capacity beyond 2TB; current operating systems cannot boot from a device with a capacity greater than 2TB.
• Expansion over 2TB uses LBA64 mode. Please make sure your OS supports LBA64 before expanding.
6.6.3.2 Volume Set Migration
Migration occurs when a volume set is migrating from one RAID level to another, when a volume set stripe size changes, or when a disk is added to a RAID set. Migration status is displayed in the volume state area of the "Volume Set Information" screen.
6.6.4 Check Volume Set
To check a volume set from a RAID set:
(1). Click on the “Check Volume Set” link.
(2). Click on the volume set from the list that you wish to check.
Tick on “Confirm The Operation” and click on the “Submit” button.
Use this option to verify the correctness of the redundant data in a volume set. For example, in a system with dedicated parity, volume set check means computing the parity of the data disk drives and comparing the results to the contents of the dedicated parity disk drive. The checking percentage can also be viewed by clicking on “RaidSet Hierarchy” in the main menu.
6.6.5 Stop VolumeSet Check
Use this option to stop the “Check Volume Set function”.
6.7 Physical Drive
Choose this option to select a physical disk from the main menu and then perform the operations listed below.
6.7.1 Create Pass Through Disk
To create pass through disk, move the mouse cursor to the main menu and click on the “Create Pass Through” link. The “Select the
IDE Drive For Pass Through" screen appears. A pass through disk is not controlled by the SATA RAID controller firmware and therefore cannot be a part of a volume set. The disk is available to the operating system as an individual disk. It is typically used on a system where the operating system is on a disk not controlled by the
RAID firmware. The user can also select the Cache Mode, Tagged
Command Queuing, SCSI channel/SCSI_ID/SCSI_LUN for this pass through disk.
6.7.2 Modify Pass Through Disk
Use this option to modify the “Pass Through Disk Attribute”. The user can modify the Cache Mode, Tagged Command Queuing, and SCSI channel/ID/LUN on an existing pass through disk.
To modify the pass through drive attributes from the pass through drive pool, move the mouse cursor bar and click on the "Modify Pass Through" link. The "Select The Pass Through Disk For Modification" screen appears. Mark the checkbox for the pass through disk from the pass through drive pool and click on the "Submit" button to select the drive.
When the “Enter Pass Through Disk Attribute” screen appears, modify the drive attribute values, as you want.
After you complete the selection, mark the check box for “Confirm
The Operation” and click on the “Submit” button to complete the selection action.
6.7.3 Delete Pass Through Disk
To delete a pass through drive from the pass through drive pool, move the mouse cursor bar to the main menus and click the “Delete Pass Through” link.
After you complete the selection, mark the check box for “Confirm The Operation” and click the “Submit” button to complete the delete action.
6.7.4 Identify Selected Drive
To prevent removal of the wrong drive, the fault LED of the selected drive will blink to physically locate the intended disk when "Identify Selected Drive" is selected.
To identify the selected drive from the drive pool, click "Identify Selected Drive". The "Select The IDE Device For Identification" screen appears. Mark the check box for the SATA device from the drive pool. After completing the selection, click on the "Submit" button to identify the selected drive.
6.8 System Controls
6.8.1 System Config
To set the RAID system configuration, move the cursor to the main menu and click the "System Controls" link. The "System Controls" menu will show all items; select the desired function.
• System Beeper Setting
The “System Beeper Setting” function item is used to “Disable” or “Enable” the SATA RAID controller alarm tone generator.
• Background Task Priority
The "Background Task Priority" is a relative indication of how much time the controller devotes to a rebuild operation. The SATA RAID controller allows the user to choose the rebuild priority (UltraLow, Low, Normal, High) to balance volume set access and rebuild tasks appropriately. For high array performance, specify a "Low" value.
• JBOD/RAID Configuration
JBOD is an acronym for "Just a Bunch Of Disks". A group of hard disks in a RAID controller that are not set up in any RAID configuration; all drives are available to the operating system as individual disks. JBOD does not provide data redundancy. The user needs to delete the RAID set when changing from the RAID to the JBOD function.
• Maximum SATA Supported
The SATA RAID controller supports up to SATA II, which runs at up to 300MB/s. NCQ is a command protocol in Serial ATA that can only be implemented on native Serial ATA hard drives. It allows multiple commands to be outstanding within a drive at the same time. Drives that support NCQ have an internal queue where outstanding commands can be dynamically rescheduled or re-ordered, along with the necessary tracking mechanisms for outstanding and completed portions of the workload. The RAID subsystem allows the user to choose the SATA mode (slowest to fastest): SATA150, SATA150+NCQ, SATA300, SATA300+NCQ.
• HDD Read Ahead Cache
Allow Read Ahead (Default: Enabled)—When Enabled, the drive’s read ahead cache algorithm is used, providing maximum performance under most circumstances.
• Stagger Power On
In a PC system with only one or two drives, the power supply can spin up both drives simultaneously. But in systems with more than two drives, the startup current from spinning up all of the drives at once can overload the power supply, causing damage to the power supply, disk drives and other system components. This damage can be avoided by allowing the host to stagger the spin-up of the drives. Newer SATA drives support staggered spin-up to boost reliability.
Staggered spin-up is a very useful feature for managing multiple disk drives in a storage subsystem. It gives the host the ability to spin up the disk drives sequentially or in groups, allowing the drives to come ready at the optimum time without straining the system power supply. Staggering drive spin-up in a multiple drive environment also avoids the extra cost of a power supply designed to meet short-term startup power demand as well as steady state conditions.
The SATA RAID controller includes an option to select the staggered power-up interval. The value can be selected from 0.4s to 6s per step, where each step powers up one drive.
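For example, assuming a 0.7s step (a hypothetical setting) in a 16-drive system, the last drive would begin spinning up roughly 15 x 0.7 = 10.5 seconds after the first.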
• Empty HDD Slot LED
The firmware includes the "Empty HDD Slot LED" option to set the fault LED "ON" or "OFF" when no HDD is installed in a slot. When each slot has a power LED that identifies an installed HDD, the user can set this option to "OFF". If the option is set to "ON", the SATA RAID controller will light the fault LED for any slot with no HDD installed.
• HDD SMART Status Polling
An external RAID enclosure has a hardware monitor in its dedicated backplane that can report HDD temperature status to the controller. However, PCI-type controllers do not use backplanes when the drives are internal to the main server chassis, and this type of installation cannot report the HDD temperature to the controller. For this reason, the "HDD SMART Status Polling" function was added to enable scanning of the HDD temperature. It is necessary to enable the "HDD SMART Status Polling" function before SMART information is accessible. This function is disabled by default.
The following screen shot shows how to change the setting to enable the polling function.
• Disk Write Cache Mode
A user can set the "Disk Write Cache Mode" to Auto, Enabled, or Disabled.
• Disk Capacity Truncation Mode
The SATA RAID controller uses drive truncation so that drives from different vendors are more likely to be usable as spares for each other. Drive truncation slightly decreases the usable capacity of a drive that is used in redundant units.
The controller provides three truncation modes in the system configuration: “Multiples Of 10G”, “Multiples Of 1G”, and “No
Truncation”.
Multiples Of 10G: If you have 120 GB drives from different vendors, chances are that the capacities vary slightly. For example, one drive might be 123.5 GB and the other 120 GB. "Multiples Of 10G" truncates the capacity down to the nearest multiple of 10 GB, giving both drives the same capacity so that one could replace the other.
Multiples Of 1G: If you have 123 GB drives from different vendors, chances are that the capacities vary slightly. For example, one drive might be 123.5 GB and the other 123.4 GB. "Multiples Of 1G" truncates the fractional part, giving both drives the same capacity so that one could replace the other.
No Truncation: It does not truncate the capacity.
6.8.2 Ethernet Configuration (12/16/24-port)
Use this feature to set the controller Ethernet port configuration.
A customer does not need to create reserved space on the arrays before the Ethernet port and HTTP service are working. The firmware-embedded web browser-based RAID storage manager can be accessed from any standard internet browser, or from any host computer either directly connected or via a LAN or WAN, with no software or patches required.
DHCP (Dynamic Host Configuration Protocol) is a protocol that lets network administrators centrally manage and automate the assignment of IP (Internet Protocol) configurations on a computer network. When using the internet's set of protocols (TCP/IP), in order for a computer system to communicate with another computer system, it needs a unique IP address. Without DHCP, the IP address must be entered manually at each computer system.
DHCP lets a network administrator supervise and distribute IP addresses from a central point. The purpose of DHCP is to provide the automatic (dynamic) allocation of IP client configurations for a specific time period (called a lease period) and to eliminate the work necessary to administer a large IP network.
To configure the RAID controller Ethernet port, move the cursor bar to the main menu and click on the “System Controls” link.
The “System Controls” menu will show all items. Move the cursor bar to the “EtherNet Config” item, then press “Enter” key to select the desired function.
6.8.3 Alert by Mail Configuration (12/16/24-port)
To configure the SATA RAID controller e-mail function, move the cursor bar to the main menu and click on the “System Controls” link. The “System Controls” menu will show all items. Move the cursor bar to the “Alert By Mail Config” item, then select the desired function. This function can only be set via web-based configuration.
The firmware contains an SMTP manager that monitors all system events. Single or multiple user notifications can be sent via "plain English" e-mails with no software required.
6.8.4 SNMP Configuration (12/16/24-port)
To configure the RAID controller SNMP function, click on the "System Controls" link. The "System Controls" menu will show the available items. Select the "SNMP Configuration" item. This function can only be set via web-based configuration.
The firmware SNMP agent manager monitors all system events and the SNMP function becomes functional with no agent software required.
• SNMP Trap Configurations
Enter the SNMP Trap IP Address.
• SNMP System Configurations
For the community name, please refer to the SNMP community name description in Appendix C. The system Contact, Name and Location entered here will be shown in the outgoing SNMP trap.
• SNMP Trap Notification Configurations
Please refer to the Event Notification Table in Appendix D.
6.8.5 NTP Configuration (12/16/24-port)
The "Network Time Protocol (NTP)" is used to synchronize the time of a computer client or server to another server or reference time source, such as a radio or satellite receiver or modem. It provides accuracies typically within a millisecond on LANs and up to a few tens of milliseconds on WANs, relative to Coordinated Universal Time (UTC) obtained, for example, via a Global Positioning Service (GPS) receiver.
• NTP Server Address
The most important factor in providing accurate, reliable time is the selection of NTP servers to be used in the configuration file.
Typical NTP configurations utilize multiple redundant servers and diverse network paths in order to achieve high accuracy and reliability. Our NTP configuration supports two existing public NTP synchronization subnets.
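For example, a publicly reachable time source such as a server from the pool.ntp.org project (named here only as an illustration; choose servers appropriate to your own network policy) can be entered as the NTP server address.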
• Time Zone
The "Time Zone" conveniently runs in the system tray and allows you to view the date and time in various locations around the world easily. You are also able to add your own personal locations to customize time zone the way you want with great ease and less hassle.
• Automatic Daylight Saving
The “Automatic Daylight Saving” will normally attempt to automatically adjust the system clock for daylight saving changes based on the computer time zone. This tweak allows you to disable the automatic adjustment.
Note:
The NTP feature works through the onboard Ethernet port, so you must make sure that the onboard Ethernet port is connected.
6.8.6 View Events/Mute Beeper
To view the SATA RAID controller’s information, click on the “View
Events/Mute Beeper” link. The SATA RAID controller “System
Events information” screen appears.
Choose this option to view the system events information: Timer, Device, Event Type, Elapsed Time and Errors. The RAID system does not have a built-in real-time clock; the time information is the time elapsed since the SATA RAID controller was powered on.
6.8.7 Generate Test Event
This feature is used to generate events for testing purposes.
6.8.8 Clear Events Buffer
Use this feature to clear the entire events buffer information.
6.8.9 Modify Password
To set or change the SATA RAID controller password, select “Modify Password” from the menu and click on the “Modify Password” link. The “Modify System Password” screen appears.
The manufacturer's default password is set to 0000. The password option allows the user to set or clear the SATA RAID controller's password protection feature. Once the password has been set, the user can only monitor and configure the SATA RAID controller by providing the correct password.
The password is used to protect the SATA RAID controller from unauthorized entry. The controller will check the password only when entering the main menu from the initial screen. The SATA
RAID controller will automatically go back to the initial screen when it does not receive any command in ten seconds.
To disable the password, leave the fields blank. Once the user confirms the operation and clicks the "Submit" button, the existing password will be cleared. After that, no password checking will occur when entering the main menu from the starting screen.
6.8.10 Update Firmware
Please refer to the appendix A Upgrading Flash ROM Update Process.
6.9 Information
6.9.1 RaidSet Hierarchy
Use this feature to view the SATA RAID controller's current RAID set, volume set and physical disk configuration. Please refer to the "Configuring Raid Sets and Volume Sets" section of this chapter.
6.9.2 System Information
To view the SATA RAID controller’s information, move the mouse cursor to the main menu and click on the “System Information” link. The “Raid Subsystem Information” screen appears.
Use this feature to view the SATA RAID controller’s information.
The controller name, firmware version, serial number, main processor, CPU data/instruction cache size and system memory size/ speed appear in this screen.
6.9.3 Hardware Monitor
To view the RAID controller’s hardware monitor information, move the mouse cursor to the main menu and click the “Hardware Monitor” link. The “Hardware Monitor Information” screen appears.
The “Hardware Monitor Information” provides the temperature, and fan speed (I/O Processor fan) of the SATA RAID controller.
The ARC-1231/1261/1280/1280ML card interface lists two temperatures: one for the I/O processor and the other for the controller. The I/O processor temperature is a new feature, detected by a thermal sensor under the IOP341. The processor safe limit is 90 degrees Celsius and the controller safe limit is 70 degrees Celsius. If any sensor detects a temperature above these safe limits, you will get a warning event.
Appendix A
Upgrading Flash ROM Update Process
Since the SATA RAID controller features flash firmware, it is not necessary to change the hardware flash chip in order to upgrade the RAID firmware. The user can simply re-program the old firmware through the In-Band PCI-X/PCIe bus or the Out-of-Band Ethernet port McRAID storage manager. New releases of the firmware are available in the form of DOS files on the shipped CD or on Areca's web site. The files available at the FTP site for each model contain the following files in each version:
ARCXXXXNNNN.BIN: Software Binary Code (where "XXXX" refers to the model name and "NNNN" refers to the software code type)
ARCXXXXBIOS.BIN: PCI card BIOS for the system board
ARCXXXXBOOT.BIN: RAID controller hardware initialization
ARCXXXXFIRM.BIN: RAID kernel program
ARCXXXXMBR0.BIN: Master Boot Record for supporting the Dual Flash Image in the SATA II RAID controller
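For example, for an ARC-1260 the package would contain files named along the lines of ARC1260BIOS.BIN, ARC1260BOOT.BIN, ARC1260FIRM.BIN and ARC1260MBR0.BIN (the model number is substituted for "XXXX"; the exact file names and version suffixes vary by release, so treat these names as illustrative only).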
The README.TXT file in the main directory contains the revision history of the software code changes. Read this file first to make sure you are upgrading to the proper binary file. Select the right file for the upgrade. Normally, the user upgrades ARCXXXXBIOS.BIN for system motherboard compatibility and ARCXXXXFIRM.BIN for RAID function upgrades.
Note:
Please update all binary code (BIOS, BOOT, FIRM and MBR0) before you reboot the system. Otherwise, a mixed firmware package may hang the controller.
Upgrading Firmware Through McRAID Storage Manager
Get the new firmware version for your SATA RAID controller. For example, download the bin file from your OEM's web site onto the C: drive.
1. To upgrade the RAID controller firmware, move the mouse cursor to “Upgrade Firmware” link. The “Upgrade The Raid System
Firmware” screen appears.
2. Click "Browser". Look in the location to which the firmware upgrade software was downloaded. Select the file name and click
“Open”. All files (BIOS, BOOT, FIRM and MBR0) can be updated through this function.
3. Click “Confirm The Operation” and press the “Submit” button.
4. The web browser begins to download the firmware binary to the controller and starts to update the flash ROM.
5. After the firmware upgrade is complete, a bar indicator will show "Firmware Has Been Updated Successfully".
6. After the new firmware has finished downloading, restart the controller/computer at a convenient time for the new firmware to take effect.
The web browser-based McRAID storage manager can be accessed through the In-Band PCI-X/PCIe bus or Out-of-Band LAN port.
The In-Band method uses the ArcHttp proxy server to launch the
McRAID storage manager. The Out-of-Band method allows local or remote access to the McRAID storage manager from any standard internet browser via a LAN or WAN, with no software or patches required.
For controllers with an onboard LAN port, you can plug an Ethernet cable directly into the controller LAN port and then enter the McBIOS RAID manager to configure the network settings. After the network settings have been configured and saved, you can find the current IP address on the "System Information" page.
From a remote PC, you can directly open a web browser and enter the IP address. Then enter user name and password to login and start your management. You can find the firmware update feature in the browser console: "System Controls" option.
Upgrading Firmware Through nflash DOS Utility
Areca offers an alternative method of communication for the SATA RAID controller: upgrading all of the files (BIOS, BOOT, FIRM and MBR0) without the system needing to start up and run the ArcHttp proxy server. The nflash utility program is a DOS application which runs in the DOS operating system. Make sure the system boots into DOS so that the nflash DOS utility can communicate properly with the SATA RAID controller. Please make a bootable DOS floppy diskette or USB device from another Windows operating system and boot the system from that bootable device.
• Starting the nflash Utility
You do not need to short any jumper cap to run the nflash utility. The nflash utility provides an on-line table of contents and brief descriptions of the help sub-commands. The nflash utility is located in the <CD-ROM>\Firmware directory. You can run <nflash> without arguments to get more detailed information about command usage.
Typical output looks as below:
A:\>nflash
Raid Controller Flash Utility
V1.11 2007-11-8
Command Usage:
NFLASH FileName
NFLASH FileName /cn --> n=0,1,2,3 write binary to controller#n
FileName May Be ARC1110FIRM.BIN or ARC1110*
For ARC1110* Will Expand To ARC1110BOOT/FIRM/BIOS.BIN
A:\>nflash arc1110FIRM.BIN
Raid Controller Flash Utility
V1.11 2007-11-8
MODEL : ARC-1110
MEM FE620000 FE7FF000
File ARC1110FIRM.BIN : >>*** => Flash OK
Note:
Areca SAS and SATA II RAID controller firmware version 1.43 (dated Feb. 2007) and later supports the ATA-8 specification for HDD microcode download. This allows customers to use the nflash DOS utility or the web browser to upgrade the firmware of ATA-8 microcode-download-capable HDDs connected to any of Areca's family of RAID controllers, without needing to remove a single drive for the upgrade. Areca provides a utility for customers to convert the drive's microcode-download firmware into a format readable by the Areca firmware.
Appendix B
Battery Backup Module (ARC6120-BAT-Txx)
The SATA RAID controller operates using cache memory. The Battery Backup Module is an add-on module that provides power to the SATA RAID controller cache memory in the event of a power failure. The Battery Backup Module monitors the write back cache on the SATA RAID controller, and provides power to the cache memory if it contains data not yet written to the hard drives when power failure occurs.
BBM Components
Status of BBM
• D13 (Green) : lights when BBM activated
• D14 (Red) : lights when BBM charging
• D15 (Green) : lights when BBM normal
Installation
1. Make sure all power to the system is disconnected.
2. Connector J1 is available for the optional battery backup module. Connect the BBM cable to the 12-pin battery connector on the controller.
3. Integrators may provide pre-drilled holes in their cabinet for securing the BBM using its three mounting positions.
Battery Backup Capacity
Battery backup capacity is defined as the maximum duration of a power failure for which data in the cache memory can be maintained by the battery. The BBM's backup capacity varies with the memory chips installed on the SATA RAID controller.
Capacity: 128MB DDR
Memory Type: Low Power (18mA)
Battery Backup Duration: 56 hours
Operation
1. Battery conditioning is automatic. There are no manual procedures for battery conditioning or preconditioning to be performed by the user.
2. To make sure the full capacity of the battery cells is available, allow the battery cells to fully charge when installed for the first time. The first charge of a battery cell takes about 24 hours to complete.
Changing the Battery Backup Module
At some point, the LI-ION battery will no longer accept a charge properly. LI-ION battery life expectancy is anywhere from approximately 1 to 5 years.
1. Shutdown the operating system properly. Make sure that cache memory has been flushed.
2. Disconnect the BBM cable from J2 on the RAID controller.
3. Disconnect the battery pack cable from JP2 on the BBM.
4. Install a new battery pack and connect the new battery pack to JP2.
5. Connect the BBM to J2 on the SATA RAID controller.
6. Disable the write-back function from the McBIOS or Utility.
Note:
Do not remove BBM while system is running.
Battery Functionality Test Procedure:
1. Write an amount of data into the controller volume, about 5GB or bigger.
2. Wait for a few seconds, then fail the system power by removing the power cable.
3. Check the battery status: make sure that D13 is brightly lit and the battery beeps every few seconds.
4. Power on the system and press Tab/F6 to log in to the controller.
5. Check the controller event log and make sure the event shows that the controller booted up with power recovered.
BBM Specifications
Mechanical
• Module Dimension (W x H x D)
37.3 x 13 x 81.6 mm
• BBM Connector
2 * 6 box header
Environmental
• Operating Temperature: -25°C to +60°C
• Operating Humidity: 45-85%, non-condensing
• Storage Temperature: -40°C to +85°C
• Storage Humidity: 45-85%, non-condensing
Electrical
• Input Voltage
+3.6VDC
• On Board Battery Capacity
1100mAH (1*1100mAH)
Appendix C
SNMP Operation & Definition
Overview
The McRAID storage manager includes a firmware-embedded Simple Network Management Protocol (SNMP) agent and SNMP Extension Agent for the SATA RAID controller. An SNMP-based management application (also known as an SNMP manager) can monitor the disk array. An example of an SNMP management application is Hewlett-Packard's OpenView. The SNMP Extension Agent can be used to augment the SATA RAID controller if you are already running an SNMP management application at your site.
SNMP Definition
SNMP, an IP-based protocol, has a set of commands for getting the status of target devices. The SNMP management platform is called the SNMP manager, and the managed devices have the SNMP agent loaded. Management data is organized in a hierarchical data structure called the Management Information Base (MIB). These
MIBs are defined and sanctioned by various industry associations. The objective is for all vendors to create products in compliance with these MIBs so that inter-vendor interoperability can be achieved. If a vendor wishes to include additional device information that is not specified in a standard MIB, then that is usually done through MIB extensions.
MIB Compilation and Definition File Creation
Before the manager application accesses the SATA RAID controller, it is necessary to integrate the MIB into the management application’s database of events and status indicator codes. This process is known as compiling the MIB into the application. This process is highly vendor-specific and should be well-covered in the User’s
Guide of your SNMP application. Ensure the compilation process successfully integrates the contents of the ARECARAID.MIB file into the traps database.
SNMP Installation
The installation of the SNMP manager is accomplished in several phases:
• Starting the firmware-embedded SNMP community configuration.
• Installing the SNMP Extension Agent on the server.
• Installing the SNMP manager software on the client.
• Placing a copy of the Management Information Base (MIB) in a directory which is accessible to the management application.
• Compiling the MIB description file with the management application.
Starting the SNMP Function Setting
• Community Name
The community name acts as a password to screen access to the SNMP agent of a particular network device. Type in the community name of the SNMP agent. Before access is granted to a requesting station, that station must incorporate a valid community name into its request; otherwise, the SNMP agent will deny access to the system.
Most network devices use "public" as the default for their community names. This value is case-sensitive.
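As a quick connectivity check (this assumes the Net-SNMP command line tools are installed on a management host and that the controller's LAN port answers at 192.168.0.100, both of which are only example assumptions), a walk of the agent with the configured community name should return the controller's objects:
C:\> snmpwalk -v 2c -c public 192.168.0.100
If the community name does not match the one configured in the firmware, the agent denies the request and the walk times out.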
SNMP Extension Agent Installation for Windows
You must have the administrative level permission to install SATA
RAID software. This procedure assumes that the SATA RAID hardware and Windows are both installed and operational in your system.
To enable the SNMP agent for Windows, configure Windows for
TCP/IP and SNMP services. The Areca SNMP Extension Agent file
is ARCSNMP.DLL.
Screen captures in this section are taken from a Windows XP installation. If you are running another version of Windows, your screens may look different, but the Areca SNMP Extension Agent installation is essentially the same.
1. Insert the SATA RAID controller CD in the CD-ROM drive.
2. Run the setup.exe file that resides at <CD-ROM>\packages\windows\http\setup.exe on the CD-ROM. (If the SNMP service is not installed, please install the SNMP service first.)
3. Click on the “Setup.exe” file then the welcome screen appears.
4. Click the “Next” button and then the “Ready Install the Program” screen appears. Follow the on-screen prompts to complete
Areca SNMP Extension Agent installation.
5. A progress bar appears that measures the progress of the Areca SNMP Extension Agent setup. When this completes, the Areca SNMP Extension Agent setup is finished.
6. After a successful installation, the “Installshield Wizard Completed” dialog box of the installation program is displayed. Click the “Finish” button to complete the installation.
Starting SNMP Trap Notification Configurations
To start the "SNMP Trap Notification Configurations", there are two methods. First, double-click on the "Areca Raid Controller". Second, you may also use the "Taskbar Start/Programs/Areca Technology Corp/ArcSnmpConf" menu shown below.
SNMP Community Configurations
Please refer to the community name in this appendix.
SNMP Trap Notification Configurations
The "Community Name" should be the same as the firmware-embedded SNMP community. The "SNMP Trap Notification Configurations" include level 1: Serious, level 2: Error, level 3: Warning and level 4: Information. Level 4 covers notification events such as initialization of the controller and initiation of the rebuilding process; level 3 includes events which require the issuance of warning messages; level 2 covers error events which are reported once they have happened; level 1 is the highest level and covers events that need immediate attention (and action) from the administrator.
SNMP Extension Agent Installation for Linux
You must have administrative level permission to install SATA RAID software. This procedure assumes that the SATA RAID hardware and Linux are installed and operational in your system.
For the SNMP Extension Agent Installation for Linux procedure, please refer to <CD-ROM>\packages\Linux\SNMP\Readme or download from http://www.areca.com.tw
SNMP Extension Agent Installation for FreeBSD
You must have administrative level permission to install SATA
RAID software. This procedure assumes that the SATA RAID hardware and FreeBSD are installed and operational in your system. For the SNMP Extension Agent Installation for FreeBSD procedure please refer to <CD-ROM>\packages\FreeBSD\
SNMP\Readme or download from http://www.areca.com.tw
Appendix D
Event Notification Configurations
The controller classifies disk array events into four levels depending on their severity: level 1: Urgent, level 2: Serious, level 3: Warning and level 4: Information. Level 4 covers notification events such as initialization of the controller and initiation of the rebuilding process; level 2 covers serious events which are reported once they have happened; level 3 includes events which require the issuance of warning messages; level 1 is the highest level and covers events that need immediate attention (and action) from the administrator. The following lists sample events for each level:
A. Device Event

Event | Level | Meaning | Action
Device Inserted | Warning | HDD inserted |
Device Removed | Warning | HDD removed |
Reading Error | Warning | HDD reading error | Keep watching the HDD status; it may be caused by noise or an unstable HDD.
Writing Error | Warning | HDD writing error | Keep watching the HDD status; it may be caused by noise or an unstable HDD.
ATA Ecc Error | Warning | HDD ECC error | Keep watching the HDD status; it may be caused by noise or an unstable HDD.
Change ATA Mode | Warning | HDD change ATA mode | Check the HDD connection.
Time Out Error | Warning | HDD time out | Keep watching the HDD status; it may be caused by noise or an unstable HDD.
Device Failed | Urgent | HDD failure | Replace HDD.
PCI Parity Error | Serious | PCI parity error | If it happens only once, it may be caused by noise. If it happens repeatedly, please check the power supply or contact us.
Device Failed (SMART) | Urgent | HDD SMART failure | Replace HDD.
PassThrough Disk Created | Inform | Pass Through Disk created |
PassThrough Disk Modified | Inform | Pass Through Disk modified |
PassThrough Disk Deleted | Inform | Pass Through Disk deleted |
B. Volume Event

Event | Level | Meaning | Action
Start Initialize | Warning | Volume initialization has started |
Start Rebuilding | Warning | Volume rebuilding has started |
Start Migrating | Warning | Volume migration has started |
Start Checking | Warning | Volume parity checking has started |
Complete Init | Warning | Volume initialization completed |
Complete Rebuild | Warning | Volume rebuilding completed |
Complete Migrate | Warning | Volume migration completed |
Complete Check | Warning | Volume parity checking completed |
Create Volume | Warning | New volume created |
Delete Volume | Warning | Volume deleted |
Modify Volume | Warning | Volume modified |
Volume Degraded | Urgent | Volume degraded | Replace HDD.
Volume Failed | Urgent | Volume failure |
Failed Volume Revived | Urgent | Failed volume revived |
Abort Initialization | Warning | Initialization has been aborted |
Abort Rebuilding | Warning | Rebuilding aborted |
Abort Migration | Warning | Migration aborted |
Abort Checking | Warning | Parity check aborted |
Stop Initialization | Warning | Initialization stopped |
Stop Rebuilding | Warning | Rebuilding stopped |
Stop Migration | Warning | Migration stopped |
Stop Checking | Warning | Parity check stopped |
C. RAID Set Event

Event | Level | Meaning | Action
Create RaidSet | Warning | New raidset created |
Delete RaidSet | Warning | Raidset deleted |
Expand RaidSet | Warning | Raidset expanded |
Rebuild RaidSet | Warning | Raidset rebuilding |
RaidSet Degraded | Urgent | Raidset degraded | Replace HDD.
D. Hardware Monitor Event

Event | Level | Meaning | Action
DRAM 1-Bit ECC | Urgent | DRAM 1-bit ECC error | Check the DRAM.
DRAM Fatal Error | Urgent | DRAM fatal error encountered | Check the DRAM module and replace with a new one if required.
Controller Over Temperature | Urgent | Abnormally high temperature detected on the controller (over 60 degrees) | Check the air flow and cooling fan of the enclosure, and contact us.
Hdd Over Temperature | Urgent | Abnormally high temperature detected on the HDD (over 55 degrees) | Check the air flow and cooling fan of the enclosure.
Fan Failed | Urgent | Cooling fan # failure or speed below 1700RPM | Check the cooling fan of the enclosure and replace with a new one if required.
Controller Temp. Recovered | Serious | Controller temperature back to normal level |
Hdd Temp. Recovered | Serious | HDD temperature back to normal level |
Raid Power On | Warning | Raid power on |
Test Event | Urgent | Test event |
Power On With Battery Backup | Warning | Raid power on with battery backup |
Incomplete RAID Discovered | Serious | Some RAID set member disks missing before power on | Check the disk information to find out which channel is missing.
HTTP Log In | Serious | An HTTP login detected |
Telnet Log In | Serious | A Telnet login detected |
VT100 Log In | Serious | A VT100 login detected |
API Log In | Serious | An API login detected |
Lost Rebuilding/Migration LBA | Urgent | Some rebuilding/migration raidset member disks missing before power on | Reinsert the missing member disk; the controller will continue the incomplete rebuilding/migration.
Note:
Depending on the model, not every controller will encounter all of the events listed above.
Appendix E
RAID Concept
RAID Set
A RAID set is a group of disks connected to a RAID controller. A
RAID set contains one or more volume sets. The RAID set itself does not define the RAID level (0, 1, 10, 3, 5, 6, etc); the
RAID level is defined within each volume set. Therefore, volume sets are contained within RAID sets and RAID Level is defined within the volume set. If physical disks of different capacities are grouped together in a RAID set, then the capacity of the smallest disk will become the effective capacity of all the disks in the RAID set.
Volume Set
Each volume set is seen by the host system as a single logical device (in other words, a single large virtual hard disk). A volume set will use a specific RAID level, which will require one or more physical disks (depending on the RAID level used). RAID level refers to the level of performance and data protection of a volume set. The capacity of a volume set can consume all or a portion of the available disk capacity in a RAID set. Multiple volume sets can exist in a RAID set. For the SATA RAID controller, a volume set must be created either on an existing RAID set or on a group of available individual disks (disks that are about to become part of a RAID set). If there are pre-existing RAID sets with available capacity and enough disks for the desired RAID level, then the volume set can be created in the existing RAID set of the user’s choice.
In the illustration, volume 1 can be assigned a RAID level 5 of operation while volume 0 might be assigned a RAID level 10 of operation. Alternatively, the free space can be used to create volume 2, which could then be set to use RAID level 5.
Ease of Use Features
• Foreground Availability/Background Initialization
RAID 0 and RAID 1 volume sets can be used immediately after creation because they do not create parity data. However,
RAID 3, 5 and 6 volume sets must be initialized to generate parity information. In background Initialization, the initialization proceeds as a background task, and the volume set is fully accessible for system reads and writes. The operating system can instantly access the newly created arrays without requiring a reboot and without waiting for initialization to complete.
Furthermore, the volume set is protected against disk failures while initializing. If using foreground initialization, the initialization process must be completed before the volume set is ready for system access.
• Online Array Roaming
The SATA RAID controllers store RAID configuration information on the disk drives. The controller therefore protects the configuration settings in the event of controller failure. Array roaming allows the administrators the ability to move a completed RAID set to another system without losing RAID configuration information or data on that RAID set. Therefore, if a server fails, the
RAID set disk drives can be moved to another server with an
Areca RAID controller and the disks can be inserted in any order.
• Online Capacity Expansion
Online Capacity Expansion makes it possible to add one or more physical drives to a volume set without interrupting server operation, eliminating the need to backup and restore after reconfiguration of the RAID set. When disks are added to a RAID set, unused capacity is added to the end of the RAID set. Then, data
on the existing volume sets (residing on the newly expanded RAID set) is redistributed evenly across all the disks. A contiguous block of unused capacity is made available on the RAID set.
The unused capacity can be used to create additional volume sets.
To be added to a RAID set, a disk must be in normal mode (not failed), free (not a spare, not in a RAID set, and not passed through to the host), and it must have at least the same capacity as the smallest disk already in the RAID set.
Capacity expansion is only permitted to proceed if all volumes on the RAID set are in the normal status. During the expansion process, the volume sets being expanded can be accessed by the host system. In addition, volume sets with RAID level 1, 10, 3, 5 or 6 are protected against data loss in the event of disk failure(s). In the case of disk failure, the volume set changes from the "migrating" state to the "migrating+degraded" state. When the expansion is completed, the volume set changes to "degraded" mode. If a global hot spare is present, it then further changes to the "rebuilding" state.
The expansion process is illustrated as following figure.
The SATA RAID controller redistributes the original volume set over the original and newly added disks, using the same fault-tolerance configuration. The unused capacity on the expanded RAID set can then be used to create an additional volume set, with a different fault tolerance setting (if required by the user).
• Online RAID Level and Stripe Size Migration
For those who wish to later upgrade to any RAID capabilities, a system with Areca online RAID level/stripe size migration allows a simplified upgrade to any supported RAID level without having to reinstall the operating system.
The SATA RAID controllers can migrate both the RAID level and stripe size of an existing volume set, while the server is online and the volume set is in use. Online RAID level/stripe size migration can prove helpful during performance tuning activities as well as when additional physical disks are added to the SATA
RAID controller. For example, in a system using two drives in
RAID level 1, it is possible to add a single drive and add capacity and retain fault tolerance. (Normally, expanding a RAID level
1 array would require the addition of two disks). A third disk can be added to the existing RAID logical drive and the volume set can then be migrated from RAID level 1 to 5. The result would be parity fault tolerance and double the available capacity without taking the system down. A fourth disk could be added to migrate to RAID level 6. It is only possible to migrate to a higher RAID level by adding a disk; disks in an existing array can't be reconfigured for a higher RAID level without adding a disk.
Online migration is only permitted to begin if all volumes to be migrated are in the normal mode. During the migration process, the volume sets being migrated are accessed by the host system. In addition, the volume sets with RAID level 1, 10, 3, 5 or
6 are protected against data loss in the event of disk failure(s).
In the case of disk failure, the volume set transitions from the migrating state to the migrating+degraded state. When the migration is completed, the volume set transitions to degraded mode. If a global hot spare is present, then it further transitions to the rebuilding state.
• Online Volume Expansion
Performing a volume expansion on the controller is the process of growing only the size of the last volume. A more flexible option is for the array to concatenate an additional drive into the RAID set and then expand the volumes on the fly. This happens transparently while the volumes are online, but, at the end of the process, the operating system will detect free space following the existing volume.
Windows, NetWare and other advanced operating systems support volume expansion, which enables you to incorporate the additional free space within the volume into the operating system partition. The operating system partition is extended to incorporate the free space so it can be used by the operating system without creating a new operating system partition.
You can use the Diskpart.exe command line utility, included with
Windows Server 2003 or the Windows 2000 Resource Kit, to extend an existing partition into free space in the dynamic disk.
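As a rough sketch of such a session (the volume number 2 below is purely an example; list the volumes first and pick the one that resides on the expanded array), the sequence of Diskpart commands would look something like:
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
DISKPART> exit
The extend command grows the selected partition into the contiguous free space that follows it.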
Third-party software vendors have created utilities that can be used to repartition disks without data loss. Most of these utilities work offline. Partition Magic is one such utility.
High availability
• Global Hot Spares
A global hot spare is an unused online available drive, which is ready to replace a failed disk. The global hot spare is one of the most important features that SATA RAID controllers provide to deliver a high degree of fault-tolerance. A global hot spare is a spare physical drive that has been marked as a global hot spare and therefore is not a member of any RAID set. If a disk drive used in a volume set fails, then the global hot spare will automatically take its place and the data previously located on the failed drive is reconstructed on the global hot spare.
For this feature to work properly, the global hot spare must have at least the same capacity as the drive it replaces. Global hot spares only work with RAID level 1, 10, 3, 5, or 6 volume set. You can configure up to three global hot spares with ARC-11xx/12xx.
The “Create Hot Spare” option gives you the ability to define a global hot spare disk drive. To effectively use the global hot spare feature, you must always maintain at least one drive that is marked as a global spare.
Important:
The hot spare must have at least the same capacity as the drive it replaces.
• Hot-Swap Disk Drive Support
The SATA controller chip includes a protection circuit that supports the replacement of SATA hard disk drives without having to shut down or reboot the system. A removable hard drive tray can deliver “hot swappable” fault-tolerant RAID solutions at prices much less than the cost of conventional SCSI hard disk RAID controllers. This feature provides advanced fault tolerant RAID protection and “online” drive replacement.
• Auto Declare Hot-Spare
If a disk drive is brought online into a system operating in degraded mode, the SATA RAID controllers will automatically declare the new disk as a spare and begin rebuilding the degraded volume. The auto declare hot-spare function requires that the new disk be at least as large as the smallest drive contained within the volume set in which the failure occurred.
In normal status, the newly installed drive will be reconfigured as an online free disk. However, the newly installed drive is automatically assigned as a hot spare if any hot spare disk was used for a rebuild and has not yet been replaced by a new drive. In this condition, the auto-declared hot-spare status will disappear if the RAID subsystem is powered off and on again.
The Hot-Swap function can be used to rebuild disk drives in arrays with data redundancy such as RAID level 1, 10, 3, 5, and 6.
• Auto Rebuilding
If a hot spare is available, the rebuild starts automatically when a drive fails. The SATA RAID controllers automatically and transparently rebuild failed drives in the background at user-definable rebuild rates.
If a hot spare is not available, the failed disk drive must be replaced with a new disk drive so that the data on the failed drive can be automatically rebuilt and so that fault tolerance can be maintained.
The SATA RAID controllers will automatically restart the system and the rebuilding process if the system is shut down or powered off abnormally during a reconstruction procedure condition.
When a disk is hot swapped, although the system is functionally operational, the system may no longer be fault tolerant. Fault tolerance will be lost until the removed drive is replaced and the rebuild operation is completed.
During the automatic rebuild process, system activity will continue as normal, however, the system performance and fault tolerance will be affected.
• Adjustable Rebuild Priority
Rebuilding a degraded volume incurs a load on the RAID subsystem. The SATA RAID controllers allow the user to select the rebuild priority to balance volume access and rebuild tasks appropriately. The “Background Task Priority” is a relative indication of how much time the controller devotes to a background operation, such as rebuilding or migrating.
The SATA RAID controller allows the user to choose the task priority (Ultra Low (5%), Low (20%), Medium (50%), High (80%)) to balance volume set access and background tasks appropriately. For high array performance, specify an "Ultra Low" value. Like volume initialization, after a volume rebuilds, it does not require a system reboot.
High Reliability
• Hard Drive Failure Prediction
In an effort to help users avoid data loss, disk manufacturers are now incorporating logic into their drives that acts as an "early warning system" for pending drive problems. This system is called SMART. The disk's integrated controller works with multiple sensors to monitor various aspects of the drive's performance, determines from this information whether the drive is behaving normally or not, and makes status information available to RAID controller firmware that probes the drive and examines it.
SMART can often predict a problem before a failure occurs. The controllers will recognize a SMART error code and notify the administrator of an impending hard drive failure.
• Auto Reassign Sector
Under normal operation, even initially defect-free drive media can develop defects. This is a common phenomenon. The bit density and rotational speed of disks increase every year, and so does the potential for problems. Usually a drive can internally remap bad sectors without external help, using cyclic redundancy check (CRC) checksums stored at the end of each sector.
SATA drives perform automatic defect re-assignment for both read and write errors. Writes are always completed - if a location to be written is found to be defective, the drive will automatically relocate that write command to a new location and map out the defective location. If there is a recoverable read error, the correct data will be transferred to the host and that location will be tested by the drive to be certain the location is not defective. If it is found to have a defect, data will be automatically relocated, and the defective location is mapped out to prevent future write attempts.
In the event of an unrecoverable read error, the error will be reported to the host and the location will be flagged as being potentially defective. A subsequent write to that location will initiate a sector test and relocation should that location prove to have a defect. Auto Reassign Sector does not affect disk subsystem performance because it runs as a background task. Auto Reassign Sector discontinues when the operating system makes a request.
• Consistency Check
A consistency check is a process that verifies the integrity of redundant data. To verify RAID 3, 5 or 6 redundancy, a consistency check reads all associated data blocks, computes parity, reads parity, and verifies that the computed parity matches the read parity.
Consistency checks are very important because they detect and correct parity errors or bad disk blocks in the drive. A consistency check forces every block on a volume to be read, and any bad blocks are marked; those blocks are not used again. This is critical and important because a bad disk block can prevent a disk rebuild from completing. We strongly recommend that you run consistency checks on a regular basis—at least once per week.
Note that consistency checks degrade performance, so you should run them when the system load can tolerate it.
Data Protection
• Battery Backup
The SATA RAID controllers are armed with a Battery Backup Module (BBM). While an Uninterruptible Power Supply (UPS) protects most servers from power fluctuations or failures, a BBM provides an additional level of protection. In the event of a power failure, a
BBM supplies power to retain data in the RAID controller’s cache, thereby permitting any potentially dirty data in the cache to be flushed out to secondary storage when power is restored.
The batteries in the BBM are recharged continuously through a trickle-charging process whenever the system power is on. The batteries protect data in a failed server for up to three or four days, depending on the size of the memory module. Under normal operating conditions, the batteries last for three years before replacement is necessary.
• Recovery ROM
The SATA RAID controller firmware is stored on the flash ROM and is executed by the I/O processor. The firmware can also be updated through the PCI-X/PCIe bus port or Ethernet port (if equipped) without the need to replace any hardware chips. During the controller firmware upgrade flash process, it is possible for a problem to occur resulting in corruption of the controller firmware. With our Redundant Flash Image feature, the controller will revert back to the last known version of firmware and continue operating. This reduces the risk of system failure due to firmware crash.
Appendix F
Understanding RAID
RAID is an acronym for Redundant Array of Independent Disks. It is an array of multiple independent hard disk drives that provides high performance and fault tolerance. The SATA RAID controller implements several levels of the Berkeley RAID technology. An appropriate RAID level is selected when the volume sets are defined or created. This decision should be based on the desired disk capacity, data availability (fault tolerance or redundancy), and disk performance. The following section discusses the RAID levels supported by the SATA RAID controller.
The SATA RAID controller makes the RAID implementation and the disks’ physical configuration transparent to the host operating system. This means that the host operating system drivers and software utilities are not affected, regardless of the RAID level selected. Correct installation of the disk array and the controller requires a proper understanding of RAID technology and the concepts.
• RAID 0
RAID 0, also referred to as striping, writes stripes of data across multiple disk drives instead of just one disk drive. RAID 0 does not provide any data redundancy, but does offer the best highspeed data throughput. RAID 0 breaks up data into smaller blocks and then writes a block to each drive in the array. Disk striping enhances performance because multiple drives are accessed simultaneously; the reliability of RAID level 0 is less because the entire array will fail if any one disk drive fails.
• RAID 1
RAID 1 is also known as “disk mirroring”; data written on one disk drive is simultaneously written to another disk drive. Read performance will be enhanced if the array controller can, in parallel, access both members of a mirrored pair. During writes, there will be a minor performance penalty when compared to writing to a single disk. If one drive fails, all data (and software applications) are preserved on the other drive. RAID 1 offers extremely high data reliability, but at the cost of doubling the required data storage capacity.
• RAID 10
RAID 10 is a combination of RAID 0 and RAID 1, combining striping with disk mirroring. RAID level 10 combines the fast performance of level 0 with the data redundancy of level 1. In this configuration, data is distributed across several disk drives, similar to level 0, and then duplicated to another set of drives for data protection. RAID 10 has traditionally been implemented using an even number of disks, but some hybrids can use an odd number of disks as well. The illustration is an example of a hybrid RAID 10 array comprised of five disks: A, B, C, D and E.
In this configuration, each strip is mirrored on an adjacent disk with wrap-around. Areca RAID 10 offers a little more flexibility in choosing the number of disks that can be used to constitute an array; the number can be even or odd.
• RAID 3
RAID 3 provides disk striping and complete data redundancy through a dedicated parity drive. RAID 3 breaks up data into smaller blocks, calculates parity by performing an exclusive-or on the blocks, and then writes the blocks to all but one drive in the array. The parity data created during the exclusive-or is then written to the last drive in the array. If a single drive fails, data is still available by computing the exclusive-or of the corresponding strips of the surviving member disks. RAID 3 is best for applications that require very fast data-transfer rates or long data blocks.
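As a small worked example (the block names are only illustrative), suppose three data drives hold blocks D1, D2 and D3 and the dedicated parity drive holds P = D1 XOR D2 XOR D3. If the drive holding D2 fails, its contents can be recomputed as D2 = D1 XOR D3 XOR P, because XOR-ing a value with itself cancels it out.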
• RAID 5
RAID 5 is sometimes called striping with distributed parity at the block level. In RAID 5, the parity information is written across all of the drives in the array rather than being concentrated on a dedicated parity disk. If one drive in the system fails, the parity information can be used to reconstruct the data from that drive. All drives in the array system can be used for seek operations at the same time, greatly increasing the performance of the RAID system. This relieves the write bottleneck that characterizes RAID 4, and is the primary reason that RAID 5 is more often implemented in RAID arrays.
• RAID 6
RAID 6 provides the highest reliability. It is similar to RAID 5, but it performs two different parity computations or the same computation on overlapping subsets of the data. RAID 6 can offer fault tolerance greater than RAID 1 or RAID 5 but only consumes the capacity of 2 disk drives for distributed parity data. RAID 6 is an extension of RAID 5 but uses a second, independent distributed parity scheme. Data is striped on a block level across a set of drives, and then a second set of parity is calculated and written across all of the drives.
Summary of RAID Levels
The SATA RAID controller supports RAID Level 0, 1, 10, 3, 5 and 6.
The table below provides a summary of RAID levels.
RAID Level: 0
Description: Also known as striping. Data is distributed across multiple drives in the array. There is no data protection.
Min. Drives: 1
Data Reliability: No data protection
Data Transfer Rate: Very high
I/O Request Rates: Very high for both reads and writes

RAID Level: 1
Description: Also known as mirroring. All data is replicated on N separate disks; N is almost always 2. This is a high availability solution, but due to the 100% duplication it is also a costly solution. Half of the drive capacity in the array is devoted to mirroring.
Min. Drives: 2
Data Reliability: Lower than RAID 6; higher than RAID 3, 5
Data Transfer Rate: Reads are faster than a single disk; writes are similar to a single disk.
I/O Request Rates: Reads are twice as fast as a single disk; writes are similar to a single disk.

RAID Level: 10
Description: A combination of RAID 0 and RAID 1; data is striped across the drives and each stripe is mirrored.
Min. Drives: 3
Data Reliability: Lower than RAID 6; higher than RAID 3, 5
Data Transfer Rate: Transfer rates are more similar to RAID 1 than to RAID 0.
I/O Request Rates: Reads are twice as fast as a single disk; writes are similar to a single disk.

RAID Level: 3
Description: Also known as Bit-Interleaved Parity. Data and parity information is subdivided and distributed across all disks. Parity data consumes the capacity of one disk drive and is normally stored on a dedicated parity disk.
Min. Drives: 3
Data Reliability: Lower than RAID 1, 10, 6; higher than a single drive
Data Transfer Rate: Reads are similar to RAID 0; writes are slower than a single disk.
I/O Request Rates: Reads are similar to RAID 0; writes are slower than a single disk.

RAID Level: 5
Description: Also known as Block-Interleaved Distributed Parity. Data and parity information is subdivided and distributed across all disks. Parity data consumes the capacity of one disk drive.
Min. Drives: 3
Data Reliability: Lower than RAID 1, 10, 6; higher than a single drive
Data Transfer Rate: Reads are close to twice as fast as a single disk; writes are slower than a single disk.
I/O Request Rates: Reads are similar to RAID 0; writes are slower than a single disk.

RAID Level: 6
Description: RAID 6 provides the highest reliability. Similar to RAID 5, but it performs two different parity computations. RAID 6 offers fault tolerance greater than RAID 1 or RAID 5. Parity data consumes the capacity of two disk drives.
Min. Drives: 4
Data Reliability: Highest reliability
Data Transfer Rate: Reads are similar to RAID 0; writes are slower than a single disk.
I/O Request Rates: Reads are similar to RAID 0; writes are slower than a single disk.
Appendix G
Technical Support
Areca Technical Support provides several options for Areca users to access information and updates. We encourage you to use one of our electronic services for the latest product information updates and efficient support service. If you have decided to contact us, please have the following information ready. Kindly provide us with the product model, serial number, BIOS and driver versions, and a detailed description of the problem at http://www.areca.com.tw/support/ask_a_question.htm. Our support team will be glad to answer all your technical enquiries.
Key features
- Support RAID levels 0, 1, 10, 3, 5, 6, JBOD
- Extreme-availability RAID 6 functionality
- Online capacity expansion & RAID level migration
- Global Online Spare, Automatic Drive Failure Detection
- Automatic Failed Drive Rebuilding, Disk Hot-Swap
- Hot key boot-up McBIOS RAID manager
- Browser-based management utility via ArcHttp
- Support for the controller's API library, allowing customers to write their own applications
- Support Command Line Interface (CLI)