VTrak M-Class Product Manual
Chapter 7: Technology Background
• Introduction to RAID (below)
• Choosing a RAID Level (page 241)
• Stripe Size (page 244)
• Sector Size (page 244)
• Cache Policy (page 245)
• Capacity Coercion (page 246)
• Initialization (page 247)
• Hot Spare Drive(s) (page 247)
• Partition and Format the Logical Drive (page 248)
• RAID Level Migration (page 248)
• Media Patrol (page 250)
• Predictive Data Migration (PDM) (page 251)
• Transition (page 252)
Introduction to RAID
RAID (Redundant Array of Independent Disks) allows multiple hard drives to be combined together in a disk array. Then all or a portion of the disk array is formed into a logical drive. The operating system sees the logical drive as a single storage device, and treats it as such. The RAID software and/or controller handles all of the individual drives on its own. The benefits of RAID can include:
• Higher data transfer rates for increased server performance
• Increased overall storage capacity for a single drive designation (such as C, D, E, etc.)
• Data redundancy/fault tolerance for ensuring continuous system operation in the event of a hard drive failure
Different types of disk arrays use different organizational models and have varying benefits. Also see “Choosing a RAID Level” on page 241. The following outline breaks down the properties of each type of RAID disk array:
RAID 0 – Stripe
When a disk array is striped, the read and write blocks of data are interleaved between the sectors of multiple drives. Performance is increased, since the workload is balanced between drives or “members” that form the disk array.
Identical disk drives are recommended for performance as well as data storage efficiency. The disk array’s data capacity is equal to the number of disk drive members multiplied by the smallest drive's capacity.
Figure 1. RAID 0 Striping interleaves data across multiple drives
For example, one 100GB and three 120GB drives will form a 400GB (4 x 100GB) disk array instead of 460 GB.
RAID 0 arrays require one or more physical drives.
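The capacity rule above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not part of the VTrak software):

```python
def raid0_capacity(drive_sizes_gb):
    """Usable RAID 0 capacity: every member contributes only as much
    space as the smallest member provides, so capacity is the number
    of drives multiplied by the smallest drive's size."""
    if not drive_sizes_gb:
        return 0
    return len(drive_sizes_gb) * min(drive_sizes_gb)

# The example from the text: one 100GB and three 120GB drives
print(raid0_capacity([100, 120, 120, 120]))  # 400, not 460
```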
Recommended applications: Image Editing, Pre-Press Applications, other applications requiring high bandwidth.
RAID 1 – Mirror
When a disk array is mirrored, identical data is written to a pair of drives, while reads are performed in parallel. The reads are performed using elevator seek and load balancing techniques where the workload is distributed in the most efficient manner. Whichever drive is not busy and is positioned closer to the data will be accessed first. With RAID 1, if one drive fails or has errors, the other mirrored drive continues to function. This is called Fault Tolerance. Moreover, if a spare drive is present, the spare drive will be used as the replacement drive and data will begin to be mirrored to it from the remaining good drive.
Figure 2. RAID 1 Mirrors identical data to two drives
Due to the data redundancy of mirroring, the drive capacity of the disk array is only the size of the smallest drive. For example, two 100GB drives, which have a combined capacity of 200GB, would have only 100GB of usable storage when set up as a mirrored disk array. Similar to RAID 0 striping, if drives of different capacities are used, there will also be unused capacity on the larger drive.
RAID 1 arrays use two physical drives. You can create multiple RAID 1 disk arrays on the same Promise product.
Recommended applications: Accounting, Payroll, Financial, other applications requiring very high availability.
RAID 1E – Enhanced Mirror
RAID 1E offers the security of mirrored data provided by RAID 1 plus the added capacity of more than two disk drives. It also offers increased overall read/write performance plus the flexibility of using an odd number of disk drives. With RAID 1E, each data stripe is mirrored onto two disk drives. If one drive fails or has errors, the other drives continue to function, providing fault tolerance.
Figure: Enhanced data mirrors distributed across the disk drives
The advantage of RAID 1E is the ability to use an odd number of disk drives, unlike RAID 1 and RAID 10. You can also create a RAID 1E Logical Drive with an even number of disk drives. However, if you have an even number of disks, you will obtain greater security with comparable performance using RAID 10.
RAID 1E arrays consist of three or more physical drives. You can create an array with just two physical drives and specify RAID 1E. But the resulting array will actually be a RAID 1.
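The mirroring pattern can be illustrated with a short sketch. The common RAID 1E layout places the second copy of each block on the adjacent drive; the manual does not specify the exact placement the VTrak firmware uses, so treat this as an assumption:

```python
def raid1e_layout(num_blocks, num_drives):
    """Map each data block to the pair of drives holding its two copies,
    using the common 'adjacent drive' RAID 1E layout (an assumption:
    the manual does not document the firmware's exact placement)."""
    placement = {}
    for block in range(num_blocks):
        primary = block % num_drives
        mirror = (block + 1) % num_drives  # copy goes to the next drive
        placement[block] = (primary, mirror)
    return placement

# Three drives, three blocks: each drive holds one primary and one mirror copy
print(raid1e_layout(3, 3))  # {0: (0, 1), 1: (1, 2), 2: (2, 0)}
```

This is why an odd number of drives works: copies wrap around the drive set instead of requiring fixed pairs.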
Recommended applications: Imaging Applications, Database Servers, General Fileservers.
RAID 5 – Block and Parity Stripe
RAID 5 organizes block data and parity data across the physical drives.
Generally, RAID 5 tends to exhibit lower random write performance due to the heavy workload of parity recalculation for each I/O. Even so, RAID 5 is generally considered to be the most versatile RAID level.
Figure 3. RAID 5 Stripes all drives with data and parity information
The capacity of a RAID 5 disk array is the smallest drive size multiplied by the number of drives less one. Hence, a RAID 5 disk array with (4) 100GB hard drives will have a capacity of 300GB. A disk array with (8) 120GB hard drives and (1) 100GB hard drive will have a capacity of 800GB.
RAID 5 arrays consist of three or more physical drives.
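The capacity rule and the reason a single drive failure is survivable can both be sketched briefly. The manual does not describe the parity math; XOR parity is the standard RAID 5 mechanism, and the function names here are ours:

```python
from functools import reduce

def raid5_capacity(drive_sizes_gb):
    """Smallest drive size times (number of drives - 1): one drive's
    worth of space is consumed by the distributed parity."""
    return (len(drive_sizes_gb) - 1) * min(drive_sizes_gb)

def parity(blocks):
    """XOR parity over the data blocks of one stripe."""
    return reduce(lambda a, b: a ^ b, blocks)

# The capacity examples from the text
print(raid5_capacity([100] * 4))          # 300
print(raid5_capacity([120] * 8 + [100]))  # 800

# Why one drive can fail: XOR of the survivors plus parity
# reconstructs the missing block
stripe = [0b1010, 0b0111, 0b1100]  # data blocks on three drives
p = parity(stripe)                 # parity block on the fourth drive
assert parity([stripe[0], stripe[2], p]) == stripe[1]
```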
Recommended applications: File and Application Servers; WWW, E-mail, News servers, Intranet Servers
RAID 10 – Mirror + Stripe
Mirroring/striping combines both of the previous RAID 1 and RAID 0 disk array types. RAID 10 is similar though not identical to RAID 0+1. RAID 10 can increase performance by reading and writing data in parallel while protecting data with duplication. At least four drives are needed for RAID 10 to be installed. With four disk drives, the drive pairs are striped together with one pair mirroring the first pair. The data capacity is similar to a RAID 1 disk array, with half of the total storage capacity used for redundancy. An added plus for using RAID 10 is that, in many situations, such a disk array offers double fault tolerance. Double fault tolerance may allow your logical drive to continue to operate depending on which two disk drives fail.
Figure 4. RAID 10 takes a data mirror on one drive pair and stripes it over two drive pairs
RAID 10 arrays require an even number of physical drives and a minimum of four.
For RAID 10 characteristics with an odd number of disk drives, use RAID 1E.
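The "depending on which two disk drives fail" point can be made concrete by enumerating the two-drive failures of a four-drive array. The pairing below is an illustrative assumption (drives 0/1 and 2/3 mirrored), not the VTrak firmware's actual drive assignment:

```python
from itertools import combinations

def survives(failed_drives, mirror_pairs):
    """A RAID 10 array stays online as long as no mirror pair
    loses both of its members."""
    return all(not set(pair) <= set(failed_drives)
               for pair in mirror_pairs)

pairs = [(0, 1), (2, 3)]  # four drives, two mirrored pairs (assumed layout)
failures = list(combinations(range(4), 2))
ok = [f for f in failures if survives(f, pairs)]
print(len(ok), "of", len(failures))  # 4 of 6 two-drive failures survive
```

Only the two failures that take out both members of the same mirror pair bring the logical drive down.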
Recommended applications: Imaging Applications, Database Servers, General Fileservers.
RAID 50 – Striping of Distributed Parity
RAID 50 combines both RAID 5 and RAID 0 features. Data is striped across disks as in RAID 0, and it uses distributed parity as in RAID 5. RAID 50 provides data reliability, good overall performance and supports larger volume sizes.
Figure 5. RAID 50 Striping of Distributed Parity disk arrays
RAID 50 also provides high reliability because data is still available even if multiple disk drives fail (one in each axle). The greater the number of axles, the greater the number of disk drives that can fail without the RAID 50 array going offline.
RAID 50 arrays consist of six or more physical drives.
Recommended applications: File and Application Servers, Transaction Processing, Office applications with many users accessing small files.
RAID 50 Axles
When you create a RAID 50, you must specify the number of axles. An axle refers to a single RAID 5 array that is striped with other RAID 5 arrays to make the RAID 50. An axle can have from three to eight physical drives, depending on the number of physical drives in the array.
The chart below shows RAID 50 arrays with 6 to 15 disk drives, the available number of axles, and the resulting distribution of disk drives on each axle. VTrak attempts to distribute the number of disk drives equally among the axles, but in some cases one axle will have more disk drives than another.
No. of Drives in    No. of Axles in    No. of Drives
RAID 50 Array       RAID 50 Array      per Axle
 6                  2                  3,3
 7                  2                  3,4
 8                  2                  4,4
 9                  2                  4,5
 9                  3                  3,3,3
10                  2                  5,5
10                  3                  3,3,4
11                  2                  5,6
11                  3                  3,4,4
12                  2                  6,6
12                  3                  4,4,4
12                  4                  3,3,3,3
13                  2                  6,7
13                  3                  4,4,5
13                  4                  3,3,3,4
14                  2                  7,7
14                  3                  4,5,5
14                  4                  3,3,4,4
15                  2                  7,8
15                  3                  5,5,5
15                  4                  3,4,4,4
15                  5                  3,3,3,3,3
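The even-split rule behind the chart can be expressed compactly. Which particular axles receive the extra drive is an assumption here (the chart only gives the counts, not the ordering):

```python
def axle_distribution(drives, axles):
    """Split `drives` disk drives across `axles` RAID 5 axles as evenly
    as possible; when the split is uneven, the remainder of the division
    determines how many axles get one extra drive."""
    base, extra = divmod(drives, axles)
    return [base] * (axles - extra) + [base + 1] * extra

print(axle_distribution(15, 4))  # [3, 4, 4, 4]
print(axle_distribution(13, 3))  # [4, 4, 5]
```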