
Support for Cisco UCS C220/C240 M3 Rack Servers:
Create and Modify RAID Volumes

Configuration Guide

March 12, 2014

© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 33

Contents

Purpose of This Document ..................................................................................................................................... 3

RAID Overview ......................................................................................................................................................... 3

RAID Levels Supported by Cisco UCS C220/C240 M3 Rack Servers .................................................................. 3

Verifying the Supported RAID Levels in Cisco IMC .............................................................................................. 3

RAID 0 .................................................................................................................................................................. 4

RAID 1 .................................................................................................................................................................. 4

RAID 1E ................................................................................................................................................................ 5

RAID 5 .................................................................................................................................................................. 6

RAID 6 .................................................................................................................................................................. 6

RAID 10 ................................................................................................................................................................ 7

RAID 50 ................................................................................................................................................................ 8

RAID 60 ................................................................................................................................................................ 9

Cisco Integrated Management Controller Overview ........................................................................................... 10

Management Interfaces ...................................................................................................................................... 11

Tasks You Can Perform in Cisco IMC ................................................................................................................ 11

Creating Virtual Drives from Unused Physical Disks ......................................................................................... 11

Creating a RAID Volume Based on a Single Drive Group: RAID 0, 1, 5, and 6 .................................................. 11

Creating a Hot Spare ............................................................................................................................................. 16

Creating a Global Hot Spare ............................................................................................................................... 16

Creating a Dedicated Hot Spare ......................................................................................................................... 17

Creating a RAID Volume Based on Multiple Drive Groups: RAID 00, 10, 50, and 60 ....................................... 19

Replacing a Drive and Rebuilding the Data ......................................................................................................... 23

Changing the RAID Level with the WebBIOS Utility ........................................................................................... 26

Using the LSI MegaCLI Utility ............................................................................................................................... 30

General Parameters ............................................................................................................................................ 31

Command Syntax ............................................................................................................................................... 31

Conclusion ............................................................................................................................................................. 32

For More Information ............................................................................................................................................. 32


Purpose of This Document

This document gives an overview of RAID technology and describes the various RAID levels, as well as the RAID levels supported on Cisco UCS® C220 and C240 M3 Rack Servers. It also describes how to work with the Cisco® Integrated Management Controller (Cisco IMC), including creating RAID volumes and hot spares and replacing a drive. In addition, it discusses how to migrate an existing RAID level using the WebBIOS utility and how to use the LSI MegaCLI utility.

RAID Overview

A redundant array of independent disks (RAID) is a group, or array, of independent physical drives that provides high performance and fault tolerance. The RAID drive group appears to the host computer as either a single unit or multiple virtual units. I/O is expedited because several drives can be accessed simultaneously.

RAID drive groups provide greater data storage reliability and fault tolerance than single-drive storage systems.

Data loss resulting from a drive failure can be prevented by using the remaining drives to reconstruct missing data.

RAID has become popular because it improves I/O performance and increases storage subsystem reliability.

RAID Levels Supported by Cisco UCS C220/C240 M3 Rack Servers

The RAID levels supported on any Cisco UCS C-Series server vary by RAID controller and can be verified in Cisco IMC.

Example: RAID levels supported on two different controllers.

Server: Cisco UCS C220/C240 M3

Controller: Cisco UCS RAID SAS 2008M-8i Mezzanine Card
Supported RAID levels: 0, 1, 1E, 5, 10, 50

Controller: LSI MegaRAID SAS 9271CV-8i
Supported RAID levels: 0, 1, 5, 6, 10, 50, 60

Verifying the Supported RAID Levels in Cisco IMC

Log in to Cisco IMC, navigate to the Storage tab, and click Create Virtual Drive from Unused Physical Drives. The supported RAID levels can be seen in the drop-down menu next to “RAID Level” (Figure 1).

Figure 1. Verifying the Supported RAID Levels


RAID 0

RAID 0 provides disk striping across all the drives in the RAID drive group. RAID 0 does not provide any data redundancy, but it offers the best performance of any RAID level. RAID 0 divides data into smaller segments and then stripes the data segments across each drive in the drive group. The size of each data segment is determined by the stripe size. RAID 0 offers high bandwidth.

RAID 0 is excellent for applications that require high bandwidth but do not require fault tolerance. Table 1 provides an overview of RAID 0. Figure 2 shows an example of RAID 0 use.

Table 1. RAID 0

Uses
● Provides high data throughput, especially for large files
● Good for any environment that does not require fault tolerance

Advantages
● Provides increased data throughput for large files
● No capacity loss penalty for parity

Limitations
● Does not provide fault tolerance or high bandwidth
● All data is lost if any drive fails

Drives
1 through 32

Figure 2. RAID 0 Drive Group with Two Drives
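The striping behavior described above can be sketched in a few lines. This is an illustration only, not Cisco or LSI code: data is divided into stripe-size segments, and segments are distributed round-robin across the drives in the group.

```python
# Illustrative sketch only: RAID 0 divides data into stripe-size segments and
# distributes the segments round-robin across the drives in the drive group.

def stripe(data: bytes, num_drives: int, stripe_size: int) -> list[list[bytes]]:
    """Split data into stripe_size segments and spread them across drives."""
    drives = [[] for _ in range(num_drives)]
    segments = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    for idx, seg in enumerate(segments):
        drives[idx % num_drives].append(seg)  # segment idx goes to drive idx mod N
    return drives

drives = stripe(b"ABCDEFGHIJKL", num_drives=2, stripe_size=4)
# Drive 0 holds segments 0 and 2; drive 1 holds segment 1.
```

Because each drive holds only a fraction of the data, several drives can service one large read or write in parallel, which is the source of RAID 0's high throughput.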

RAID 1

In RAID 1, the RAID controller duplicates all the data from one drive to a second drive in the drive group. RAID 1 supports an even number of drives from 2 through 32 in a single span. It provides complete data redundancy, but at the cost of doubling the required data storage capacity.

RAID 1 is excellent for environments that require fault tolerance and have small capacity. Table 2 provides an overview of RAID 1. Figure 3 shows an example of RAID 1 use.

Table 2. RAID 1

Uses
Good for small databases or any other environment that requires fault tolerance but has low capacity

Advantages
Provides complete data redundancy

Limitations
Requires twice as many drives

Drives
2 through 32 (must be an even number of drives)


Figure 3. RAID 1 Drive Group

RAID 1E

RAID 1E, Integrated Mirroring Enhanced, combines mirroring and data striping. Each mirrored stripe is written to a disk and mirrored to an adjacent disk.

RAID 1E has a profile similar to that of RAID 1. Any RAID 1 drive group with more than two drives is a RAID 1E drive group. Table 3 provides an overview of RAID 1E. Figure 4 shows an example of RAID 1E use.

RAID 1E provides data redundancy and high levels of performance and allows a larger number of physical drives to be used. RAID 1E requires a minimum of three drives and supports a maximum of 32 drives.

Table 3. RAID 1E

Uses
Good for small databases or any other environment that requires fault tolerance

Advantages
Provides complete data redundancy and high performance

Limitations
Requires twice as many drives; allows only 50 percent of the physical drive storage capacity to be used

Drives
4 through 32 (must be an even number of drives)

Figure 4. RAID 1E Drive Group 1


RAID 5

RAID 5 includes parity and disk striping at the block level. Parity is the data’s property of being odd or even, and parity checking is used to detect errors in the data. In RAID 5, the parity information is written to all drives. RAID 5 is best suited for networks that perform a lot of small I/O transactions simultaneously.

RAID 5 is excellent for environments with applications that have high read request rates but low write request rates. Table 4 provides an overview of RAID 5. Figure 5 shows an example of RAID 5 use.

Table 4. RAID 5

Uses
● Provides high data throughput, especially for large files
● Good for transaction processing applications because each drive can read and write independently
● If a drive fails, the RAID controller uses the parity information to re-create all missing data

Advantages
Provides data redundancy, high read rates, and good performance in most environments; provides redundancy with the lowest loss of capacity

Limitations
● Not well suited to tasks requiring numerous write operations
● Suffers a greater impact if no cache is used (clustering)
● Drive performance is reduced while a drive is being rebuilt
● Not well suited for environments with few processes; such environments do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes

Drives
3 through 32

Figure 5. RAID 5 Drive Group with Six Drives
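The parity mechanism behind RAID 5 can be shown with a short sketch. This is an illustration only, not controller firmware: the parity block of a stripe is the bytewise XOR of its data blocks, so any single missing block can be rebuilt by XOR-ing the surviving blocks with the parity.

```python
# Illustrative sketch only: RAID 5 parity is the bytewise XOR of the data
# blocks in a stripe; a single lost block is rebuilt from the survivors.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Return the bytewise XOR of equal-length blocks."""
    result = bytes(len(blocks[0]))
    for b in blocks:
        result = bytes(x ^ y for x, y in zip(result, b))
    return result

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks of one stripe
parity = xor_blocks(data)            # parity block written to the stripe

# Simulate losing the drive holding the second block, then rebuild it:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]            # the missing block is fully recovered
```

This also shows why writes are costly on parity RAID: every write must update the parity block as well as the data block.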

RAID 6

RAID 6 is similar to RAID 5 (it provides disk striping and parity), except that instead of one parity block per stripe, there are two. With two independent parity blocks, RAID 6 can survive the loss of any two drives in a virtual drive without losing data. It is well suited for data that requires a very high level of protection from loss.

RAID 6 is excellent for environments with applications that have high read request rates but low write request rates. Table 5 provides an overview of RAID 6. Figure 6 shows an example of RAID 6 use.


Table 5. RAID 6

Uses
● Office automation and online customer service applications that require fault tolerance
● Any application that has a high read request rate

Advantages
● Provides data redundancy, high read rates, and good performance in most environments
● Can survive the loss of two drives, or the loss of a drive while another drive is being rebuilt
● Provides the highest level of protection against drive failures of all the RAID levels
● Provides read performance similar to that of RAID 5

Limitations
● Not well suited to tasks requiring numerous write operations; a RAID 6 virtual drive has to generate two sets of parity data for each write operation, which results in a significant decrease in performance during write operations
● Drive performance is reduced during a drive rebuild
● Not well suited for environments with few processes; such environments do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes
● Costs more because of the additional capacity required: two parity blocks per stripe

Drives
3 through 32

Figure 6. RAID 6 with Distributed Parity Across Two Blocks in a Stripe

RAID 10

Virtual drives defined across multiple RAID 1 drive groups are referred to as RAID 10 (1 + 0). RAID 10 creates striped data across mirrored spans. The RAID 10 drive group is a spanned drive group that creates a striped set from a series of mirrored drives. The RAID 1 virtual drives must have the same stripe size. Spanning is used because one virtual drive is defined across more than one drive group. Data is striped across drive groups to increase performance by enabling access to multiple drive groups simultaneously.

Configure RAID 10 by spanning two contiguous RAID 1 virtual drives, up to the maximum number of supported devices for the controller. RAID 10 supports a maximum of eight spans, with a maximum of 32 drives per span.

You must use an even number of drives in each RAID 10 virtual drive in the span.

RAID 10 is excellent for environments that require a higher degree of fault tolerance and medium-sized capacity.

Table 6 provides an overview of RAID 10. Figure 7 shows an example of RAID 10 use.


Table 6. RAID 10

Uses
● Appropriate when used with data storage that needs 100 percent redundancy of mirrored drive groups and that also needs the enhanced I/O performance of RAID 0 (striped drive groups)
● Works well for medium-sized databases

Advantages
Provides both high data transfer rates and complete data redundancy

Limitations
Requires twice as many drives as all other RAID levels

Drives
4 through 32 in multiples of 4 (limited by the maximum number of drives supported by the controller, using an even number of drives in each RAID 10 virtual drive in the span)

Figure 7. RAID 10 Drive Group
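The RAID 10 layout described above can be sketched briefly. This is an illustration only, not controller logic: drives are grouped into RAID 1 mirrored pairs (spans), each segment is written identically to both drives of one pair, and successive segments are striped across the pairs.

```python
# Illustrative sketch only: RAID 10 stripes data segments across RAID 1
# mirrored pairs; each pair stores its segments identically on both drives.

def raid10_layout(segments: list[bytes], num_pairs: int) -> list[tuple[bytes, ...]]:
    """Return, per mirrored pair, the segments that pair stores (each
    segment is written to both drives in the pair)."""
    pairs = [[] for _ in range(num_pairs)]
    for idx, seg in enumerate(segments):
        pairs[idx % num_pairs].append(seg)  # stripe across the spans
    return [tuple(p) for p in pairs]

layout = raid10_layout([b"S0", b"S1", b"S2", b"S3"], num_pairs=2)
# Pair 0 mirrors S0 and S2 on both of its drives; pair 1 mirrors S1 and S3.
```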

RAID 50

RAID 50 is a combination of RAID 5 and RAID 0 (5 + 0). RAID 50 includes both parity and disk striping across multiple drive groups. It is best implemented on two RAID 5 drive groups with data striped across both drive groups.

RAID 50 divides data into smaller blocks and then stripes the blocks of data to each RAID 5 disk set. RAID 5 divides data into smaller blocks, calculates parity by performing an exclusive-or operation on the blocks, and then writes the blocks of data and parity to each drive in the drive group. The size of each block is determined by the stripe size parameter, which is set during the creation of the RAID set.

RAID 50 can support up to eight spans and tolerate up to eight drive failures, though less than the total drive capacity is available. Although multiple drive failures can be tolerated, only one drive failure is tolerated in each RAID 5 drive group.

RAID 50 is excellent for environments that require high reliability, high request rates, high data transfer, and medium-sized to large capacity. Table 7 provides an overview of RAID 50. Figure 8 shows an example of RAID 50 use.


Table 7. RAID 50

Uses
Appropriate when used with data that requires high reliability, high request rates, high data transfer, and medium-sized to large capacity

Advantages
Provides high data throughput, data redundancy, and very good performance

Limitations
Requires 2 to 8 times as many parity drives as RAID 5

Drives
Eight spans of RAID 5 drive groups containing 3 through 32 drives each (limited by the maximum number of devices supported by the controller)

Figure 8. RAID 50 Drive Group

RAID 60

RAID 60 provides the features of both RAID 6 and RAID 0 (6 + 0) and includes both parity and disk striping across multiple drive groups. RAID 6 supports two independent parity blocks per stripe. A RAID 60 virtual drive can survive the loss of two drives in each of the RAID 6 sets without losing data. RAID 60 is best implemented on two RAID 6 drive groups, with data striped across both drive groups.

RAID 60 divides data into smaller blocks, calculates parity by performing an exclusive-or operation on the blocks, and then writes the blocks of data and parity to each drive in the drive group. The size of each block is determined by the stripe size parameter, which is set during the creation of the RAID set.

RAID 60 can support up to eight spans and tolerates up to 16 drive failures, though less than the total drive capacity is available. Two drive failures can be tolerated in each RAID 6 drive group.

RAID 60 is excellent for environments that require a very high level of protection from loss and that have high read request rates but low write request rates. Table 8 provides an overview of RAID 60. Figure 9 shows an example of RAID 60 use.


Table 8. RAID 60

Uses
● Provides a high level of data protection through the use of a second parity block in each stripe
● Good for data that requires a very high level of protection from loss
● In the event of a failure of one drive or two drives in a RAID set in a virtual drive, the RAID controller uses the parity blocks to re-create all the missing information
● If two drives in a RAID 6 set in a RAID 60 virtual drive fail, two drive rebuild operations are required, one for each drive; these rebuild operations can occur at the same time
● Use for office automation and online customer service applications that require fault tolerance
● Use for any application that has high read request rates but low write request rates

Advantages
● Provides data redundancy, high read rates, and good performance in most environments
● Each RAID 6 set can survive the loss of two drives, or the loss of a drive while another drive is being rebuilt
● Provides the highest level of protection against drive failures of all of the RAID levels
● Read performance is similar to that of RAID 50, though random read operations in RAID 60 may be slightly faster because data is spread across at least one more disk in each RAID 6 set

Limitations
● Not well suited to tasks requiring numerous write operations; a RAID 60 virtual drive has to generate two sets of parity data for each write operation, which results in a significant decrease in performance during write operations
● Drive performance is reduced during a drive rebuild
● Not well suited for environments with few processes; such environments do not perform as well because the RAID overhead is not offset by the performance gains in handling simultaneous processes
● Costs more because of the additional capacity required to use two parity blocks per stripe

Drives
A minimum of 8

Figure 9. RAID 60 Drive Group
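The capacity trade-offs across the levels above can be summarized with a small helper. This is a hypothetical sketch, not a Cisco tool, and it assumes identical drives: the mirrored levels keep 50 percent of raw capacity, RAID 5 groups give up one drive of capacity each, and RAID 6 groups give up two.

```python
# Illustrative sketch only (assumes identical drives): usable capacity, in
# drives' worth of space, for the RAID levels discussed in this guide.
# Mirrored levels keep half; RAID 5/50 lose one drive per group; RAID 6/60
# lose two drives per group.

def usable_drives(level: str, drives_per_group: int, groups: int = 1) -> int:
    """Usable capacity in units of one drive's capacity."""
    per_group = {
        "0": drives_per_group,
        "1": drives_per_group // 2,
        "5": drives_per_group - 1,
        "6": drives_per_group - 2,
        "10": drives_per_group // 2,
        "50": drives_per_group - 1,
        "60": drives_per_group - 2,
    }[level]
    return per_group * groups

# RAID 50 across two 4-drive RAID 5 groups: 8 drives total, 6 usable.
assert usable_drives("50", 4, groups=2) == 6
# RAID 60 across two 4-drive RAID 6 groups: 8 drives total, 4 usable.
assert usable_drives("60", 4, groups=2) == 4
```

This makes the cost comparisons in Tables 1 through 8 concrete: for the same eight drives, RAID 10 yields four drives of capacity, RAID 50 six, and RAID 60 four, with correspondingly different fault tolerance.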

Cisco Integrated Management Controller Overview

The Cisco IMC is the management service for the Cisco UCS C-Series Rack Servers. Cisco IMC runs within the server. The Cisco IMC GUI is a web-based management interface. You can launch the GUI and manage the server from any remote host that meets the following minimum requirements:

● Java 1.6 or later
● HTTP and HTTPS enabled
● Adobe Flash Player 10 or later


Management Interfaces

You can use a web-based GUI or a Secure Shell (SSH)-based command-line interface (CLI) to access, configure, administer, and monitor the server. Almost all tasks can be performed in either interface, and the results of tasks performed in one interface are displayed in the other. However, you cannot do the following:

● Use the Cisco IMC GUI to invoke the Cisco IMC CLI
● View a command that has been invoked through the Cisco IMC CLI in the Cisco IMC GUI
● Generate Cisco IMC CLI output from the Cisco IMC GUI

Tasks You Can Perform in Cisco IMC

You can use the Cisco IMC to perform the following server management tasks:

● Power on, power off, power cycle, reset, and shut down the server
● Toggle the locator LED
● Configure the server boot order
● View server properties and sensors
● Manage remote presence
● Create and manage local user accounts, and enable remote-user authentication through Microsoft Active Directory
● Configure network-related settings, including network interface card (NIC) properties, IPv4, VLANs, and network security
● Configure communication services, including HTTP, SSH, and Intelligent Platform Management Interface (IPMI) over LAN
● Manage certificates
● Configure platform event filters
● Update Cisco IMC firmware
● Monitor faults, alarms, and server status

Creating Virtual Drives from Unused Physical Disks

A drive group is a group of physical drives. These drives are managed in partitions known as virtual drives.

A virtual drive is a partition in a drive group that is composed of contiguous data segments on the drives. A virtual drive can consist of an entire drive group, multiple drive groups, an entire drive group plus parts of other drive groups, a part of a drive group, parts of more than one drive group, or a combination of any two of these options.

Creating a RAID Volume Based on a Single Drive Group: RAID 0, 1, 5, and 6

Follow the steps presented here to create RAID volumes (virtual drives) from Cisco IMC.


Step 1: Log in to Cisco IMC.

Step 2: Open the Storage tab in the left panel and select the controller to which SAS cables are connected.

LSI MegaRAID SAS 9271-8i Controller


LSI RAID SAS 2008M-8i Controller

Step 3: Open the Controller Info tab and select the Create Virtual Drive from Unused Physical Drives option; then proceed with drive creation.


Step 4: To create a RAID volume based on a single drive group (RAID 0, 1, 5, or 6), select the required physical drives and create a drive group with those physical drives. Here are some examples:

● RAID 0 requires at least one physical drive. If the minimum drive requirement is not met, the Create Virtual Drive button will not be enabled.
● RAID 1 requires a minimum of two physical drives in a drive group. If this minimum requirement is not met, the Create Virtual Drive button will not be enabled.

When only one physical drive is selected, the Create Virtual Drive button is dimmed.


When at least two physical drives are selected, the Create Virtual Drive button is active.

● RAID 5 and 6 require a minimum of three physical drives to create a RAID volume. Select the drives and click the >> button to create the drive group. Then click the Create Virtual Drive button at the bottom to create a virtual drive.


Step 5: Navigate to the Virtual Drive Info tab to see the virtual drives created. In this example, three virtual drives have been created (RAID 0, 1, and 5).

Creating a Hot Spare

A hot spare is an additional, unused drive that is part of the disk subsystem. It is usually in standby mode, ready for service if a drive fails. If a drive used in a RAID virtual drive fails, a hot spare automatically takes its place, and the data on the failed drive is rebuilt on the hot spare. Hot spares can be used for RAID 1, 5, 6, 10, 50, and 60.

Two types of hot spares are used:

● Global hot spares
● Dedicated hot spares

Creating a Global Hot Spare

A global hot spare drive can be used to replace any failed drive in a redundant drive group, as long as its capacity is equal to or larger than the capacity of the failed drive. A global hot spare defined on any channel should be available to replace a failed drive on both channels.


Step 1: To create a global hot spare, log in to Cisco IMC and navigate to the Storage tab and then the Physical Drive Info tab.

Step 2: Select the unconfigured drive; the options for creating a global or dedicated hot spare are displayed in the Actions area.

Creating a Dedicated Hot Spare

A dedicated hot spare can be used to replace a failed drive only in a chosen drive group. One or more drives can be designated as members of a spare drive pool. The most suitable drive in the pool is chosen for failover. A dedicated hot spare is used before a drive in the global hot spare pool.
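The selection order described above can be modeled in a few lines. This is a hypothetical sketch of the documented behavior, not Cisco IMC code: a dedicated hot spare assigned to the failed drive's group is preferred, and a global hot spare is used only as a fallback, provided its capacity is at least that of the failed drive.

```python
# Illustrative sketch only (not Cisco IMC logic): pick a replacement drive
# for a failed drive. Dedicated spares for the failed drive's group are
# tried first; global spares are the fallback. A spare qualifies only if
# its capacity is equal to or larger than the failed drive's capacity.

def pick_spare(failed_capacity_gb, failed_group, dedicated, global_pool):
    """dedicated: {group_name: [capacities]}; global_pool: [capacities]."""
    for cap in dedicated.get(failed_group, []):
        if cap >= failed_capacity_gb:
            return ("dedicated", cap)
    for cap in global_pool:
        if cap >= failed_capacity_gb:
            return ("global", cap)
    return None  # no suitable spare; the drive group stays degraded

# A dedicated spare wins even when a global spare also qualifies:
assert pick_spare(300, "DG0", {"DG0": [300]}, [600]) == ("dedicated", 300)
```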


Step 1: To create a dedicated hot spare, log in to Cisco IMC and select the Storage tab and then the Physical Drive Info tab.

Step 2: Select the unconfigured drive, and in the Actions area select Make Dedicated Hot Spare.

Step 3: After you select the Make Dedicated Hot Spare option, a window opens with options for selecting the virtual drive to which you want to add this dedicated hot spare. Select the virtual drive.


Step 4: Click the Make Dedicated Hot Spare button. After you click the button, the drive status changes to Dedicated Hot Spare.

Creating a RAID Volume Based on Multiple Drive Groups: RAID 00, 10, 50, and 60

Disk spanning allows multiple drive groups to function like one big drive. Spanning overcomes a lack of disk space and simplifies storage management by combining existing resources or adding relatively inexpensive resources.

Spanning alone does not provide reliability or performance enhancements. Spanned virtual drives must have the same stripe size and must be contiguous. With spanning, a RAID 1 drive group is turned into a RAID 10 drive group (Figure 10).

Figure 10. RAID 10

Spanning two contiguous RAID 0 virtual drives does not produce a new RAID level or add fault tolerance. It does increase the capacity of the virtual drive and improves performance by doubling the number of physical disks.

Table 9 describes how to configure RAID 00, 10, 50, and 60. The virtual drives must have the same stripe size, and the maximum number of spans is eight. The full drive capacity is used when you span virtual drives.


Table 9. RAID 00, 10, 50, and 60 Configuration

RAID 00
Configure RAID 00 by spanning two contiguous RAID 0 virtual drives, up to the maximum number of supported devices for the controller.

RAID 10
Configure RAID 10 by spanning two contiguous RAID 1 virtual drives, up to the maximum number of supported devices for the controller. RAID 10 supports a maximum of eight spans. You must use an even number of drives in each RAID 1 virtual drive in the span. The RAID 1 virtual drives must have the same stripe size.

RAID 50
Configure RAID 50 by spanning two contiguous RAID 5 virtual drives. The RAID 5 virtual drives must have the same stripe size.

RAID 60
Configure RAID 60 by spanning two contiguous RAID 6 virtual drives.
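The spanning rules above can be expressed as a small validation sketch. This is a hypothetical illustration, not Cisco IMC logic; it encodes the per-group minimums used in the drive-creation steps in this guide (at least one drive per RAID 0 group, two per RAID 1 group, three per RAID 5 or RAID 6 group, and two to eight spans).

```python
# Illustrative sketch only (not Cisco IMC logic): check whether a set of
# drive groups meets the minimums for a spanned RAID level from Table 9.

MIN_PER_GROUP = {"00": 1, "10": 2, "50": 3, "60": 3}

def can_create_spanned(level: str, group_sizes: list[int]) -> bool:
    """A spanned volume needs 2 to 8 groups, each meeting the minimum."""
    if not 2 <= len(group_sizes) <= 8:
        return False
    if level == "10" and any(n % 2 for n in group_sizes):
        return False  # each RAID 1 span needs an even number of drives
    return all(n >= MIN_PER_GROUP[level] for n in group_sizes)

assert can_create_spanned("10", [2, 2])      # minimum RAID 10 configuration
assert not can_create_spanned("10", [2])     # one drive group is not enough
```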

To create a RAID volume based on multiple drive groups (RAID 00, 10, 50, or 60), follow the steps presented here.

Step 1: Log in to Cisco IMC.

Step 2: Open the Storage tab and click the Controller Info tab. Select the Create Virtual Drive from Unused Physical Drives option; then proceed with virtual drive creation.


Step 3: To create a RAID volume based on multiple drive groups (disk spanning) (RAID 00, 10, 50, or 60), select the required physical drives and create the drive groups. Then click the Create Virtual Drive button.

Here are some examples:

● RAID 10 requires a minimum of two physical drives per drive group and a minimum of two drive groups. If any of the minimum requirements is not met, a RAID 10 volume cannot be created. If you create only one drive group, you will not be allowed to create a RAID 10 volume. When the minimum requirements are met (two drive groups with two physical drives each), you can create a RAID 10 volume.


After you have met at least the minimum requirements, click the Create Virtual Drive button to create a RAID 10 volume. Navigate to the Virtual Drive Info tab to see the RAID 10 volume that has been created.

● RAID 50 and 60 require a minimum of two drive groups with a minimum of three physical drives in each drive group. In the example here, RAID 60 is created with the minimum number of physical drives and drive groups. Click the Create Virtual Drive button and navigate to the Virtual Drive Info tab to see the RAID 60 volume that has been created.


Replacing a Drive and Rebuilding the Data

Drives are installed in front-panel drive bays that provide hot-pluggable access. Drives can also be removed or swapped without shutting down the server (Figure 11).

Figure 11. Eight Hot-Swappable 2.5-Inch Drives

You can automatically replace a failed drive with a hot spare (either dedicated or global). If any drive in a RAID virtual drive fails, a hot spare automatically takes its place, and the data on the failed drive is rebuilt on the hot spare. Hot spares can be used for RAID 1, 5, 6, 10, 50, and 60.

Faulty or failed drives in a RAID virtual drive are hot swappable, meaning that no reboot is required to swap a failed drive in a virtual drive. Hot swapping is supported on RAID 1, 5, 6, 10, 50, and 60.

Note: RAID 0 does not support hot swapping.

You can track the rebuild process from the Cisco IMC.

Step 1: Log in to Cisco IMC and navigate to the Storage tab. In the example shown here, the RAID 1 virtual drive is created by using two physical drives (drives 5 and 6).


Step 2: Drive 6 is removed and the drive health status shows a moderate fault, but the OS installed on this virtual drive is still running fine. View the Storage > Virtual Drive Info tab. Then view the Storage > Physical Drive Info tab.


Step 3: When the new drive is installed, the drive rebuilding begins, and the data on the failed drive is rebuilt on the hot spare. View the Storage > Virtual Drive Info tab. Then view the Storage > Physical Drive Info tab.

Step 4: View the actions being performed on the Storage > Storage Log tab.

Step 5: The rebuild time depends on the drive capacity. After the rebuild is complete, the physical drive status changes from Rebuild to Online, and the virtual drive health status is listed as Good. View the Storage > Physical Drive Info tab. Then view the Storage > Virtual Drive Info tab.
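If the MegaCLI utility (covered later in this guide) is installed in the OS, rebuild progress can also be checked from the command line with the -PDRbld option. This sketch only assembles the command string; the enclosure ID (252) and slot (6) are placeholders for the rebuilding drive's actual location, which MegaCli -PDList -aALL reports.

```shell
# Placeholder location of the rebuilding drive; check with: MegaCli -PDList -aALL
ENC=252
SLOT=6

# Show a snapshot of rebuild progress for that drive on adapter 0.
CMD="MegaCli -PDRbld -ShowProg -PhysDrv[${ENC}:${SLOT}] -a0"
echo "$CMD"
```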

Changing the RAID Level with the WebBIOS Utility

As the amount of data and the number of drives in your system increase, you can use RAID-level migration to change a virtual drive from one RAID level to another. You do not have to power down or restart the system. When you migrate a virtual drive, you can keep the same number of drives, or you can add drives. You can use the WebBIOS configuration utility to migrate the RAID level of an existing virtual drive.

Note: Although you can apply RAID-level migration at any time, you should do so only when no reboot operations are occurring. Many operating systems run I/O operations serially (one at a time) during bootup. With RAID-level migration running, a boot operation may take more than 15 minutes.

Follow the steps presented here to change the RAID level using the WebBIOS configuration utility.

Step 1: Reboot the server and press Ctrl+H to open the server WebBIOS configuration utility.

Step 2: Select the adapter or RAID controller and then Virtual Drives.

Step 3: Select the virtual drive that you want to migrate. In this example, only one virtual drive is configured, so when you select Virtual Drives, the virtual drive is displayed automatically, as shown here.

Step 4: In the Virtual Drives pane, select Properties.

Step 5: In the Properties window, select Adv Opers in the Operations section; then click Go.

Step 6: In the Advanced Operations window, select the drive that needs to be migrated and select an option: Change RAID Level or Change RAID Level and Add Drive.

● If you select Change RAID Level, select the new RAID level from the drop-down list.

● If you select Change RAID Level and Add Drive, select the new RAID level from the drop-down list and select one or more drives to add from the list of drives.

The available RAID levels are limited, depending on the current RAID level of the virtual drive and the number of drives available. In this example, there is a single RAID 0 drive group; to migrate it to RAID 1, select Change RAID Level and Add Drive, select RAID 1 from the drop-down list, and then click Go.

Step 7: When the warning message appears, click Yes to confirm the process and proceed with the RAID-level migration of the virtual drive.

Step 8: A reconstruction operation begins on the virtual drive. Let it complete before performing any other tasks in the WebBIOS configuration utility.

Step 9: After the reconstruction is complete, you can view the changed RAID level in the Virtual Drives pane.

Using the LSI MegaCLI Utility

The LSI MegaCLI utility can be used to obtain information from the LSI RAID controller. MegaCLI can also be used to create RAID volumes, migrate the RAID level, and troubleshoot.

MegaCLI supports Microsoft Windows, Linux, VMware, Solaris, DOS, and FreeBSD. Download MegaCLI, unzip the file, and then choose the installation package for your operating system. For information about installation procedures, problem fixes, and supported RAID controllers, see the 8.0.4.07_MegaCLI.txt file, which is available after you unzip the MegaCLI file. Detailed installation procedures can be found in the readme file associated with each OS type.

To use the MegaCLI utility:

1. Download the utility from LSI at this link: MegaCLI Utility.

2. Install the utility on the server.

3. Change to the directory in which MegaCLI is installed.

Note that you must enter megacli before every command, as shown in Figure 12. The command in the figure shows the number of adapters (controllers) installed.

Figure 12. Using MegaCLI

General Parameters

Following are some general parameters for MegaCLI:

Adapter parameter: -aN

The parameter -aN, where N is a number starting with zero or the string ALL, specifies the adapter ID. If you have only one controller, you can safely use ALL instead of a specific ID, but you are encouraged to use the ID for any command that makes changes to your RAID configuration.

Physical drive parameter: -PhysDrv[E:S]

For commands that operate on one or more physical drives, the parameter -PhysDrv[E:S] is used, where E is the enclosure device ID in which the drive resides, and S is the slot number (starting with 0). The enclosure device ID can be determined by using the command MegaCli -EncInfo -aALL. The [E:S] syntax is also used to specify the physical drives when you create a new RAID virtual drive.

Virtual drive parameter: -Lx

The parameter -Lx is used to specify the virtual drive, where x is a number starting with 0 or the string ALL.
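Putting the three parameters together, a query can be aimed at one adapter, one physical drive, or one virtual drive. The minimal sketch below assembles two typical queries; the adapter, enclosure, slot, and virtual drive numbers are placeholders, not values taken from this document.

```shell
ADP=0      # adapter ID (-aN)
ENC=252    # enclosure device ID, from: MegaCli -EncInfo -aALL
SLOT=2     # slot number, starting with 0
LD=0       # virtual drive number (-Lx)

# Query one physical drive and one virtual drive on the chosen adapter.
PD_QUERY="MegaCli -PDInfo -PhysDrv[${ENC}:${SLOT}] -a${ADP}"
LD_QUERY="MegaCli -LDInfo -L${LD} -a${ADP}"

echo "$PD_QUERY"
echo "$LD_QUERY"
```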

Command Syntax

This section presents the syntax for common MegaCLI commands.

Commands to Get Information

● Controller information:

MegaCli -AdpAllInfo -aALL
MegaCli -CfgDsply -aALL
MegaCli -AdpEventLog -GetEvents -f lsi-events.log -a0 -NoLog

● Enclosure information:

MegaCli -EncInfo -aALL

● Virtual drive information:

MegaCli -LDInfo -Lall -aALL

● Physical drive information:

MegaCli -PDList -aALL
MegaCli -PDInfo -PhysDrv[E:S] -aALL
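When auditing a server, the information commands above are often run as a set. The sketch below only assembles the command list and writes section headers to a report file; on a live server you would execute each command and append its output, as the comment notes. The report filename is an arbitrary choice.

```shell
# Collect controller, enclosure, virtual-drive, and physical-drive
# inventory into one report. Assumes MegaCli is on the PATH.
REPORT=raid-inventory.txt
: > "$REPORT"
for CMD in \
    "MegaCli -AdpAllInfo -aALL" \
    "MegaCli -EncInfo -aALL" \
    "MegaCli -LDInfo -Lall -aALL" \
    "MegaCli -PDList -aALL"
do
    echo "== $CMD ==" >> "$REPORT"
    # On a live server, append the command's output: $CMD >> "$REPORT"
done
cat "$REPORT"
```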

Commands for Virtual Drive Management

● Create a RAID volume based on a single drive group (RAID 0, RAID 1, or RAID 5):

MegaCli -CfgLdAdd -r(#RAID level#) [E:S] -aN

● Create a RAID volume based on double drive groups (RAID 10 or RAID 50):

MegaCli -CfgSpanAdd -r(#RAID level#) -Array0[E:S,E:S] -Array1[E:S,E:S] -aN

Figure 13 shows the creation of a double-drive-group RAID volume.

Figure 13. MegaCLI Command to Create a RAID Volume Based on Double Drive Groups
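As a concrete sketch of the two creation forms, the commands below build a RAID 1 volume from one drive group and a RAID 10 volume from two drive groups. The enclosure ID (252) and slots 0 through 3 are hypothetical; the script only prints the command strings for review before they are run on a live controller.

```shell
ENC=252   # placeholder enclosure ID; check with: MegaCli -EncInfo -aALL

# Single drive group: RAID 1 mirror across slots 0 and 1 on adapter 0.
R1="MegaCli -CfgLdAdd -r1 [${ENC}:0,${ENC}:1] -a0"

# Double drive groups (two spans): RAID 10 across slots 0-3 on adapter 0.
R10="MegaCli -CfgSpanAdd -r10 -Array0[${ENC}:0,${ENC}:1] -Array1[${ENC}:2,${ENC}:3] -a0"

echo "$R1"
echo "$R10"
```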

Commands for RAID-Level Migration

● Migrate or upgrade the RAID level:

MegaCli -LDRecon {-Start -r(#RAID level#) [-Add/Rmv PhysDrv[E0:S0,E1:S1,...]]} -ShowProg -Lx -aN

-LDRecon: Reconstruct the virtual drive.
-Start: Start reconstruction of the selected (-Lx) virtual drive to a new RAID level.
-Add/Rmv PhysDrv: Add or remove a drive from the existing virtual drive.
-ShowProg: Display a snapshot of the ongoing reconstruction process.

The example in Figure 14 migrates virtual drive L1 from RAID 1 to RAID 5. Figure 15 shows the progress of the migration.

Figure 14. MegaCLI Command to Migrate RAID 1 to RAID 5

Figure 15. MegaCLI Command to Show Migration Progress
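The migration shown in Figures 14 and 15 can be sketched as two command strings: one to start reconstruction of virtual drive L1 from RAID 1 to RAID 5 while adding a drive, and one to poll the progress afterward. The enclosure ID and slot of the added drive are placeholders; the script only prints the commands.

```shell
# Placeholder location of the drive being added to the virtual drive.
ENC=252
SLOT=4

# Migrate virtual drive L1 from RAID 1 to RAID 5, adding one drive, on adapter 0.
MIGRATE="MegaCli -LDRecon -Start -r5 -Add -PhysDrv[${ENC}:${SLOT}] -L1 -a0"

# Check reconstruction progress afterward.
PROGRESS="MegaCli -LDRecon -ShowProg -L1 -a0"

echo "$MIGRATE"
echo "$PROGRESS"
```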

Conclusion

This document has described the supported RAID levels, how to create RAID virtual drives on Cisco UCS C220 and C240 M3 servers from Cisco IMC, and how to migrate virtual drives between RAID levels using WebBIOS and MegaCLI.

For More Information

Refer to the Cisco UCS Servers RAID Guide for more information: http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c/sw/raid/configuration/guide/RAID_GUIDE.html

Printed in USA

© 2014 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.

C07-732029-00 6/14
