
Performance White Paper

Large Unstructured Data Storage in a Small Datacenter Footprint: Cisco UCS C3160 and Red Hat Gluster Storage 500-TB Solution

Executive Summary

Today, companies face scenarios that require an IT architecture that can hold hundreds of terabytes of unstructured data. Storage-intensive enterprise workloads can encompass the following:

● Archiving and backup, including backup images and near-online (nearline) archives
● Rich media content storage and delivery, such as videos, images, and audio files
● Enterprise drop-box
● Cloud and business applications, including log files, and RFID and other machine-generated data
● Virtual and cloud infrastructure, such as virtual machine images
● Emerging workloads, such as co-resident applications

This document is intended to assist organizations that are seeking an ultra-dense, high-throughput solution that can store a large amount of unstructured data in a small amount of rack space. The paper provides testing results that showcase how the Cisco Unified Computing System™ (Cisco UCS®) C3160 Rack Server and Red Hat® Gluster Storage, along with Cisco Nexus® 9000 Series Switches, can be optimized to serve in these scenarios. It includes system setup information, testing methodology, and results for a 500-TB solution. Initial tests indicate that the Cisco and Red Hat solution is ultra-dense, scalable, and high-throughput, and that it can store a large amount of unstructured data while remaining easy to manage.

Solution Overview

Cisco Unified Computing System™ (Cisco UCS®) C3160 Rack Server and Red Hat Gluster Storage, combined with Cisco Nexus® 9000 Series Switches, provide a complete 500-TB solution for high-volume, unstructured data storage.

Cisco UCS C3160 Rack Server

The Cisco UCS C3160 Rack Server is a modular, high-density rack server ideal for service providers, enterprises, and industry-specific environments that require highly scalable computing with high-capacity local storage.

Designed for a new class of cloud-scale applications, it is simple to deploy and excellent for software-defined storage environments, unstructured data repositories, Microsoft Exchange, backup and archival, media streaming, and content distribution.

Based on the Intel® Xeon® processor E5-2600 v2 series, the server offers up to 360 TB of local storage in a compact four-rack-unit (4RU) form factor. And, the server helps organizations achieve the highest levels of data availability: its hard-disk drives are individually hot-swappable, and the server includes built-in enterprise-class Redundant Array of Independent Disks (RAID) support.


Unlike typical high-density rack servers that require extended-depth racks, the Cisco UCS C3160 fits comfortably in a standard-depth rack, such as the Cisco UCS R42610.

Cisco Nexus 9300

Application workloads that are deployed across a mix of virtualized and nonvirtualized server and storage infrastructure require a network infrastructure that provides consistent connectivity, security, and visibility across a range of bare-metal, virtualized, and cloud computing environments. The Cisco Nexus 9000 Series Switches provide a flexible, agile, low-cost, application-centric infrastructure (ACI) and include both modular and fixed-port switches that are designed to overcome the challenges of workloads that span a mix of virtualized and nonvirtualized server and storage infrastructure.

Cisco Nexus 9300 fixed-port switches are designed for top-of-rack (ToR) and middle-of-row (MoR) deployment in data centers that support enterprise applications, service provider hosting, and cloud computing environments. The switches are Layer 2 and Layer 3 nonblocking 10 and 40 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE)–capable switches with up to 2.56 Tbps of internal bandwidth.

These high-density, nonblocking, low-power-consuming switches can be used in ToR, MoR, and end-of-row (EoR) deployments in enterprise data centers, service provider facilities, and large virtualized and cloud computing environments.

Cisco Nexus 9300 offers industry-leading density and performance with flexible port configurations that can support existing copper and fiber cabling. With 1/10GBASE-T support, the switches can deliver 10 Gigabit Ethernet over existing copper cabling, which enables a low-cost upgrade from Cisco Catalyst® 6500 Series Switches when the switches are used in an MoR or EoR configuration.

Red Hat Gluster Storage

Red Hat Gluster Storage is open, software-defined scale-out storage that easily manages unstructured data for physical, virtual, and cloud environments. Combining both file and object storage with a scale-out architecture (see Figure 1), it is designed to cost-effectively store and manage petabyte-scale data growth. It also delivers a continuous storage fabric across physical, virtual, and cloud resources so organizations can transform their big, semi-structured, and unstructured data from a burden to an asset.


Figure 1. Red Hat Gluster Storage Scales Up and Out

Built on the industry-leading Red Hat Enterprise Linux operating system, Red Hat Gluster Storage offers cost-effective and highly available storage without scale or performance compromises. Organizations can use it to avoid storage silos by enabling global access to data through multiple file and object protocols. And, it works seamlessly with Cisco UCS C3160 servers.

Table 1. Red Hat Gluster Storage Features

Single global namespace: Aggregates disk and memory resources into a single trusted storage pool.

Replication: Supports synchronous replication within a data center and asynchronous replication for disaster recovery.

Snapshots: Helps assure data protection through cluster-wide filesystem snapshots that are user-accessible for easy recovery of files.

Elastic hashing algorithm: Eliminates performance bottlenecks and single points of failure because there is no metadata server layer.

Easy online management:
● Web-based management console
● Powerful and intuitive command-line interface for Linux administrators
● Monitoring (Nagios-based)
● Expand and shrink storage capacity without downtime

Industry-standard client support:
● Network File System (NFS) and Server Message Block (SMB) for file-based access
● OpenStack Swift for object access
● GlusterFS native client for highly parallelized access
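As a brief illustration of the snapshot feature listed above (these commands are not part of the tested procedure; the snapshot name snap1 is an arbitrary example, and gvol0 is the volume used later in this paper), a cluster-wide snapshot can be created and listed from any server in the trusted storage pool:

# gluster snapshot create snap1 gvol0

# gluster snapshot list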

System Specifications

This 500-TB solution includes Cisco UCS C3160 Rack Servers, Red Hat Gluster Storage, and Cisco Nexus 9300 switches. For the purpose of running the performance tests, the components of the solution were configured to the specifications described in this section. Figure 2 shows a diagram of the system.


Figure 2. System Diagram

Component Configuration

Table 2 describes how the components of the solution were configured for the tests.

Table 2. Component Configuration

Cisco UCS C3160 Rack Server: Four Cisco UCS C3160 servers, each configured with
● Fifty-six 6-TB 7200-rpm disks
● Two Intel Xeon processor E5-2695 v2 CPUs
● 256 GB of RAM (sixteen 16-GB DDR3 1866-MHz DIMMs)
● Two Cisco UCS C3160 system I/O controllers, each with a single adapter card slot
● Two Cisco UCS Virtual Interface Card (VIC) 1227 dual-port 10-Gbps Enhanced Small Form-Factor Pluggable (SFP+) adapters
● One Cisco 12-Gbps SAS RAID controller with 4-GB flash-backed write cache

Operating System: Red Hat Enterprise Linux 6.6

Gluster Software: Red Hat Gluster Storage 3.0.4

Connectivity: Two Cisco Nexus 9300 48-port 1/10-Gbps SFP+ switches with Nexus 9300 Base, Cisco NX-OS Release 7.0(3)


RAID Configuration

RAID configuration included the following:

● Each Cisco UCS C3160 server included four 14-disk RAID 6 arrays for the Gluster bricks, created using the LSI RAID controller graphical configuration utility.
● The RAID controller cache was set to “write back with battery-backed write cache.”
● Each RAID 6 array featured one 72-TB virtual disk, and each virtual disk had a capacity of 66 TB after formatting with XFS.
● StorCLI output results are shown in Appendix A.

Network Configuration

The network was configured as follows:

● Each Cisco UCS C3160 server includes two Cisco UCS VIC 1227 adapters.
● Each VIC 1227 adapter has two 10-Gbps ports, with one port connected to Nexus 9300 switch A and the other port connected to Nexus 9300 switch B. This means that each UCS C3160 has two 10-Gbps connections (one from each VIC 1227) to each Nexus 9300.
● The two Nexus 9300 switches have a 160-Gbps virtual port channel peer link configured between them, so that both switches can appear as a single logical switch.
● The four 10-Gbps interfaces from each C3160 are configured as a single 40 Gigabit Ethernet interface using a Link Aggregation Control Protocol (LACP) mode 4 (802.3ad) bond.
● Output from ifcfg-bond0 and from one of the Ethernet devices is shown in Appendix B; a sample member-interface configuration is also sketched below.
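Appendix B lists the bond0 master configuration and the resulting bonding attributes. The member-interface files are not reproduced in this paper; the following ifcfg-eth0 sketch shows what a typical slave configuration for this kind of 802.3ad bond looks like on Red Hat Enterprise Linux 6 (the device name eth0 is an assumption, not taken from the tested systems):

TYPE=Ethernet
NAME=eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no

Setting NM_CONTROLLED=no keeps NetworkManager from interfering with the bonded interfaces; addressing and the 9000-byte MTU are carried on bond0 itself (see Appendix B).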

Red Hat Gluster Storage Installation and Configuration

Installation and configuration of the Red Hat Gluster Storage comprised several steps:

1. Install the Red Hat Gluster Storage 3.0.4 ISO image on the Cisco UCS C3160 servers. This release is based on Red Hat Enterprise Linux 6.6 and GlusterFS 3.6. For step-by-step installation and configuration instructions for Red Hat Gluster Storage, visit https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/

2. Install Red Hat Enterprise Linux 7.1 on the client servers, following the instructions in the Installing Native Client section of the Red Hat Gluster Storage Administration Guide.

3. Create the storage bricks by running the rhs-server-init.sh script on all nodes. The script does the following (a minimal sketch of such a script appears after this list):

● Creates a physical volume.
● Creates a logical volume.
● Makes the XFS file system on the logical volume.
● Applies a tuned performance profile for Red Hat Gluster Storage virtualization.
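The rhs-server-init.sh script itself is not reproduced in this paper. The following minimal sketch shows the kind of commands such a brick-initialization script typically runs for one RAID 6 virtual disk; the device name /dev/sdb, the volume group, logical volume, and brick names, and the tuned profile name are illustrative assumptions rather than values taken from the tested configuration:

#!/bin/bash
# Hypothetical brick initialization for one RAID 6 virtual disk (/dev/sdb).
set -e

pvcreate /dev/sdb                             # physical volume on the RAID 6 virtual disk
vgcreate vg_brick1 /dev/sdb                   # volume group for the brick
lvcreate -l 100%FREE -n lv_brick1 vg_brick1   # one logical volume spanning the volume group

# XFS with 512-byte inodes is the usual recommendation for GlusterFS bricks.
mkfs.xfs -i size=512 /dev/vg_brick1/lv_brick1
mkdir -p /rhs/brick1
mount /dev/vg_brick1/lv_brick1 /rhs/brick1
echo "/dev/vg_brick1/lv_brick1 /rhs/brick1 xfs defaults 0 0" >> /etc/fstab

# Apply the tuned profile shipped with Red Hat Gluster Storage (profile name assumed).
tuned-adm profile rhs-virtualization

In the tested configuration, the script would repeat these steps for each of the four RAID 6 virtual disks on a server, producing the /rhs/brick1 through /rhs/brick4 mount points referenced in Appendix C.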


4. Make sure the glusterd daemon is running on each of the Red Hat Storage servers (gluster1, gluster2, gluster3, and gluster4), and then use the gluster peer probe command to create the trusted storage cluster from the gluster1 server:

# gluster peer probe gluster2

# gluster peer probe gluster3

# gluster peer probe gluster4

5. Confirm that all the storage servers are in a connected state using the gluster peer status command:

# gluster peer status

6. Create the distributed replicated (two-way) volume from the gluster1 server. (See Appendix C for the full volume creation script.)


7. Start the volume, and look at the status:

# gluster volume start gvol0

# gluster volume info

# gluster volume status


8. Confirm that all the storage servers are in a connected state.

9. Mount the GlusterFS volume gvol0 on client 1:

# mkdir /mnt/gvol0

# mount -t glusterfs gluster1:/gvol0 /mnt/gvol0

10. Repeat these commands on the other fifteen clients.

Note: Even though the same Red Hat Gluster Storage server is used for mounting the volumes, the Native Client protocol has built-in load balancing. The clients use the mount server only initially to retrieve the volume information; after that, they contact the individual storage servers directly to access the data. Data requests do not have to go through the mount server.
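These mounts do not persist across a reboot. If persistent mounts are wanted, a line such as the following could be added to /etc/fstab on each client (a conventional GlusterFS native-client entry, not part of the tested procedure):

gluster1:/gvol0 /mnt/gvol0 glusterfs defaults,_netdev 0 0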

Test Methodology and Results

The testing environment used industry-standard tools to measure sequential read/write performance, to benchmark small-file performance across the cluster, and to measure random I/O performance of the distributed storage.


Driver Setup

Eight Cisco UCS B200 M4 servers were used to drive I/O to the 500-TB Gluster cluster. Each server was equipped as follows:

● Two Intel Xeon processor E5-26XX CPUs
● 256 GB of RAM
● Cisco VIC 1240, capable of 40-Gbps throughput

The Cisco UCS 5100 Series chassis housing the B200 M4 servers includes two 8-port Cisco UCS 2208XP I/O modules connected to two Cisco UCS 6248UP fabric interconnects. The fabric interconnects have 16 uplinks to the Nexus 9300 switches.

To drive I/O, each UCS B200 M4 server also ran a hypervisor and two virtual machines, each with a dedicated 10 Gigabit Ethernet network interface. Each virtual machine ran Red Hat Enterprise Linux 7.1 and the I/O benchmark tools discussed below.

Sequential I/O Benchmarking (IOzone)

IOzone is a filesystem benchmark tool that is useful for performing a broad filesystem analysis of a computer platform. In this case, IOzone was used to test the sequential read/write performance of the GlusterFS replicated volume. IOzone’s cluster mode option (-+m) is particularly well suited for distributed storage testing because testers can start many worker threads from various client systems in parallel, targeting the GlusterFS volume. The 8 x 2 distributed replicated volume was tested using the following command with a total of 128 threads of execution across 16 client systems (eight threads per client):

# iozone -+m ${IOZONE_CONFIG_FILENAME} -i ${IOZONE_TEST} -C -w -+n -s ${IOZONE_FILESZ} -r ${IOZONE_RECORDSZ} -+z -c -e -t ${TEST_THREADS}

The following parameters were used in the IOzone command line:

● -+m specifies cluster testing mode.
● IOZONE_CONFIG_FILENAME is the IOzone configuration file for cluster mode. The file lists the client host names and the associated GlusterFS mount point (an example of the file format is sketched after this list).
● IOZONE_TEST was varied to cover the sequential read and sequential write test cases.
● IOZONE_FILESZ was 8 GB. 8-GB file transfers were used as a representative workload for the “Content Cloud” reference architecture testing.
● IOZONE_RECORDSZ was varied between 64 KB and 16 MB, using record sizes that were powers of two. This range of record sizes was meant to characterize the effect of record or block size (in client file requests) on I/O performance.
● TEST_THREADS specifies the number of threads or processes that are active during the measurement. 128 threads were used across 16 client systems (eight threads per client).
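The cluster-mode configuration file referenced by IOZONE_CONFIG_FILENAME is not reproduced in this paper. Each line of the file names a client host, the working directory on that client, and the path to the IOzone executable there; to run multiple threads on a client, the client’s line is typically repeated once per thread. A trimmed example for this setup might look like the following (the /usr/bin/iozone path is an assumption):

client1 /mnt/gvol0 /usr/bin/iozone
client1 /mnt/gvol0 /usr/bin/iozone
client2 /mnt/gvol0 /usr/bin/iozone
client2 /mnt/gvol0 /usr/bin/iozone
…
client16 /mnt/gvol0 /usr/bin/iozone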

Figure 3 shows up to 12 GB per second of sequential read throughput and up to 4 GB per second of replicated write throughput for typical sequential I/O sizes.


Figure 3. Sequential Throughput (IOzone)

Performance Benchmarking Across a Cluster (SmallFile)

The SmallFile benchmark is a Python-based, distributed POSIX workload generator that can be used to measure performance for a variety of small-file and file metadata-intensive workloads across an entire cluster. It was used to complement the IOzone benchmark, which focuses on large-file workloads.

Although the SmallFile benchmark kit supports multiple operations, only the create operation was used, creating files and writing data to them with file sizes ranging from 10 KB to 2 GB.

SmallFile benchmark parameters included the following:

● The create operation, to create a file and write data to it
● Eight threads on each of the 16 clients (client1 through client16)
● A file size of 10 KB, with each thread processing 100,000 files
● A 10-microsecond pause for each thread before starting the next file operation; the response time is the file operation duration, measured to microsecond resolution
● For files larger than 1 MB, a record size of 1024 KB to determine how much data is transferred in a single write system call

The SmallFile command line is as follows:

# python /root/smallfile_cli.py --operation create --threads 8 --file-size 10 --files 100000 --top $path --network-sync-dir /mnt/lockvol/smf-shared --pause 10 --host-set client1,client2 … client16 --response-times Y

The benchmark returns the number of files processed per second and the rate that the application transferred data in megabytes per second.

SmallFile test results show up to 4,300 tiny files per second being created simultaneously and up to 3.3 GBytes/sec of write throughput for the larger files (Figure 4).


Figure 4. File CREATE Operations in the SmallFile Tool

Random I/O Performance (FIO)

Flexible I/O (FIO) is an industry-standard storage performance benchmark that has been used primarily for single-server testing; however, with the recent addition of client/server options, it can also be used in a distributed storage environment. The front end and back end of FIO can be run separately, so the FIO server generates an I/O workload on the system under test while being controlled from another system. The --server option launches FIO in a request-listening mode, and the --client option passes a list of test servers along with the FIO profile that defines the workload to be generated.

FIO was used to determine the random I/O performance using smaller block sizes (ranging from 4 KB to 32 KB).

The FIO profiles used on each client driver for the random tests are listed in Appendix D.
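As a concrete illustration of this client/server split (commands assumed for illustration, not captured from the test runs), the listener is started on each client virtual machine and the workload is then pushed from a controller node using one of the profiles in Appendix D.

On each client virtual machine:

# fio --server

From the controller node, driving two of the clients with the random-read profile:

# fio --client=client1 --client=client2 ReadRand.FIO

The --client option can be repeated once per driver, or it can point to a file that lists the client host names, one per line.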

The cluster accommodated about 25,000 random read IOPS at typical sizes with a latency of about 10 ms, as shown in Figure 5.


Figure 5. FIO Random Read Tests

The solution achieves about 9500 random write IOPS at typical sizes, as shown in Figure 6. Latency is less than 7 ms using the write cache on the array controllers on the C3160.

Figure 6. FIO Random Write Tests


Conclusion

Companies facing scenarios that require support for hundreds of terabytes of unstructured data can use the Cisco and Red Hat Gluster Storage 500-TB solution to meet these demands with scalable, high-throughput storage that remains easy to manage. Test results using FIO, SmallFile, and IOzone show remarkable performance and indicate that this solution is capable of handling storage-intensive enterprise workloads in a dense datacenter footprint.

For More Information

● Cisco UCS® C3160 Rack Server
● Cisco Nexus® 9300 switches
● Red Hat Gluster Storage
● IOzone test


Appendix A: StorCLI Output

[root@gluster1 ~]# /opt/MegaRAID/storcli/storcli64 /c0 show

Controller = 0

Status = Success

Description = None

Product Name = RAID controller for UCS C3X60 Storage Servers

Serial Number = FCH18407MU9

SAS Address = 5a0ecf9858830000

PCI Address = 00:14:00:00

System Time = 06/05/2015 12:43:25

Mfg. Date = 11/19/14

Controller Time = 06/05/2015 19:43:46

FW Package Build = 24.5.0-0023

BIOS Version = 6.19.05.0_4.16.08.00_0x06080500

FW Version = 4.250.00-3675

Driver Name = megaraid_sas

Driver Version = 06.803.01.00-rh1

Vendor Id = 0x1000

Device Id = 0x5D

SubVendor Id = 0x1137

SubDevice Id = 0x12D

Host Interface = PCIE

Device Interface = SAS-12G

Bus Number = 20

Device Number = 0

Function Number = 0

Drive Groups = 4


TOPOLOGY :

========

-------------------------------------------------------------------------

DG Arr Row EID:Slot DID Type State BT Size PDC PI SED DS3 FSpace

-------------------------------------------------------------------------

0 - - - - RAID6 Optl N 65.483 TB enbl N N dflt N

0 0 - - - RAID6 Optl N 65.483 TB enbl N N dflt N

0 0 0 8:1 14 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 1 8:2 23 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 2 8:3 26 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 3 8:4 16 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 4 8:5 24 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 5 8:6 20 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 6 8:7 28 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 7 8:8 15 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 8 8:9 11 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 9 8:10 22 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 10 8:11 18 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 11 8:12 19 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 12 8:13 17 DRIVE Onln N 5.456 TB enbl N N dflt -

0 0 13 8:14 10 DRIVE Onln N 5.456 TB enbl N N dflt -

1 - - - - RAID6 Optl N 65.483 TB enbl N N dflt N

1 0 - - - RAID6 Optl N 65.483 TB enbl N N dflt N

1 0 0 8:15 13 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 1 8:16 27 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 2 8:17 35 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 3 8:18 21 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 4 8:19 47 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 5 8:20 41 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 6 8:21 25 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 7 8:22 12 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 8 8:23 29 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 9 8:24 31 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 10 8:25 39 DRIVE Onln N 5.456 TB enbl N N dflt -


1 0 11 8:26 32 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 12 8:27 30 DRIVE Onln N 5.456 TB enbl N N dflt -

1 0 13 8:28 43 DRIVE Onln N 5.456 TB enbl N N dflt -

2 - - - - RAID6 Optl N 65.483 TB enbl N N dflt N

2 0 - - - RAID6 Optl N 65.483 TB enbl N N dflt N

2 0 0 8:29 50 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 1 8:30 33 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 2 8:31 36 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 3 8:32 52 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 4 8:33 49 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 5 8:34 42 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 6 8:35 37 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 7 8:36 54 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 8 8:37 51 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 9 8:38 55 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 10 8:39 59 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 11 8:40 40 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 12 8:41 46 DRIVE Onln N 5.456 TB enbl N N dflt -

2 0 13 8:42 45 DRIVE Onln N 5.456 TB enbl N N dflt -

3 - - - - RAID6 Optl N 65.483 TB enbl N N dflt N

3 0 - - - RAID6 Optl N 65.483 TB enbl N N dflt N

3 0 0 8:43 44 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 1 8:44 34 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 2 8:45 60 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 3 8:46 58 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 4 8:47 38 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 5 8:48 61 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 6 8:49 53 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 7 8:50 65 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 8 8:51 63 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 9 8:52 56 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 10 8:53 57 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 11 8:54 62 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 12 8:55 48 DRIVE Onln N 5.456 TB enbl N N dflt -

3 0 13 8:56 64 DRIVE Onln N 5.456 TB enbl N N dflt -


-------------------------------------------------------------------------

DG=Disk Group Index|Arr=Array Index|Row=Row Index|EID=Enclosure Device ID

DID=Device ID|Type=Drive Type|Onln=Online|Rbld=Rebuild|Dgrd=Degraded

Pdgd=Partially degraded|Offln=Offline|BT=Background Task Active

PDC=PD Cache|PI=Protection Info|SED=Self Encrypting Drive|Frgn=Foreign

DS3=Dimmer Switch 3|dflt=Default|Msng=Missing|FSpace=Free Space Present

Virtual Drives = 4

VD LIST :

=======

----------------------------------------------------------

DG/VD TYPE State Access Consist Cache sCC Size Name

----------------------------------------------------------

0/0 RAID6 Optl RW Yes RWBD - 65.483 TB

1/1 RAID6 Optl RW Yes RWBD - 65.483 TB

2/2 RAID6 Optl RW Yes RWBD - 65.483 TB

3/3 RAID6 Optl RW Yes RWBD - 65.483 TB

----------------------------------------------------------

Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded

Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|B=Blocked|Consist=Consistent|

R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|

AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled

Check Consistency

Physical Drives = 56

PD LIST :

=======

-----------------------------------------------------------------------

EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp


-----------------------------------------------------------------------

8:1 14 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:2 23 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:3 26 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:4 16 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:5 24 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:6 20 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:7 28 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:8 15 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:9 11 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:10 22 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:11 18 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:12 19 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:13 17 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:14 10 Onln 0 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:15 13 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:16 27 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:17 35 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:18 21 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:19 47 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:20 41 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:21 25 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:22 12 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:23 29 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:24 31 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:25 39 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:26 32 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:27 30 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:28 43 Onln 1 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:29 50 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:30 33 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:31 36 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:32 52 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:33 49 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:34 42 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U


8:35 37 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:36 54 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:37 51 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:38 55 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:39 59 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:40 40 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:41 46 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:42 45 Onln 2 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:43 44 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:44 34 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:45 60 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:46 58 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:47 38 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:48 61 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:49 53 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:50 65 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:51 63 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:52 56 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:53 57 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:54 62 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:55 48 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

8:56 64 Onln 3 5.456 TB SAS HDD N N 4 KB ST6000NM0014 U

-----------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup

DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare

UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface

Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info

SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign

UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded

CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded

Cachevault_Info :

===============


------------------------------------

Model State Temp Mode MfgDate

------------------------------------

CVPM03 Optimal 33C - 2014/08/13

Appendix B: Output from ifcfg-bond0 and One of the Ethernet Devices

[root@gluster1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0

TYPE=Bond

BONDING_MASTER=yes

IPV6INIT=no

NAME=bond0

ONBOOT=yes

DEVICE=bond0

IPADDR=192.168.1.15

NETMASK=255.255.255.0

MTU=9000

BOOTPROTO=none

BONDING_OPTS="mode=4 lacp_rate=1 xmit_hash_policy=layer3+4"

Output from:

/sys/class/net/bond0/bonding/mode: 802.3ad 4

/sys/class/net/bond0/bonding/xmit_hash_policy: layer3+4 1

/sys/class/net/bond0/bonding/slaves: eth0 eth1 eth2 eth3

/sys/class/net/bond0/bonding/lacp_rate: fast 1
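In addition to the sysfs attributes above, the state of the aggregate and of each member port can be confirmed at runtime with the kernel's bonding status file (a standard Linux check, not shown in the original output):

# cat /proc/net/bonding/bond0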

Appendix C: Gluster Volume Creation Script

[root@gluster1 ~]# cat gluster-volume-create.sh

gluster volume create gv0 replica 2 \
  gluster1:/rhs/brick1/gv0 gluster2:/rhs/brick1/gv0 gluster3:/rhs/brick1/gv0 gluster4:/rhs/brick1/gv0 \
  gluster1:/rhs/brick2/gv0 gluster2:/rhs/brick2/gv0 gluster3:/rhs/brick2/gv0 gluster4:/rhs/brick2/gv0 \
  gluster1:/rhs/brick3/gv0 gluster2:/rhs/brick3/gv0 gluster3:/rhs/brick3/gv0 gluster4:/rhs/brick3/gv0 \
  gluster1:/rhs/brick4/gv0 gluster2:/rhs/brick4/gv0 gluster3:/rhs/brick4/gv0 gluster4:/rhs/brick4/gv0


Appendix D: FIO Profiles Used on Each Client Driver for the Random Tests

WriteRand.FIO

[global]
numjobs=2
iodepth=2
ioengine=libaio
direct=1
directory=/fio/
group_reporting
fsync_on_close=1
runtime=300

[test]
name=randwrite
rw=randwrite
bs=[4k|8k|16k|32k]
size=8192m

ReadRand.FIO

[global]
numjobs=8
iodepth=2
ioengine=libaio
direct=1
directory=/fio/
group_reporting
runtime=300

[test]
name=randread
rw=randread
bs=[4k|8k|16k|32k]
size=8192m


Printed in USA

About Red Hat

Red Hat is the world's leading provider of open source software solutions, using a community-powered approach to reliable and high-performing cloud, Linux, middleware, storage, and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As a connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT.

© 2015 Cisco and/or its affiliates. All rights reserved. Cisco and the Cisco logo, Catalyst, Cisco Nexus, Cisco Unified Computing System, and Cisco UCS are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, see the Trademarks page on the Cisco website. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company.

C11-734975-01 08/15
