Dell EMC Elastic Cloud Storage (ECS)
D- and U-Series
Hardware Guide
302-003-477
09
Copyright © 2014-2018 Dell Inc. or its subsidiaries. All rights reserved.
Published April 2018
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381 www.DellEMC.com
2 D- and U-Series Hardware Guide
CONTENTS

Figures                                                                                            5
Tables                                                                                             7

Chapter 1   Hardware Components and Configurations                                                 9
    ECS Appliance hardware components.........................................................10
        U-Series components.....................................................................10
        D-Series components..................................................................... 12
        C-Series components.....................................................................13
    U-Series Appliance (Gen2) configurations and upgrade paths.................... 15
    U-Series Appliance (Gen1) configurations and upgrade paths.....................18
    D-Series Appliance configurations and upgrade paths.................................19
    C-Series Appliance (Gen2) configurations and upgrade paths....................20
    C-Series Appliance (Gen1) configurations and upgrade paths.................... 22
    Certified hardware in support of ECS 3.2................................................... 23

Chapter 2   Servers                                                                               25
    Server front views......................................................................... 27
    Server rear view............................................................................ 28
    Rack and node host names......................................................................... 30

Chapter 3   Switches                                                                              33
    Private switch: Cisco 3048 48-P................................................... 35
    Private switch: Arista 7010T-48.....................................................36
    Private switch: Arista 7048T-48.................................................... 37
    Public switch: Arista 7050SX-64................................................... 38
    Public switch: Arista 7050S-52..................................................... 39
    Public switch: Arista 7150S-24...................................................... 40
    Public switch: Arista 7124SX..........................................................42

Chapter 4   Disk Drives                                                                           45
    Pikes Peak (dense storage)........................................................... 47
    Voyager DAE................................................................................. 56

Chapter 5   Third Party Rack Requirements                                                         67
    Third-party rack requirements....................................................................68

Chapter 6   Power Cabling                                                                         71
    U-Series single-phase AC power cabling ....................................................72
    U-Series three-phase AC power cabling..................................................... 74
    D-Series single-phase AC power cabling ....................................................77
    D-Series three-phase AC power cabling..................................................... 79
    C-Series single-phase AC power cabling ................................................... 83
    C-Series 3-phase AC power cabling .......................................................... 84

Chapter 7   SAS Cabling                                                                           89

Chapter 8   Network Cabling                                                                       95
    Connecting ECS appliances in a single site ................................................ 96
FIGURES

Phoenix-16 (Gen1) and Rinjin-16 (Gen2) server chassis front view ............................ 27
Phoenix-12 (Gen1) and Rinjin-12 (Gen2) server chassis front view ............................ 27
Pikes Peak chassis with I/O module and power supplies removed, sleds extended.....49
U-Series disk layout for 15-disk configurations (Gen1, Gen2 full-rack only)................58
U-Series disk layout for 30-disk configurations (Gen1, Gen2).................................... 59
U-Series disk layout for 45-disk configurations (Gen1, Gen2 full-rack)...................... 60
U-Series single-phase AC power cabling for eight-node configurations .....................73
Three-phase AC delta power cabling for eight-node configuration............................. 75
Cable legend for three-phase WYE AC power diagram............................................... 76
Three-phase WYE AC power cabling for eight-node configuration............................. 77
D-Series single-phase AC power cabling for eight-node configurations .....................78
Three-phase AC delta power cabling for eight-node configuration.............................80
Three-phase WYE AC power cabling for eight-node configuration............................. 82
C-Series single-phase AC power cabling for eight-node configurations: Top .............83
C-Series single-phase AC power cabling for eight-node configurations: Bottom ....... 84
C-Series 3-phase AC power cabling for eight-node configurations: Top ....................85
C-Series 3-phase AC power cabling for eight-node configurations: Bottom .............. 86
U-Series (Gen2) SAS cabling for eight-node configurations....................................... 91
U-Series (Gen1) SAS cabling for eight-node configurations....................................... 93
C-Series public switch cabling for the lower segment from the rear......................... 104
C-Series public switch cabling for the upper segment from the rear......................... 107
C-Series private switch cabling for the lower segment from the rear........................109
C-Series private switch cabling for the upper segment from the rear........................ 112
TABLES

7050SX-64 switch port connections used on the top 10 GbE switch (hare) .............. 38
7050SX-64 switch port connections used on the bottom 10 GbE switch (rabbit) ..... 38
7050S-52 switch port connections used on the top 10 GbE switch (hare) ................ 39
7050S-52 switch port connections used on the bottom 10 GbE switch (rabbit) ........40
7150S switch port connections used on the top 10 GbE switch (hare) ....................... 41
7150S switch port connections used on the bottom 10 GbE switch (rabbit) ...............41
7124SX switch port connections used on the top 10 GbE switch (hare) .................... 42
7124SX switch port connections used on the bottom 10 GbE switch (rabbit) ............ 42
U- and D-Series 10 GB public switch network cabling for all Arista models............... 100
U- and D-Series 10 GB public switch MLAG cabling for all Arista models...................101
U- and D-Series 1 GB private switch management and interconnect cabling.............103
CHAPTER 1
Hardware Components and Configurations
ECS Appliance hardware components................................................................ 10
U-Series Appliance (Gen2) configurations and upgrade paths............................ 15
U-Series Appliance (Gen1) configurations and upgrade paths............................ 18
D-Series Appliance configurations and upgrade paths........................................ 19
C-Series Appliance (Gen2) configurations and upgrade paths........................... 20
C-Series Appliance (Gen1) configurations and upgrade paths............................22
Certified hardware in support of ECS 3.2...........................................................23
ECS Appliance hardware components
Describes the hardware components that make up ECS Appliance hardware models.
ECS Appliance series
The ECS Appliance series includes:
D-Series: A dense object storage solution with servers and separate disk array enclosures (DAEs).
U-Series: A commodity object storage solution with servers and separate DAEs.
C-Series: A dense compute and storage solution of servers with integrated disks.
Hardware generations
ECS appliances are characterized by hardware generation:
U-Series Gen2 models featuring 12 TB disks became available in March 2018.
The D-Series was introduced in October 2016 featuring 8 TB disks. D-Series models featuring 10 TB disks became available March 2017.
The original U-Series appliance (Gen1) was replaced in October 2015 with second generation hardware (Gen2).
The original C-Series appliance (Gen1) was replaced in February 2016 with second generation hardware (Gen2).
Statements made about a series in this document apply to all generations except where noted.
U-Series components
The U-Series ECS Appliance includes the following hardware components.
Table 1 U-Series hardware components

40U rack: Titan D racks that include:
- Single-phase PDUs with four power drops (two per side). The high availability (HA) configuration of four power drops is mandatory; any deviation requires that an RPQ be submitted and approved.
- Optional three-phase WYE or delta PDUs with two power drops (one per side)
- Front and rear doors
- Racking by Dell EMC manufacturing

Private switch: One 1 GbE switch

Public switch: Two 10 GbE switches

Nodes: Intel-based unstructured servers in four- and eight-node configurations. Each server chassis contains four nodes (blades). Gen2 also has the option of five- and six-node configurations.

Disk array enclosure (DAE): The U-Series DAE drawers hold up to 60 3.5-inch disk drives. Features include:
- Gen1 hardware uses 6 TB disks; Gen2 hardware uses 8 TB and 12 TB disks
- Two 4-lane 6 Gb/s SAS connectors
- SAS bandwidth of 3500 MB/s
- Drive service: hot swappable
Figure 1 U-Series minimum and maximum configurations
Note
For more robust data protection, a five-node configuration is the recommended minimum.
D-Series components
The D-Series ECS Appliance includes the following hardware components.
Table 2 D-Series hardware components

40U rack: Titan D racks that include:
- Single-phase PDUs with six power drops (three per side). The high availability (HA) configuration of six power drops is mandatory; any deviation requires that an RPQ be submitted and approved.
- Optional three-phase WYE or delta PDUs with two power drops (one per side)
- Front and rear doors
- Racking by Dell EMC manufacturing

Private switch: One 1 GbE switch

Public switch: Two 10 GbE switches

Nodes: Intel-based unstructured servers in eight-node configurations. Each server chassis contains four nodes.

Disk array enclosure (DAE): The D-Series DAE drawers hold up to 98 3.5-inch disk drives. Features include:
- Models featuring 8 TB disks and models featuring 10 TB disks
- Two 4-lane 12 Gb/s SAS 3.0 connectors
- SAS bandwidth of 5600 MB/s
- Drive service: cold service

Service tray: 50-lb capacity service tray
Figure 2 D-Series minimum and maximum configurations
Note
These rack configurations are available with either 8 TB or 10 TB disks.
C-Series components
The C-Series ECS Appliance includes the following hardware components.
Table 3 C-Series hardware components

40U rack: Titan D Compute racks that include:
- Two single-phase PDUs in a 2U configuration with two power drops. The high availability (HA) configuration of two power drops is mandatory; any deviation requires that an RPQ be submitted and approved.
- Optional two three-phase WYE or delta PDUs in a 2U configuration with two power drops
- Front and rear doors
- Racking by Dell EMC manufacturing

Private switch: One or two 1 GbE switches. The second switch is required for configurations with more than six servers.

Public switch: Two or four 10 GbE switches. The third and fourth switches are required for configurations with more than six servers.

Nodes: Intel-based unstructured servers in 8- through 48-node configurations. Each server chassis contains four nodes (blades).

Disks: The C-Series has 12 3.5-inch disk drives integrated with each server. Gen1 hardware uses 6 TB disks. Gen2 hardware uses 8 TB disks.

Service tray: 50-lb capacity service tray
Figure 3 C-Series minimum and maximum configurations
U-Series Appliance (Gen2) configurations and upgrade paths
Describes the second generation U-Series ECS Appliance configurations and the upgrade paths between the configurations. The Gen2 hardware became generally available in October 2015.
U-Series configurations (Gen2)
The U-Series Appliance is a commodity object storage solution.
Table 4 U-Series (Gen2) configurations

All configurations use one private and two public switches.

Model number                   Nodes  DAEs  Disks in each DAE  Capacity (8 TB disks)  Capacity (12 TB disks)
U400 (minimum configuration)   4      4     10                 320 TB                 480 TB
U400-E                         5      5     10                 400 TB                 600 TB
U480-E                         6      6     10                 480 TB                 720 TB
U400-T                         8      8     10                 640 TB                 960 TB
U2000                          8      8     30                 1.92 PB                2.88 PB
U2800                          8      8     45                 2.88 PB                4.32 PB
U4000 (maximum configuration)  8      8     60                 3.84 PB                5.76 PB
Note
Five-node configurations are the smallest configurations that can tolerate a node failure and still maintain the EC protection scheme. A four-node configuration that suffers a node failure changes to a simple data mirroring protection scheme. Five-node configurations are the recommended minimum configuration.
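The capacities in Table 4 are simple geometry: number of DAEs, times disks per DAE, times disk size. A minimal sketch of the arithmetic (the function name is ours, not part of any ECS tooling):

```python
def raw_capacity_tb(daes: int, disks_per_dae: int, disk_size_tb: int) -> int:
    """Raw capacity in TB: number of DAEs x disks per DAE x disk size."""
    return daes * disks_per_dae * disk_size_tb

# U400 minimum configuration: 4 DAEs with 10 disks each
print(raw_capacity_tb(4, 10, 8))    # 320 (TB, with 8 TB disks)
print(raw_capacity_tb(4, 10, 12))   # 480 (TB, with 12 TB disks)

# U4000 maximum configuration: 8 DAEs with 60 disks each
print(raw_capacity_tb(8, 60, 12))   # 5760 (TB, that is, 5.76 PB)
```

The same formula reproduces every row of the table, which is a quick sanity check when planning an upgrade to an unnamed disk level.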
U-Series (Gen2) upgrade paths
U-Series Gen2 upgrades can be applied flexibly to eligible configurations. Multiple upgrades can be applied in one service call.
Upgrade rules for an appliance with all 8 TB or all 12 TB disks (the upgrade will not include mixed disk capacities in the rack):
- The minimum number of disks in a DAE is 10.
- Disk Upgrade Kits are available in 5- or 10-disk increments.
- All DAEs in the appliance must have the same number of disks, in increments of 5 (10, 15, 20, and so on, up to 60), with no empty slots between disks.
- Upgrades are flexible, meaning you can upgrade to any disk level even if that level does not correspond to a named model. For example, you can upgrade the original appliance to have 35 disks per DAE even though this configuration does not have an official label like the U2000 (30 disks per DAE) or the U2800 (45 disks per DAE).
- To upgrade a half-rack configuration to a full-rack configuration, order the upgrade kit containing one server chassis (four nodes) and four DAEs with 10, 20, 30, 45, or 60 disks. To achieve any configuration in between, add 5- or 10-disk upgrade kits in the quantity required to match the disk quantities per DAE in nodes 1-4.
- All empty drive slots must be filled with a disk filler.
- The best practice is to have only one storage pool in a VDC, unless you have more than one storage use case at the site. In a site with a single storage pool, each DAE in each rack must have the same number of disks.
Upgrade rules for systems with either four nodes/DAEs of 8 TB disks or four nodes/DAEs of 12 TB disks (the upgrade will include mixed disk capacities in the rack):
- The minimum number of disks in a DAE is 10.
- Disk Upgrade Kits are available in 5- or 10-disk increments.
- Disk capacities cannot be mixed within a DAE.
- Each four-node/DAE group must have the same disk capacity and the same number of disks, in increments of 5 (10, 15, 20, and so on, up to 60), with no empty slots between disks in a DAE. Example: nodes 1-4 have 30 8 TB disks in each DAE, and nodes 5-8 have 20 12 TB disks in each DAE.
- All empty drive slots must be filled with a disk filler.
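The per-DAE disk-count rules above can be expressed as a small check. A sketch for a single-capacity rack (the function and list layout are illustrative, not ECS software):

```python
def valid_dae_disk_counts(disk_counts):
    """U-Series Gen2 rule for a single-capacity rack: every DAE holds
    the same number of disks, in increments of 5, from 10 up to 60."""
    if len(set(disk_counts)) != 1:   # all DAEs must hold the same count
        return False
    n = disk_counts[0]
    return 10 <= n <= 60 and n % 5 == 0

print(valid_dae_disk_counts([35, 35, 35, 35]))  # True: valid even without a model name
print(valid_dae_disk_counts([30, 30, 30, 45]))  # False: DAE counts differ
print(valid_dae_disk_counts([8, 8, 8, 8]))      # False: below the 10-disk minimum
```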
Table 5 U-Series (Gen2) disk upgrades

5-Disk Upgrade: Used to supplement other disk upgrade kits to make up a valid configuration.

10-Disk Upgrade: Used to supplement other disk upgrade kits to make up a valid configuration.

40-Disk Upgrade:
- Add 10 disks to each DAE in a four-node configuration.
- Add 5 disks to each DAE in an eight-node configuration.
- Populate a new DAE in a configuration with 40-disk DAEs.

60-Disk Upgrade:
- Add 10 disks to each DAE in a six-node configuration.
- Populate a new DAE in a configuration with 60-disk DAEs.
Table 6 U-Series (Gen2) node upgrades

Current nodes: Four (all four nodes have 12 TB disks)
- Kit for upgrade to 5 nodes: One server chassis with one node and three fillers, plus one DAE with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.
- Kit for upgrade to 6 nodes: One server chassis with two nodes and two fillers, plus two DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.
- Kit for upgrade to 8 nodes: One server chassis with four nodes, plus four DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.

Current nodes: Four (all four nodes have 8 TB disks)
- Kit for upgrade to 5 nodes: One server chassis with one node and three fillers, plus one DAE with the same number of disks as one of the current DAEs. Disks must be Gen2 8 TB.
- Kit for upgrade to 6 nodes: One server chassis with two nodes and two fillers, plus two DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 8 TB.
- Kit for upgrade to 8 nodes: One server chassis with four nodes. With 8 TB expansion disks: four DAEs being added with the same number of disks as one of the current DAEs. With 12 TB expansion disks: four DAEs being added with the same number of disks.

Current nodes: Five (all five nodes have either all 8 TB disks or all 12 TB disks)
- Kit for upgrade to 5 nodes: Not applicable.
- Kit for upgrade to 6 nodes: One node, plus one DAE with the same number of disks as one of the current DAEs. You cannot intermix 8 TB and 12 TB disks.
- Kit for upgrade to 8 nodes: Three nodes, plus three DAEs with the same number of disks as one of the current DAEs. You cannot intermix 8 TB and 12 TB disks.

Current nodes: Six (all six nodes have either all 8 TB disks or all 12 TB disks)
- Kit for upgrade to 5 nodes: Not applicable.
- Kit for upgrade to 6 nodes: Not applicable.
- Kit for upgrade to 8 nodes: Two nodes, plus two DAEs with the same number of disks as one of the current DAEs. You cannot intermix 8 TB and 12 TB disks.
Note
Seven-node configurations are not supported.
When you are planning to increase the number of drives in the DAEs and add nodes to the appliance, order the disks first. Then order the node upgrades. The new DAEs are shipped with the correct number of disks preinstalled.
U-Series Appliance (Gen1) configurations and upgrade paths
Describes the first generation ECS Appliance configurations and the upgrade paths between the configurations. Gen1 hardware became generally available in June 2014.
U-Series configurations (Gen1)
The U-Series Appliance is a dense storage solution using commodity hardware.
Table 7 U-Series (Gen1) configurations

All configurations use one private and two public switches.

Model number                   Nodes  DAEs  Disks in DAEs 1 to 4  Disks in DAEs 5 to 8  Storage capacity
U300 (minimum configuration)   4      4     15                    Not applicable        360 TB
U700                           4      4     30                    Not applicable        720 TB
U1100                          4      4     45                    Not applicable        1080 TB
U1500                          4      4     60                    Not applicable        1440 TB
U1800                          8      8     60                    15                    1800 TB
U2100                          8      8     60                    30                    2160 TB
U2500                          8      8     60                    45                    2520 TB
U3000 (maximum configuration)  8      8     60                    60                    2880 TB
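Gen1 capacities follow from the two groups of four DAEs and the 6 TB disk size. A quick illustrative check (the function name and defaults are ours, not Dell's):

```python
def gen1_capacity_tb(disks_dae_1_to_4, disks_dae_5_to_8=0):
    """U-Series Gen1 raw capacity: two groups of four DAEs, 6 TB disks.
    Half-rack models (U300 through U1500) leave DAEs 5-8 empty."""
    return (4 * disks_dae_1_to_4 + 4 * disks_dae_5_to_8) * 6

print(gen1_capacity_tb(15))       # 360 (TB, U300)
print(gen1_capacity_tb(60, 15))   # 1800 (TB, U1800)
print(gen1_capacity_tb(60, 60))   # 2880 (TB, U3000)
```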
U-Series (Gen1) upgrade paths
U-Series upgrades consist of the disks and infrastructure hardware that is needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call.
Table 8 U-Series (Gen1) upgrades

Each row lists the upgrade required to reach that model from the next lower model.

Model number                   Disk upgrade     Hardware upgrade
U300 (minimum configuration)   Not applicable   Not applicable
U700                           One 60-disk kit  Not applicable
U1100                          One 60-disk kit  Not applicable
U1500                          One 60-disk kit  Not applicable
U1800                          One 60-disk kit  One server chassis (four nodes) and four DAEs
U2100                          One 60-disk kit  Not applicable
U2500                          One 60-disk kit  Not applicable
U3000 (maximum configuration)  One 60-disk kit  Not applicable
D-Series Appliance configurations and upgrade paths
Describes the D-Series ECS Appliance configurations and the upgrade paths. The D-Series hardware became generally available in October 2016. 10 TB models became available in March 2017.
D-Series configurations
The D-Series Appliance is a dense object storage solution using commodity hardware.
Table 9 D-Series configurations

All configurations use one private and two public switches.

Model number  Nodes  DAEs  Disks in each DAE  Disk size  Storage capacity
D4500         8      8     70                 8 TB       4.5 PB
D5600         8      8     70                 10 TB      5.6 PB
D6200         8      8     98                 8 TB       6.2 PB
D7800         8      8     98                 10 TB      7.8 PB
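The D-Series capacities are the same arithmetic with larger DAEs; the PB figures in Table 9 are approximations of the raw products. An illustrative check (the dictionary layout is ours, not part of any ECS tooling):

```python
# D-Series raw capacity: 8 DAEs x disks per DAE x disk size (TB).
models = {
    "D4500": (70, 8),    # (disks per DAE, disk size in TB)
    "D5600": (70, 10),
    "D6200": (98, 8),
    "D7800": (98, 10),
}
for name, (disks_per_dae, disk_tb) in models.items():
    print(name, 8 * disks_per_dae * disk_tb, "TB raw")
# D4500 yields 4480 TB (about 4.5 PB); D7800 yields 7840 TB (about 7.8 PB)
```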
D-Series upgrade paths
The D-Series Appliances can be upgraded as shown in the table.
Table 10 D-Series upgrades

224 8 TB Disk Upgrade Kit (upgrades D4500 to D6200): 224 disks, 16 sleds. Adds 2 sleds of 14 disks each (28 disks total) to each DAE.

224 10 TB Disk Upgrade Kit (upgrades D5600 to D7800): 224 disks, 16 sleds. Adds 2 sleds of 14 disks each (28 disks total) to each DAE.
C-Series Appliance (Gen2) configurations and upgrade paths
Describes the second generation C-Series ECS Appliance configurations and the upgrade paths between the configurations. Gen2 hardware became generally available in February 2016.
C-Series (Gen2) configurations
The C-Series Appliance is a dense compute solution using commodity hardware.
Table 11 C-Series (Gen2) configurations

Phoenix-12 Compute Servers  Nodes  Storage capacity  Switches
2 (minimum configuration)   8      144 TB            One private and two public
3                           12     216 TB            One private and two public
4                           16     288 TB            One private and two public
5                           20     360 TB            One private and two public
6                           24     432 TB            One private and two public
7                           28     504 TB            Two private and four public
8                           32     576 TB            Two private and four public
9                           36     648 TB            Two private and four public
10                          40     720 TB            Two private and four public
11                          44     792 TB            Two private and four public
12 (maximum configuration)  48     864 TB            Two private and four public
C-Series (Gen2) upgrade paths
C-Series upgrades consist of the disks and infrastructure hardware that is needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call.
Table 12 C-Series (Gen2) upgrades

Each row lists the upgrade required to reach that model from the next lower model.

Model number                                          Disk upgrade         Hardware upgrade
2 Phoenix-12 Compute Servers (minimum configuration)  Not applicable       Not applicable
3 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
4 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
5 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
6 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
7 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes) and one private and two public switches
8 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
9 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
10 Phoenix-12 Compute Servers                         12 integrated disks  One server chassis (four nodes)
11 Phoenix-12 Compute Servers                         12 integrated disks  One server chassis (four nodes)
12 Phoenix-12 Compute Servers (maximum configuration) 12 integrated disks  One server chassis (four nodes)
C-Series Appliance (Gen1) configurations and upgrade paths
Describes the first generation C-Series ECS Appliance configurations and the upgrade paths between the configurations. Gen1 hardware became generally available in March 2015.
C-Series (Gen1) configurations
The C-Series Appliance is a dense compute solution using commodity hardware.
Table 13 C-Series (Gen1) configurations

Phoenix-12 Compute Servers  Nodes  Storage capacity  Switches
2 (minimum configuration)   8      144 TB            One private and two public
3                           12     216 TB            One private and two public
4                           16     288 TB            One private and two public
5                           20     360 TB            One private and two public
6                           24     432 TB            One private and two public
7                           28     504 TB            Two private and four public
8                           32     576 TB            Two private and four public
9                           36     648 TB            Two private and four public
10                          40     720 TB            Two private and four public
11                          44     792 TB            Two private and four public
12 (maximum configuration)  48     864 TB            Two private and four public
C-Series (Gen1) upgrade paths
C-Series upgrades consist of the disks and infrastructure hardware that is needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call.
Table 14 C-Series (Gen1) upgrades

Each row lists the upgrade required to reach that model from the next lower model.

Model number                                          Disk upgrade         Hardware upgrade
2 Phoenix-12 Compute Servers (minimum configuration)  Not applicable       Not applicable
3 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
4 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
5 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
6 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
7 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes) and one private and two public switches
8 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
9 Phoenix-12 Compute Servers                          12 integrated disks  One server chassis (four nodes)
10 Phoenix-12 Compute Servers                         12 integrated disks  One server chassis (four nodes)
11 Phoenix-12 Compute Servers                         12 integrated disks  One server chassis (four nodes)
12 Phoenix-12 Compute Servers (maximum configuration) 12 integrated disks  One server chassis (four nodes)
Certified hardware in support of ECS 3.2
The following table lists the latest hardware pre-qualified for a Certified installation.
Note
All Arista switch models listed also ship standard with the ECS Appliance.
Table 15 ECS Certified hardware

Server models:
- Dell DSS7000
- Dell R730xd
- HP Proliant SL4540 Gen8

Switch models:

One 1 GbE private switch is required to handle management traffic:
- Arista 7010T-48
- Arista 7048T
- Dell S3048-ON
- Cisco Nexus 3048

Two 10 GbE switches are required to handle data traffic:
- Arista 7050SX-64
- Arista 7050S-52
- Arista 7150S-24
- Arista 7124SX
CHAPTER 2
Servers
ECS Appliance servers.......................................................................................26
Rack and node host names.................................................................................30
ECS Appliance servers
Provides a quick reference of servers.
ECS has the following server types:
D-Series Gen2 Rinjin-16 for Object and HDFS (October 2016)
U-Series Gen2 Rinjin-16 for Object and HDFS (November 2015)
U-Series Gen1 Phoenix-16 for Object and HDFS (June 2014)
C-Series Gen2 Rinjin-12 for Object and HDFS (February 2016)
C-Series Gen1 Phoenix-12 for Object and HDFS (March 2015)
D-Series
D-Series Rinjin-16 nodes have the following standard features:
Four-node servers (2U) with two CPUs per node
2.4 GHz six-core Haswell CPUs
Eight 8 GB DDR4 RDIMMs
One system disk per node (400 GB SSD)
LED indicators for each node
Dual hot-swap chassis power supplies
One high density SAS cable with one connector
U-Series
U-Series Gen2 Rinjin-16 nodes have the following standard features:
Four-node servers (2U) with two CPUs per node
2.4 GHz six-core Haswell CPUs
Eight 8 GB DDR4 RDIMMs
One system disk per node (400 GB SSD)
LED indicators for each node
Dual hot-swap chassis power supplies
One SAS adapter with two SAS ports per node
U-Series Gen1 Phoenix-16 nodes have the following standard features:
Four-node servers (2U) with two CPUs per node
2.4 GHz four-core Ivy Bridge CPUs
Four channels of native DDR3 (1333) memory
One system disk per node (either a 200 GB or 400 GB SSD)
LED indicators for each node
Dual hot-swap chassis power supplies
One SAS adapter with one SAS port per node
C-Series
C-Series Gen2 Rinjin-12 nodes have the following standard features:
Four-node servers (2U) with two CPUs per node
2.4 GHz six-core Haswell CPUs
Eight 8 GB DDR4 RDIMMs
One system disk per node
LED indicators for each node
Dual hot-swap chassis power supplies
C-Series Gen1 Phoenix-12 nodes have the following standard features:
Four-node servers (2U) with two CPUs per node
2.4 GHz four-core Ivy Bridge CPUs
Four channels of native DDR3 (1333) memory
The first disk that is assigned to each node is a 6 TB hybrid system/storage disk
LED indicators for each node
Dual hot-swap chassis power supplies. Supports N + 1 power.
12 3.5” hot-swap SATA hard drives per server (three for each node)
Server front views
The following figure shows the server chassis front with the four nodes identified.
Figure 4 Phoenix-16 (Gen1) and Rinjin-16 (Gen2) server chassis front view
The following figure shows the server chassis front identifying the integrated disks assigned to each node.
Figure 5 Phoenix-12 (Gen1) and Rinjin-12 (Gen2) server chassis front view
LED indicators are on the left and right side of the server front panels.
Table 16 Server LEDs

1. System Power Button with LED for each node.
2. System ID LED Button for each node.
3. System Status LED for each node.
4. LAN Link/Activity LED for each node.
Server rear view
The Rinjin-16, Phoenix-16, Rinjin-12, and the Phoenix-12 server chassis provide dual hot-swappable power supplies and four nodes.
The chassis shares a common redundant power supply (CRPS) that enables HA power in each chassis, shared across all nodes. The nodes are mounted on hot-swappable trays that fit into the four corresponding node slots accessible from the rear of the server.
Figure 6 Server chassis rear view (all)
1. Node 1
2. Node 2
3. Node 3
4. Node 4
Note
In the second server chassis in a five- or six-node configuration, the nodes (blades) must be populated starting with the node 1 slot. Empty slots must have blank fillers.
Figure 7 Rear ports on nodes (all)
1. 1 GbE: Connected to one of the data ports on the 1 GbE switch
2. RMM: A dedicated port for hardware monitoring (per node).
3. SAS to DAE: Used on U- and D-Series servers only. U-Series Gen1 has a single port; U-Series Gen2 has two ports. The D-Series has one high-density SAS cable with one connector.
4. 10 GbE SW2 (hare): The left 10 GbE data port of each node is connected to one of the data ports on the 10 GbE (SW2) switch.
5. 10 GbE SW1 (rabbit): The right 10 GbE data port of each node is connected to one of the data ports on the 10 GbE (SW1) switch.
Rack and node host names
Lists the default rack and node host names for an ECS appliance.
Default rack IDs and color names are assigned in installation order as shown below:
Table 17 Rack ID 1 to 50

Rack ID  Rack color   Rack ID  Rack color   Rack ID  Rack color
1        red          18       carmine      35       cornsilk
2        green        19       auburn       36       ochre
3        blue         20       bronze       37       lavender
4        yellow       21       apricot      38       ginger
5        magenta      22       jasmine      39       ivory
6        cyan         23       army         40       carnelian
7        azure        24       copper       41       taupe
8        violet       25       amaranth     42       navy
9        rose         26       mint         43       indigo
10       orange       27       cobalt       44       veronica
11       chartreuse   28       fern         45       citron
12       pink         29       sienna       46       sand
13       brown        30       mantis       47       russet
14       white        31       denim        48       brick
15       gray         32       aquamarine   49       avocado
16       beige        33       baby         50       bubblegum
17       silver       34       eggplant

Nodes are assigned node names based on their order within the server chassis and within the rack itself. The following table lists the default node names.
Table 18 Default node names

Node  Node name   Node  Node name   Node  Node name
1     provo       9     boston      17    memphis
2     sandy       10    chicago     18    seattle
3     orem        11    houston     19    denver
4     ogden       12    phoenix     20    portland
5     layton      13    dallas      21    tucson
6     logan       14    detroit     22    atlanta
7     lehi        15    columbus    23    fresno
8     murray      16    austin      24    mesa
Nodes positioned in the same slot in different racks at a site have the same node name. For example, node 4 is always called ogden, assuming you use the default node names.
The getrackinfo command identifies nodes by a unique combination of node name and rack name. For example, node 4 in rack 4 and node 4 in rack 5 are identified as ogden-green and ogden-blue, and can be pinged using their NAN-resolvable (via mDNS) names: ogden-green.nan.local and ogden-blue.nan.local.
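Given these defaults, a node's NAN hostname can be derived from its default node name and its rack's color. The helper below is a hypothetical sketch, not an ECS tool; it assumes only the `<node name>-<rack color>.nan.local` format shown above.

```python
# Sketch: derive the mDNS-resolvable NAN hostname for a node.
# Assumes the "<node name>-<rack color>.nan.local" format shown above;
# the function and its arguments are illustrative, not an ECS utility.

def nan_hostname(node_name: str, rack_color: str) -> str:
    """Return the NAN (Nile Area Network) hostname for a node."""
    return f"{node_name}-{rack_color}.nan.local"

# Node 4 (default name "ogden") in a rack whose color is "green":
print(nan_hostname("ogden", "green"))  # ogden-green.nan.local
```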
CHAPTER 3
Switches
ECS Appliance switches
Provides a quick reference of private and public switches.
- Private switch: one 1 GbE private switch handles management traffic. In a C-Series appliance with more than six servers, a second private switch is added.
- Public switch: two 10 GbE switches handle data traffic. In a C-Series appliance with more than six servers, two more public switches are added.
Table 19 ECS Appliance switch summary

Switch model      Part number     Type                             Used in
Arista 7010T-48   100-400-120-xx  Private 1 GbE (Turtle)           D-Series, U-Series Gen2, U-Series Gen1, C-Series Gen2, C-Series Gen1
Arista 7048T      100-585-063-xx  Private 1 GbE (Turtle)           U-Series Gen2, U-Series Gen1, C-Series Gen2, C-Series Gen1
Cisco 3048 48-P   100-400-130-xx  Private 1 GbE (Turtle)           D-Series, U-Series Gen2
Arista 7050SX-64  100-400-065-xx  Public 10 GbE (Hare and Rabbit)  D-Series, U-Series Gen2, C-Series Gen2, C-Series Gen1
Arista 7050S-52   100-585-062-xx  Public 10 GbE (Hare and Rabbit)  U-Series Gen2, C-Series Gen2, C-Series Gen1
Arista 7150S-24   100-564-196-xx  Public 10 GbE (Hare and Rabbit)  U-Series Gen1
Arista 7124SX     100-585-061-xx  Public 10 GbE (Hare and Rabbit)  U-Series Gen1

The Cisco 3048 48-P is available when customers supply their own public Cisco switches through an RPQ.
Private switch: Cisco 3048 48-P
The private switch is used for management traffic. It has 52 ports and dual power supply inputs. The switch is configured in the factory.
Figure 8 Cisco 3048 ports (rear)
Figure 9 Cisco 3048 ports (front)
Table 20 Cisco 3048 switch configuration detail

Figure label  Ports           Connection description
1             1–24            Connected to the MGMT (eth0) network ports on the nodes (blue cables).
2             25–48           Connected to the RMM network ports on the nodes (gray cables).
3             49              The 1 GbE management port. This port is connected to the rabbit (bottom) 10 GbE switch management port. See note 2.
4             50              The 1 GbE management port. This port is connected to the hare (top) 10 GbE switch management port. See note 2.
5             51              Rack/Segment Interconnect IN. See notes 1 and 2.
6             52              Rack/Segment Interconnect OUT. See notes 1 and 2.
              Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch. This port is on the front of the switch.
Note
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 use CISCO 1G BASE-T SFPs (part number 100-400-141). In an ECS Appliance, these four SFPs are installed in the 1 GbE switch. In a customer-supplied rack order, these SFPs need to be installed.
Private switch: Arista 7010T-48
The private switch is used for management traffic. It has 52 ports and dual power supply inputs. The switch is configured in the factory.
Figure 10 Arista 7010T-48 ports
Table 21 Arista 7010T-48 switch configuration detail

Figure label  Ports           Connection description
1             1–24            Connected to the MGMT (eth0) network ports on the nodes (blue cables).
2             25–48           Connected to the RMM network ports on the nodes (gray cables).
3             49              The 1 GbE management port. This port is connected to the rabbit (bottom) 10 GbE switch management port. See note 2.
4             50              The 1 GbE management port. This port is connected to the hare (top) 10 GbE switch management port. See note 2.
5             51              Rack/Segment Interconnect IN. See notes 1 and 2.
6             52              Rack/Segment Interconnect OUT. See notes 1 and 2.
              Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 contain SFPs (RJ45 copper). In an ECS Appliance or a customer-supplied rack order, these four SFPs are installed in the 1 GbE switch.
Private switch: Arista 7048T-48
The private switch is used for management traffic. It has 52 ports and dual power supply inputs. The switch is configured in the factory.
Figure 11 Arista 7048T-48 ports
Table 22 Arista 7048T-48 switch configuration detail

Figure label  Ports           Connection description
1             1–24            Connected to the MGMT (eth0) network ports on the nodes (blue cables).
2             25–48           Connected to the RMM network ports on the nodes (gray cables).
3             49              The 1 GbE management port. This port is connected to the rabbit (bottom) 10 GbE switch management port. See note 2.
4             50              The 1 GbE management port. This port is connected to the hare (top) 10 GbE switch management port. See note 2.
5             51              Rack/Segment Interconnect IN. See notes 1 and 2.
6             52              Rack/Segment Interconnect OUT. See notes 1 and 2.
              Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 contain SFPs (RJ45 copper). In an ECS Appliance or a customer-supplied rack order, these four SFPs are installed in the 1 GbE switch.
Public switch: Arista 7050SX-64
The 7050SX-64 is a 52-port switch equipped with 52 SFP+ ports, dual hot-swap power supplies, and redundant, field-replaceable fan modules.
Figure 12 Arista 7050SX-64 ports
Table 23 7050SX-64 switch port connections used on the top 10 GbE switch (hare)

Figure label  Ports           Connection description
1             1–8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR optic. See note.
2             9–32            The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the left 10 GbE (P02) interface on each node. SR optic.
3             33–44           Unused.
4             45–48           The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (rabbit). SR optic.
5             49–52           Unused.
6             <...>           The 1 GbE management port. This port is connected to port 50 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Table 24 7050SX-64 switch port connections used on the bottom 10 GbE switch (rabbit)

Figure label  Ports           Connection description
1             1–8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR optic. See note.
2             9–32            The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the right 10 GbE (P01) interface on each node. SR optic.
3             33–44           Unused.
4             45–48           The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (hare). SR optic.
5             49–52           Unused.
6             <...>           The 1 GbE management port. This port is connected to port 49 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS appliance shipped in a Dell EMC rack has all SFPs installed; in a customer rack installation, they are not installed. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1–8 based on the customer uplink configuration.
Public switch: Arista 7050S-52
The 7050S-52 is a 52-port switch equipped with 52 SFP+ ports, dual hot-swap power supplies, and redundant, field-replaceable fan modules.
Figure 13 Arista 7050S-52 ports
Table 25 7050S-52 switch port connections used on the top 10 GbE switch (hare)

Figure label  Ports           Connection description
1             1–8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR optic. See note.
2             9–32            The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the left 10 GbE (P02) interface on each node. SR optic.
3             33–44           Unused.
4             45–48           The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (rabbit). SR optic.
5             49–52           Unused.
6             <...>           The 1 GbE management port. This port is connected to port 50 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Table 26 7050S-52 switch port connections used on the bottom 10 GbE switch (rabbit)

Figure label  Ports           Connection description
1             1–8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR optic. See note.
2             9–32            The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the right 10 GbE (P01) interface on each node. SR optic.
3             33–44           Unused.
4             45–48           The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (hare). SR optic.
5             49–52           Unused.
6             <...>           The 1 GbE management port. This port is connected to port 49 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS appliance shipped in a Dell EMC rack has all SFPs installed; in a customer rack installation, they are not installed. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1–8 based on the customer uplink configuration.
Public switch: Arista 7150S-24
The 7150S-24 is a 24-port switch equipped with 24 SFP+ ports, dual hot-swap power supplies, and redundant, field-replaceable fan modules.
Figure 14 Arista 7150S-24 ports
Table 27 7150S switch port connections used on the top 10 GbE switch (hare)

Figure label  Ports           Connection description
1             1–8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR optic. See note.
2, 3          9–20            The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the left (P02) 10 GbE interface on each node. SR optic.
4, 5          21–24           The 10 GbE LAG ports. These ports are connected to the LAG ports on the bottom 10 GbE switch (rabbit). SR optic.
6             <...>           The 1 GbE management port. This port is connected to port 50 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Table 28 7150S switch port connections used on the bottom 10 GbE switch (rabbit)

Figure label  Ports           Connection description
1             1–8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR optic. See note.
2, 3          9–20            The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the right (P01) 10 GbE interface on each node. SR optic.
4, 5          21–24           The 10 GbE LAG ports. These ports are connected to the LAG ports on the top 10 GbE switch (hare). SR optic.
6             <...>           The 1 GbE management port. This port is connected to port 49 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS appliance shipped in a Dell EMC rack has all SFPs installed; in a customer rack installation, they are not installed. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1–8 based on the customer uplink configuration.
Public switch: Arista 7124SX
The Arista 7124SX switch is equipped with 24 SFP+ ports, dual hot-swap power supplies, and redundant, field-replaceable fan modules.
Figure 15 Arista 7124SX
Table 29 7124SX switch port connections used on the top 10 GbE switch (hare)

Figure label  Ports           Connection description
1             1–8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR optic. See note.
2, 3          9–20            The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the left (P02) 10 GbE interface on each node. SR optic.
4, 5          21–24           The 10 GbE LAG ports. These ports are connected to the LAG ports on the bottom 10 GbE switch (rabbit). SR optic.
6             <...>           The 1 GbE management port. This port is connected to port 50 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Table 30 7124SX switch port connections used on the bottom 10 GbE switch (rabbit)

Figure label  Ports           Connection description
1             1–8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR optic. See note.
2             9–20            The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the right (P01) 10 GbE interface on each node. SR optic.
3, 4          21–24           The 10 GbE LAG ports. These ports are connected to the LAG ports on the top 10 GbE switch (hare). SR optic.
5             <...>           The 1 GbE management port. This port is connected to port 49 of the management switch (turtle). RJ-45.
6             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS appliance shipped in a Dell EMC rack has all SFPs installed; in a customer rack installation, they are not installed. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1–8 based on the customer uplink configuration.
CHAPTER 4
Disk Drives
Integrated disk drives
Describes disk drives that are integrated into the server chassis of the ECS Appliance.
D-Series
In D-Series servers, OS disks are integrated into the server chassis and are accessible from the front of the server chassis. Each node has one OS SSD drive.
U-Series
In U-Series servers, OS disks are integrated into the server chassis and are accessible from the front of the server chassis. Each node has one OS SSD drive.
Note
Early Gen1 appliances had two mirrored disks per node.
C-Series
In C-Series servers with integrated storage disks, the disks are accessible from the front of the server chassis. The disks are assigned equally to the four nodes in the chassis. All disks must be the same size and speed. Gen1 uses 6 TB disks and Gen2 uses 8 TB and 12 TB disks.
Note
In Gen1 only, the first integrated disk that is assigned to each node is called disk drive zero (HDD0). These storage drives contain some system data.
Figure 16 C-Series (Gen1) Integrated disks with node mappings
Storage disk drives
Describes the disk drives used in ECS Appliances.
Table 31 Storage disk drives

Series and Generation                                   Service  Size   RPM   Type
D-Series D5600 and D7800                                Object   10 TB  7200  SAS
D-Series D4500 and D6200, U-Series Gen2, C-Series Gen2  Object   8 TB   7200  SAS
U-Series Gen1, C-Series Gen1                            Object   6 TB   7200  SATA
U-Series Gen2                                           Object   12 TB  7200  SAS
All disks integrated into a server chassis or in a DAE must conform to these rules:
- All disk drives must be the same size within a DAE
- All disk drives must be the same speed
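As a minimal sketch of the uniformity rules above (the Disk record and check function are illustrative helpers, not part of any ECS tool), a DAE's disk set can be validated like this:

```python
# Sketch: validate the disk-uniformity rules above.
# The Disk tuple and the check function are hypothetical helpers,
# not part of any ECS tool.
from typing import NamedTuple

class Disk(NamedTuple):
    size_tb: int   # marketing capacity, e.g. 8 for an 8 TB drive
    rpm: int       # spindle speed, e.g. 7200

def dae_disks_conform(disks: list[Disk]) -> bool:
    """All disks in a DAE must share one size and one speed."""
    return (len({d.size_tb for d in disks}) <= 1 and
            len({d.rpm for d in disks}) <= 1)

print(dae_disks_conform([Disk(8, 7200)] * 10))             # True
print(dae_disks_conform([Disk(8, 7200), Disk(12, 7200)]))  # False
```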
Disk array enclosures
The D-Series and U-Series Appliance include disk array enclosures (DAEs). The DAE is a drawer that slides in and out of the 40U rack. The storage disk drives, I/O modules, and cooling modules are located inside of the DAE.
Note
Use the power and weight calculator to plan for the weight of the configuration.
ECS Appliances use two types of DAE:
- The D-Series includes the Pikes Peak (dense storage) enclosure, which can hold up to 98 disks.
- The U-Series includes the Voyager DAE, which can hold up to 60 disks.

The C-Series does not use DAEs. C-Series servers have integrated disks: 12 3.5-inch disk drives accessible from the front of each server.
Pikes Peak (dense storage)
The Pikes Peak enclosure has the following features:
- Seven sleds with up to 14 3.5-inch disk drives each in a single 4U drawer (up to 98 disk drives total). Serviced from the front, after removing the I/O module.
- One I/O module containing two replaceable power supply units (PSUs). Serviced from the front.
- Three exhaust fans (cooling modules); n+1 redundant. Serviced from the rear.
- Two power supplies; n+1 redundant. Serviced from within the I/O module in front.
- Blank filler sleds for partially populated configurations.
- Two 4-lane 12 Gb/s SAS 3.0 interconnects.
- 19" 4U 1 m deep chassis.
Voyager DAE
The Voyager DAE has the following features:
- 3.5-inch disk drives in a single 4U drawer. Serviced from the front.
- One link control card (LCC). Serviced from the front.
- One interconnect module (ICM). Serviced from the back.
- Three fans (cooling modules); n+1 redundant. Serviced from the front.
- Two power supplies; n+1 redundant. Serviced from the back.
Pikes Peak (dense storage)
The Pikes Peak enclosure is used in D-Series ECS Appliances.
Chassis, sleds, and disks

Chassis

The chassis is composed of:
- Seven sleds with up to 14 3.5-inch disk drives each in a single 4U drawer (up to 98 disk drives total). Serviced from the front, after removing the I/O module.
- One I/O module containing two replaceable power supply units (PSUs). Serviced from the front.
- Three exhaust fans (cooling modules); n+1 redundant. Serviced from the rear.
- Two power supplies; n+1 redundant. Serviced from within the I/O module in front.
- Blank filler sleds for partially populated configurations.
- Two 4-lane 12 Gb/s SAS 3.0 interconnects.
- 19" 4U 1 m deep chassis.
Replacing a sled, a drive, or the I/O module requires taking the DAE offline (cold service). All drives in the DAE are inaccessible during the cold service. However, the identify LEDs will continue to operate for 15 minutes after power is disconnected.
Figure 17 Pikes Peak chassis
Figure 18 Pikes Peak chassis with I/O module and power supplies removed, sleds extended
Figure 19 Enclosure LEDs from the front
Table 32 Enclosure LEDs

LED                 Color   State          Description
Enclosure "OK"      Green   Solid          Enclosure operating normally
Enclosure Fail      Yellow  Fast flashing  Enclosure failure
Enclosure Identify  Blue    Slow flashing  Enclosure received an identify command
Sleds and disks
The seven sleds are designated by letters A through G.
Figure 20 Sleds letter designations
Each sled must be fully populated with 14 drives of the same size and speed. The D6200 uses seven sleds and the D4500 uses five sleds. In the D4500 configuration, sled positions C and E are populated by blank filler sleds. Sleds are serviced by pulling the sled forward and removing the cover.
Drives are designated by the sled letter plus the slot number. The following figure shows the drive designators for sled A.
Figure 21 Drive designations and sled LEDs
Each sled and drive slot has an LED to indicate failure or to indicate that the LED was enabled by an identify command.
Table 33 Sled and drive LEDs

LED                 Color  State          Description
Sled Identify/Fail  Amber  Slow flashing  Link received an identify command
HDD Identify/Fail   Amber  Fast flashing  SAS link failure
Each drive is enclosed in a tool-less carrier before it is inserted into the sled.
Figure 22 Disk drive in carrier
Figure 23 Empty drive carrier
I/O module and power supplies
I/O module
At the front of the enclosure is a removable base that includes the I/O module on the bottom and two power supplies on top. The I/O module contains all of the SAS functionality for the DAE. The I/O module is replaceable after the DAE is powered off.
Figure 24 I/O module separated from enclosure
The front of the I/O module has a set of status LEDs for each SAS link.
Figure 25 SAS link LEDs
Table 34 SAS link LEDs

LED                     Color  State          Description
Mini-SAS Link OK        Green  Solid          Valid SAS link detected
Mini-SAS Identify/Fail  Amber  Slow flashing  SAS link received an identify command
Mini-SAS Identify/Fail  Amber  Fast flashing  SAS link failure
Note
The Link OK (green) and SAS A and B Fail (amber) LEDs do not fast-flash when the DAE is powered on but the node/SCSI HBA is not online (NO LINK).
While the I/O module hardware used in the D-Series is identical between the 8 TB and 10 TB models, the software configuration of the I/O module differs depending on the disks used in the model. Consequently, the I/O module field-replaceable unit (FRU) number differs by disk size:
- I/O module FRU for 8 TB models (D4500 and D6200): 05-000-427-01
- I/O module FRU for 10 TB models (D5600 and D7800): 105-001-028-00
Power supplies
Two power supplies (n + 1 redundant) sit on top of the I/O module in front. A single power supply can be swapped without removing the I/O module assembly or powering off the DAE.
Figure 26 Power supply separated from I/O module
At the top of each power supply is a set of status LEDs.
Figure 27 Power supply LEDs
Table 35 Power supply LEDs

LED           Color  State  Description
PSU Fail      Amber  Solid  There is a fault in the power supply
PSU Identify  Blue   Solid  The power supply received an identify command
AC OK         Green  Solid  AC power input is within regulation
DC OK         Green  Solid  DC power output is within regulation

Fan modules
The Pikes Peak DAE has three hot-swappable managed system fans at the rear in a redundant 2-plus-1 configuration. Logic in the DAE will gracefully shut down the DAE if the heat becomes too high after a fan failure. A failed fan must be left in place until the fan replacement service call. Each fan has an amber fault LED. The fans are labeled A, B, and C from right to left.
Figure 28 Enclosure fan locations
Voyager DAE
The Voyager DAE is used in U-Series ECS Appliances.
Disk drives in Voyager DAEs
Disk drives are encased in cartridge-style enclosures. Each cartridge has a latch that allows you to snap-out a disk drive for removal and snap-in for installation.
The inside of each Voyager DAE has printed labels on the left and front sides that describe the rows (banks) and columns (slots) where the disk drives are installed.

The banks are labeled A through E and the slots are labeled 0 through 11. When describing the layout of disk drives within the DAE, the interface format is E_D, where E indicates the enclosure and D the disk. For example, the interface format 1_B11 is interpreted as enclosure 1, row (bank) B, slot 11.
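The E_D convention can be parsed mechanically. The sketch below is a hypothetical helper, not an ECS utility; it splits a location such as 1_B11 into its enclosure, bank, and slot, and checks them against the enclosure 1-8, bank A-E, and slot 0-11 ranges described here.

```python
import re

# Sketch: parse a Voyager disk location like "1_B11" into its parts.
# Hypothetical helper, not an ECS utility.
_LOCATION = re.compile(r"^(?P<enc>[1-8])_(?P<bank>[A-E])(?P<slot>\d{1,2})$")

def parse_location(loc: str) -> tuple[int, str, int]:
    """Return (enclosure, bank, slot) for a location such as "1_B11"."""
    m = _LOCATION.match(loc)
    if not m:
        raise ValueError(f"bad DAE location: {loc!r}")
    slot = int(m.group("slot"))
    if not 0 <= slot <= 11:
        raise ValueError(f"slot out of range: {slot}")
    return int(m.group("enc")), m.group("bank"), slot

print(parse_location("1_B11"))  # (1, 'B', 11)
```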
Enclosures are numbered from 1 through 8 starting at the bottom of the rack. Rear cable connections are color-coded.
The arrangement of disks in a DAE must match the prescribed layouts that are shown in the figures that follow. Not all layouts are available for all hardware.
Looking at the DAE from the front and above, the following figure shows the disk drive layout of the DAE.
Disk population rules:
- The first disk must be placed at Row A Slot 0, with each subsequent disk placed next to it. When Row A is filled, the next disk must be placed in Row B Slot 0. (Do not skip a slot.)
- (Gen2) For a full rack, each DAE must have the same number of disks, from 10 to 60 in increments of 5.
- (Gen2) For a half rack, each DAE must have the same number of disks, from 10 to 60 in increments of 10.
- (Gen2) To upgrade a half rack, add the "1 server, 4 DAEs, and 40 disk upgrade kit." Each DAE in the full rack must have the same number of disks. Add enough 40-disk upgrade kits to match the disks in the original DAEs.
- (Gen1) A DAE can contain 15, 30, 45, or 60 disks.
- (Gen1) The lower four DAEs must contain the same number of disks.
- (Gen1) The upper DAEs are added only after the lower DAEs contain 60 disks.
- (Gen1) The upper DAEs must contain the same number of disks.
The figures show example layouts.
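The per-DAE count rules above can be expressed as a small check. This is an illustrative sketch only (the function names are not ECS tooling); it encodes the Gen2 full-rack and half-rack increments and the Gen1 15/30/45/60 counts.

```python
# Sketch: check a per-DAE disk count against the population rules above.
# Illustrative only; not an ECS tool.

def gen2_count_ok(disks_per_dae: int, full_rack: bool) -> bool:
    """Gen2: 10-60 disks per DAE, in increments of 5 (full rack) or 10 (half rack)."""
    step = 5 if full_rack else 10
    return 10 <= disks_per_dae <= 60 and disks_per_dae % step == 0

def gen1_count_ok(disks_per_dae: int) -> bool:
    """Gen1: a DAE holds exactly 15, 30, 45, or 60 disks."""
    return disks_per_dae in (15, 30, 45, 60)

print(gen2_count_ok(45, full_rack=True))   # True
print(gen2_count_ok(45, full_rack=False))  # False: a half rack needs increments of 10
print(gen1_count_ok(45))                   # True
```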
Figure 29 U-Series disk layout for 10-disk configurations (Gen2 only)
Figure 30 U-Series disk layout for 15-disk configurations (Gen1, Gen2 full-rack only)
Figure 31 U-Series disk layout for 30-disk configurations (Gen1, Gen2)
Figure 32 U-Series disk layout for 45-disk configurations (Gen1, Gen2 full-rack)
Figure 33 U-Series disk layout for 60-disk configurations
Link control cards
Each DAE includes a link control card (LCC) whose main function is to be a SAS expander and provide enclosure services. The LCC independently monitors the environment status of the entire enclosure and communicates the status to the system. The LCC includes a fault LED and a power LED.
Note
Remove the power from the DAE before replacing the LCC.
Table 36 DAE LCC status LEDs

LED          Color  State  Description
Power        Green  On     Power on
Power        —      Off    Power off
Power fault  Amber  On     Fault
Power fault  —      Off    No fault or power off
Figure 34 LCC with LEDs
Figure 35 LCC Location
Fan control module
Each DAE includes three fan control modules (cooling modules) on the front of the DAE. The fan control module augments the cooling capacity of each DAE. It plugs directly into the DAE baseboard from the top of the DAE. Inside the fan control module, sensors measure the external ambient temperatures to ensure even cooling throughout the DAE.
Table 37 Fan control module fan fault LED

LED        Color  State  Description
Fan fault  Amber  On     Fault detected. One or more fans faulted.
Fan fault  —      Off    No fault. Fans operating normally.

Figure 36 Fan control module with LED
Figure 37 Location of fan modules
Interconnect Module
The interconnect module (ICM) is the primary interconnect management element. It is a plug-in module that includes a USB connector, an RJ-12 management adapter, a bus ID indicator, an enclosure ID indicator, and two input and two output SAS connectors with corresponding LEDs. These LEDs indicate the link and activity of each SAS connector for input and output to devices.
Note
Disconnect power to the DAE when changing the ICM.
Table 38 ICM bus status LEDs

LED          Color  State  Description
Power on     Green  On     Power on
Power on     —      Off    Power off
Power fault  Amber  On     Fault
Power fault  —      Off    No fault or power off
The ICM supports the following I/O ports on the rear:
- Four 6 Gb/s PCI Gen2 SAS ports
- One management (RJ-12) connector to the SPS (field service diagnostics only)
- One USB connector

The ICM supports four 6 Gb/s SAS x8 ports on the rear (two inputs and two outputs; one used in Gen1 hardware and two used in Gen2 hardware). These ports provide an interface for SAS and NL-SAS drives in the DAE.
Table 39 ICM 6 Gb/s port LEDs

| LED | Color | State | Description |
|---|---|---|---|
| Link/Activity | Blue | On | Indicates a 4x or 8x connection with all lanes running at 6 Gb/s. |
| Link/Activity | Green | On | Indicates that a wide port width other than 4x or 8x has been established, or one or more lanes is not running at full speed or is disconnected. |
| Link/Activity | — | Off | Not connected. |
Figure 38 ICM LEDs
1. Power fault LED (amber)
2. Power LED (green)
3. Link activity LEDs (blue/green)
4. Single SAS port that is used for Gen1 hardware.
5. Two SAS ports that are used for Gen2 hardware.

Power supply
The power supply is hot-swappable. It has a built-in thumbscrew for ease of installation and removal. Each power supply includes a fan to provide cooling to the power supply. The power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each supply supports a fully configured DAE and shares load currents with the other supply. The power supplies provide four independent power zones. Each of the hot-swappable power supplies can deliver 1300 W at 12 V in its load-sharing, highly available configuration. Control and status are implemented through the I2C interface.
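As a quick sanity check on the rating above, 1300 W at 12 V works out to roughly 108 A per supply. The sketch below is illustrative arithmetic only (not from the guide); it assumes the two supplies share the DAE load evenly, as the load-sharing configuration implies.

```python
# Back-of-the-envelope arithmetic for the DAE power supply rating described
# above. Illustrative only: assumes the redundant pair shares the load evenly.

rated_watts = 1300.0  # each supply delivers up to 1300 W at 12 V
rail_volts = 12.0

max_amps_per_supply = rated_watts / rail_volts       # about 108.3 A
# In the redundant pair, each supply nominally carries half the DAE load,
# leaving headroom for one supply to carry the full load if the other fails.
shared_amps_at_full_load = max_amps_per_supply / 2   # about 54.2 A

print(f"max per supply: {max_amps_per_supply:.1f} A")
print(f"per supply while load-sharing: {shared_amps_at_full_load:.1f} A")
```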
Table 40 DAE AC power supply/cooling module LEDs

| LED | Color | State | Description |
|---|---|---|---|
| AC power on (12 V power): one LED for each power cord | Green | On | OK. AC or SPS power applied. All output voltages are within their respective operating ranges (fan faults not included). |
| AC power on (12 V power) | — | Off | 12 V power is out of operating range, or the unit is in shutdown or has a detected fault. |
| Power fault | Amber | On | Under ICM control. The LED is on if any fans or outputs are outside the specified operating range while the unit is not in low power mode. |
| Power fault | — | Off | All outputs are within the specified range, or the unit is in shutdown. |
Figure 39 DAE power supply
CHAPTER 5
Third Party Rack Requirements
- Third-party rack requirements
Third-party rack requirements
Customers who want to assemble an ECS Appliance using their own racks must ensure that the racks meet the requirements listed in Table 41.
An RPQ is required for the following additional scenarios related to customer-provided racks:
- A single model that is installed in multiple racks.
- The U-Series DAE Cable Management Arms (CMA) cannot be installed due to third-party rack limitations.
- Transfers from Dell EMC racks to customer racks.

Option: The customer rack enables the adjustment of the rear rails to 24 inches so that Dell EMC fixed rails can be used. An RPQ is not required if all third-party rack requirements in Table 41 are met.
Table 41 Third-party rack requirements

Cabinet
- 44 inches minimum rack depth.
- A 24-inch-wide cabinet is recommended to provide room for cable routing on the sides of the cabinet.
- Sufficient contiguous space anywhere in the rack to install the components in the required relative order.
- If a front door is used, it must maintain a minimum of 1.2 inches of clearance to the bezels and be perforated with 50% or more evenly distributed open area. It should enable easy access for service personnel and allow the LEDs to be visible through it.
- If a rear door is used, it must be perforated with 50% or more evenly distributed open area.
- Blanking panels should be used as required to prevent air recirculation inside the cabinet.
- A minimum of 42 inches of clearance in the front and 36 inches of clearance in the rear of the cabinet is recommended to allow for service area and proper airflow.

NEMA rails
- 19-inch-wide rails with 1U increments.
- Between 24 inches and 34 inches deep.
- NEMA round-hole and square-hole rails are supported.
- NEMA threaded-hole rails are NOT supported.
- NEMA round holes must accept M5 size screws.
- Special screws (036-709-013 or 113) are provided for use with square-hole rails.
- Square-hole rails require M5 nut clips, which are provided by the customer for third-party racks.

Power
- The AC power requirements are 200–240 VAC +/- 10%, 50–60 Hz.
- Vertical PDUs and AC plugs must not interfere with the DAEs and cable management arms, which require a depth of 42.5 inches.
- The customer rack should have redundant power zones, one on each side of the rack with separate PDU power strips. Each redundant power zone should have capacity for the maximum power load. NOTE: Dell EMC is not responsible for any failures, issues, or outages resulting from failure of the customer-provided PDUs.

Cabling
- Cables for the product must be routed so that they mimic the standard ECS Appliance offering coming from the factory. This includes dressing cables to the sides to prevent drooping and interfering with service of field replaceable units (FRUs).
- Optical cables should be dressed to maintain a 1.5-inch bend radius.
- Cables for third-party components in the rack cannot cross or interfere with ECS logic components in such a way that they block front-to-back airflow or individual FRU service activity.

Disk Array Enclosures (DAEs)
- All DAEs should be installed in sequential order from bottom to top to prevent a tipping risk.
- WARNING: Opening more than one DAE at a time creates a tip hazard. ECS racks provide an integrated solution to prevent more than one DAE from being open at a time. Customer racks will not be able to support this feature.

Weight
- The customer rack must be capable of supporting the weight of the ECS equipment.
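A few of the measurable requirements in Table 41 can be encoded as a simple checklist. The sketch below is illustrative only: the field names and the subset of checks are my own shorthand for the table, not a Dell EMC qualification tool.

```python
# Illustrative checklist for a handful of Table 41's measurable requirements.
# Field names and the subset of checks are assumptions, not an official tool.

def check_rack(spec: dict) -> list:
    """Return a list of requirement violations for a third-party rack spec."""
    problems = []
    if spec["rack_depth_in"] < 44:
        problems.append("Rack depth must be at least 44 inches.")
    if spec["rail_width_in"] != 19:
        problems.append("NEMA rails must be 19 inches wide.")
    if not 24 <= spec["rail_depth_in"] <= 34:
        problems.append("Rails must be between 24 and 34 inches deep.")
    if spec["rail_hole_type"] == "threaded":
        problems.append("NEMA threaded-hole rails are not supported.")
    if not 200 * 0.9 <= spec["ac_volts"] <= 240 * 1.1:
        problems.append("AC power must be 200-240 VAC +/- 10%.")
    return problems

# Example: a rack that is too shallow and uses threaded-hole rails.
print(check_rack({"rack_depth_in": 42, "rail_width_in": 19,
                  "rail_depth_in": 28, "rail_hole_type": "threaded",
                  "ac_volts": 230}))
```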
Note
Use the power and weight calculator to refine the power and heat values to more closely match the hardware configuration for the system. The calculator contains the latest information for power and weight planning.
ECS support personnel can refer to the Elastic Cloud Storage Third-Party Rack
Installation Guide for more details on installing in customer racks.
CHAPTER 6
Power Cabling
- ECS power calculator
- U-Series single-phase AC power cabling
- U-Series three-phase AC power cabling
- D-Series single-phase AC power cabling
- D-Series three-phase AC power cabling
- C-Series single-phase AC power cabling
- C-Series 3-phase AC power cabling
ECS power calculator
Use the power and weight calculator to refine the power and heat values to more closely match the hardware configuration for your system. The calculator contains the latest information for power and weight planning.
U-Series single-phase AC power cabling
Provides the single-phase power cabling diagram for the U-Series ECS Appliance.
The switches plug into the front of the rack and route through the rails to the rear.
Figure 40 U-Series single-phase AC power cabling for eight-node configurations
Note
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5 through 8 and server chassis 2.
U-Series three-phase AC power cabling
Provides cabling diagrams for three-phase AC delta and wye power.
Three-phase Delta AC power cabling
The legend maps colored cables that are shown in the diagram to part numbers and cable lengths.
Figure 41 Cable legend for three-phase delta AC power diagram
Figure 42 Three-phase AC delta power cabling for eight-node configuration
Note
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5–8 and server chassis 2.
Three-phase WYE AC power cabling
The legend maps colored cables that are shown in the diagram to part numbers and cable lengths.
Figure 43 Cable legend for three-phase WYE AC power diagram
Figure 44 Three-phase WYE AC power cabling for eight-node configuration
Note
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5–8 and server chassis 2.
D-Series single-phase AC power cabling
Provides the single-phase power cabling diagram for the D-Series ECS Appliance.
The switches plug into the front of the rack and route through the rails to the rear.
Figure 45 D-Series single-phase AC power cabling for eight-node configurations
D-Series three-phase AC power cabling
Provides cabling diagrams for three-phase AC delta and wye power.
Three-phase Delta AC power cabling
The legend maps colored cables shown in the diagram to part numbers and cable lengths.
Figure 46 Three-phase AC delta power cabling for eight-node configuration
Note
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5–8 and server chassis 2.
Three-phase WYE AC power cabling
The legend maps colored cables shown in the diagram to part numbers and cable lengths.
Figure 47 Three-phase WYE AC power cabling for eight-node configuration
Note
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5–8 and server chassis 2.
C-Series single-phase AC power cabling
Provides the single-phase power cabling diagram for the C-Series ECS Appliance.
The switches plug into the front of the rack and route through the rails to the rear.
Figure 48 C-Series single-phase AC power cabling for eight-node configurations: Top
Figure 49 C-Series single-phase AC power cabling for eight-node configurations: Bottom
C-Series 3-phase AC power cabling
Provides the 3-phase power cabling diagrams for the C-Series ECS Appliance.
The switches plug into the front of the rack and route through the rails to the rear.
Figure 50 C-Series 3-phase AC power cabling for eight-node configurations: Top
Figure 51 C-Series 3-phase AC power cabling for eight-node configurations: Bottom
CHAPTER 7
SAS Cabling
- U-Series SAS cabling
- D-Series SAS cabling
U-Series SAS cabling
Provides wiring diagrams for the SAS cables that connect nodes to Voyager DAEs.
Gen2
Gen2 hardware uses two SAS cables for each node-to-DAE connection.
The top port on the DAE is port 0 and always connects to the SAS adapter's left port on the node. The bottom port is port 1 and always connects to the SAS adapter's right port on the node.
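The port rule above can be written as a small mapping. In this sketch, only the port-0-to-left, port-1-to-right rule comes from the text; the node/DAE pairing passed in is a hypothetical argument, since the actual pairing comes from the wiring diagrams.

```python
# Sketch of the Gen2 rule above: each node-to-DAE connection uses two SAS
# cables; DAE port 0 always goes to the SAS adapter's left port, and DAE
# port 1 always goes to the right port. The node/DAE pairing is hypothetical.

def gen2_sas_cables(node: int, dae: int) -> list:
    """Return the two (DAE port, node SAS adapter port) runs for one pairing."""
    return [
        (f"DAE {dae} port 0", f"node {node} SAS left"),
        (f"DAE {dae} port 1", f"node {node} SAS right"),
    ]

for run in gen2_sas_cables(node=1, dae=1):
    print(run)
```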
Figure 52 U-Series (Gen2) SAS cabling for eight-node configurations
Figure 53 U-Series (Gen2) SAS cabling
Gen1
Note
Hardware diagrams number nodes starting with zero. In all other discussions of ECS architecture and software, nodes are numbered starting with one.
Figure 54 U-Series (Gen1) SAS cabling for eight-node configurations
D-Series SAS cabling
Provides wiring diagrams for the SAS cables that connect nodes to Pikes Peak DAEs.
The D-Series uses one high-density SAS cable (two cables bonded together), with one connector on the HBA and one connector on the I/O module.
The top port on the DAE is port 0 and always connects to the SAS adapter's left port on the node. The bottom port is port 1 and always connects to the SAS adapter's right port on the node.
Figure 55 D-Series SAS cabling for eight-node configurations
CHAPTER 8
Network Cabling
- Connecting ECS appliances in a single site
- Network cabling
Connecting ECS appliances in a single site
The ECS appliance management networks are connected together through the Nile Area Network (NAN). The NAN is created by connecting port 51 or 52 to the turtle switch of another ECS appliance. Through these connections, nodes from any segment can communicate with any other node in the NAN.
The simplest topology to connect the ECS appliances together does not require extra switch hardware. All the turtle switches can be connected together in a linear or daisy chain fashion.
Figure 56 Linear or daisy-chain topology
In this topology, if there is a loss of connectivity, a split-brain can occur.
Figure 57 Linear or daisy-chain split-brain
For a more reliable network, the ends of the daisy-chain topology can be connected together to create a ring network. The ring topology is more stable because two cable link breaks are required for a split-brain to occur. The primary drawback of the ring topology is that the RMM ports cannot be connected to the customer network unless an external customer or aggregation switch is added to the ring.
Figure 58 Ring topology
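The split-brain reasoning above can be checked with a toy connectivity model (a sketch, not part of the guide): a daisy chain of switches partitions after a single link break, while a ring survives one break and only partitions after two.

```python
# Toy model of the split-brain reasoning above: test whether removing cable
# links disconnects a set of turtle switches numbered 0..n-1.

def is_connected(n: int, links: set) -> bool:
    """Breadth-first search over switches joined by (a, b) link tuples."""
    seen, frontier = {0}, [0]
    while frontier:
        sw = frontier.pop()
        for a, b in links:
            for nxt in ((b,) if a == sw else (a,) if b == sw else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return len(seen) == n

n = 4
chain = {(i, i + 1) for i in range(n - 1)}   # daisy-chain links
ring = chain | {(n - 1, 0)}                  # close the loop

print(is_connected(n, chain - {(1, 2)}))             # one break: split
print(is_connected(n, ring - {(1, 2)}))              # one break: still whole
print(is_connected(n, ring - {(1, 2), (3, 0)}))      # two breaks: split
```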
The daisy-chain and ring topologies are not recommended for large installations. When there are four or more ECS appliances, an aggregation switch is recommended. The addition of an aggregation switch in a star topology can provide better failover by reducing split-brain issues.
Figure 59 Star topology
Network cabling
The network cabling diagrams apply to U-Series, D-Series, or C-Series ECS Appliances in a Dell EMC or customer-provided rack.

To distinguish between the three switches, each switch has a nickname:
- Hare: 10 GbE public switch at the top of the rack in a U- or D-Series, or the top switch in a C-Series segment.
- Rabbit: 10 GbE public switch located just below the hare at the top of the rack in a U- or D-Series, or below the hare switch in a C-Series segment.
- Turtle: 1 GbE private switch located below the rabbit at the top of the rack in a U-Series, or below the hare switch in a C-Series segment.
U- and D-Series network cabling
The following figure shows a simplified network cabling diagram for an eight-node configuration for a U- or D-Series ECS Appliance as configured by Dell EMC or a customer in a supplied rack. Following this figure, other detailed figures and tables provide port, label, and cable color information.
Figure 60 Public switch cabling for U- and D-Series
Figure 61 U-Series and D-Series network cabling
Figure 62 Network cabling labels
Table 42 U- and D-Series 10 GB public switch network cabling for all Arista models

| Chassis / node / 10 GB adapter port | Switch port / label (rabbit, SW1) | Switch port / label (hare, SW2) | Label color |
|---|---|---|---|
| 1 / Node 1 P01 (Right) | 10G SW1 P09 | — | Orange |
| 1 / Node 1 P02 (Left) | — | 10G SW2 P09 | Orange |
| 1 / Node 2 P01 (Right) | 10G SW1 P10 | — | Blue |
| 1 / Node 2 P02 (Left) | — | 10G SW2 P10 | Blue |
| 1 / Node 3 P01 (Right) | 10G SW1 P11 | — | Black |
| 1 / Node 3 P02 (Left) | — | 10G SW2 P11 | Black |
| 1 / Node 4 P01 (Right) | 10G SW1 P12 | — | Green |
| 1 / Node 4 P02 (Left) | — | 10G SW2 P12 | Green |
| 2 / Node 5 P01 (Right) | 10G SW1 P13 | — | Brown |
| 2 / Node 5 P02 (Left) | — | 10G SW2 P13 | Brown |
| 2 / Node 6 P01 (Right) | 10G SW1 P14 | — | Light Blue |
| 2 / Node 6 P02 (Left) | — | 10G SW2 P14 | Light Blue |
| 2 / Node 7 P01 (Right) | 10G SW1 P15 | — | Purple |
| 2 / Node 7 P02 (Left) | — | 10G SW2 P15 | Purple |
| 2 / Node 8 P01 (Right) | 10G SW1 P16 | — | Magenta |
| 2 / Node 8 P02 (Left) | — | 10G SW2 P16 | Magenta |
Note
1.5m (U-Series) or 3m (C-Series) Twinax network cables are provided for 10GB.
Table 43 U- and D-Series 10 GB public switch MLAG cabling for all Arista models

| Connection | Port number 10 GB SW1 (rabbit) | Port number 10 GB SW2 (hare) | Port number labels |
|---|---|---|---|
| MLAG cables (7050x 10 GB switches) | 23 | 23 | 10G SW1 P23 / 10G SW2 P23 |
| MLAG cables (7050x 10 GB switches) | 24 | 24 | 10G SW1 P24 / 10G SW2 P24 |
| MLAG cables (71xx 10 GB switches) | 45 | 45 | 10G SW1 P45 / 10G SW2 P45 |
| MLAG cables (71xx 10 GB switches) | 46 | 46 | 10G SW1 P46 / 10G SW2 P46 |
| MLAG cables (71xx 10 GB switches) | 47 | 47 | 10G SW1 P47 / 10G SW2 P47 |
| MLAG cables (71xx 10 GB switches) | 48 | 48 | 10G SW1 P48 / 10G SW2 P48 |
Note
1m Twinax network cables are provided to cable 10 GB switch to switch MLAG.
Figure 63 Private switch cabling for U- and D-Series
Table 44 U- and D-Series 1 GB private switch network cabling

| Chassis / Node | RMM port / label (grey cable) | Switch port / label (grey cable) | eth0 port / label (blue cable) | Switch port / label (blue cable) | Label color |
|---|---|---|---|---|---|
| 1 / Node 1 | Node 01 RMM | 1GB SW P25 | Node01 P01 | 1GB SW P01 | Orange |
| 1 / Node 2 | Node 02 RMM | 1GB SW P26 | Node02 P02 | 1GB SW P02 | Blue |
| 1 / Node 3 | Node 03 RMM | 1GB SW P27 | Node03 P03 | 1GB SW P03 | Black |
| 1 / Node 4 | Node 04 RMM | 1GB SW P28 | Node04 P04 | 1GB SW P04 | Green |
| 2 / Node 5 | Node 05 RMM | 1GB SW P29 | Node05 P05 | 1GB SW P05 | Brown |
| 2 / Node 6 | Node 06 RMM | 1GB SW P30 | Node06 P06 | 1GB SW P06 | Light Blue |
| 2 / Node 7 | Node 07 RMM | 1GB SW P31 | Node07 P07 | 1GB SW P07 | Purple |
| 2 / Node 8 | Node 08 RMM | 1GB SW P32 | Node08 P08 | 1GB SW P08 | Magenta |
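The private-switch assignments in Table 44 follow a simple pattern: node n's eth0 cable lands on port n and its RMM cable on port 24 + n. The sketch below encodes that pattern; it is inferred from the table, not an official Dell EMC formula.

```python
# Port-assignment pattern inferred from Table 44 (not an official formula):
# for node n (1-8), eth0 uses private-switch port n and RMM uses port 24 + n.

def private_switch_ports(node: int) -> dict:
    """Return the 1 GbE turtle-switch ports used by one node."""
    if not 1 <= node <= 8:
        raise ValueError("nodes are numbered 1-8")
    return {"eth0": f"1GB SW P{node:02d}", "rmm": f"1GB SW P{node + 24:02d}"}

print(private_switch_ports(5))  # matches the Node 5 row in Table 44
```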
Table 45 U- and D-Series 1 GB private switch management and interconnect cabling

| 1 GB switch port | 10 GB SW1 (rabbit) port | 10 GB SW2 (hare) port | Labels | Color |
|---|---|---|---|---|
| 49 | <...> - mgmt port | — | 10G SW2 MGMT / 1G SW P49 | White |
| 50 | — | <...> - mgmt port | 10G SW2 MGMT / 1G SW P50 | White |
| 51 | Rack/Segment Interconnect IN, or empty on the first rack | | | |
| 52 | Rack/Segment Interconnect OUT | | | |
Note
Ports 49 and 50 use 1-meter white cables. RJ45 SFPs are installed in ports 49 to 52.
C-Series network cabling
A full rack configuration in the C-Series is made up of two segments: lower and upper.
Each segment has a hare, rabbit, and turtle switch, and the two segments are connected. A configuration of six or fewer servers is a single-segment appliance and has one set of switches. Cabling information for the lower and upper segments for public and private switches is provided below.
Figure 64 C-Series public switch cabling for the lower segment from the rear
Figure 65 C-Series public switch cabling for the upper segment from the rear
Figure 66 C-Series private switch cabling for the lower segment from the rear
Figure 67 C-Series private switch cabling for the upper segment from the rear
Customer network connections
Customers connect to an ECS Appliance by way of the 10 GbE ports and their own interconnect cables. When multiple appliances are installed in the same data center, use daisy-chain or home-run connections from the private switches to a customer-provided switch.