ThinkSystem D2 Enclosure, Modular
Enclosure and ThinkSystem SD530 Compute
Node
Setup Guide
Machine Types: 7X20, 7X21, and 7X22
Note
Before using this information and the product it supports, be sure to read and understand the safety
information and the safety instructions, which are available at:
http://thinksystem.lenovofiles.com/help/topic/safety_documentation/pdf_files.html
In addition, be sure that you are familiar with the terms and conditions of the Lenovo warranty for your
solution, which can be found at:
http://datacentersupport.lenovo.com/warrantylookup
Fourth Edition (March 2017)
© Copyright Lenovo 2017, 2018.
LIMITED AND RESTRICTED RIGHTS NOTICE: If data or software is delivered pursuant to a General Services
Administration “GSA” contract, use, reproduction, or disclosure is subject to restrictions set forth in Contract No.
GS-35F-05925
Contents
Chapter 1. Introduction . . . . . . . . . 1
Solution package contents . . . . . . . . . . . . 4
Features . . . . . . . . . . . . . . . . . . . . . 4
Specifications . . . . . . . . . . . . . . . . . . 6
  Enclosure specifications . . . . . . . . . . . . 7
  Compute node specifications . . . . . . . . . . 8
  PCIe expansion node specifications . . . . . . . 12
Management options . . . . . . . . . . . . . . . . 13

Chapter 2. Solution components . . . 19
Front view . . . . . . . . . . . . . . . . . . . . 21
  Enclosure . . . . . . . . . . . . . . . . . . . 21
  Compute node . . . . . . . . . . . . . . . . . . 22
  Node operator panel . . . . . . . . . . . . . . 24
Rear view . . . . . . . . . . . . . . . . . . . . 26
  System Management Module (SMM) . . . . . . . . . 27
  PCIe slot LEDs . . . . . . . . . . . . . . . . . 30
System board layout . . . . . . . . . . . . . . . 30
  System-board internal connectors . . . . . . . . 30
  System-board switches . . . . . . . . . . . . . 31
KVM breakout cable . . . . . . . . . . . . . . . . 33
2.5-inch drive backplanes . . . . . . . . . . . . 33
Parts list . . . . . . . . . . . . . . . . . . . . 34
  Enclosure components . . . . . . . . . . . . . . 34
  Compute node components . . . . . . . . . . . . 36
  PCIe expansion node components . . . . . . . . . 38
  Power cords . . . . . . . . . . . . . . . . . . 41
Internal cable routing . . . . . . . . . . . . . . 41
  Four 2.5-inch-drive model . . . . . . . . . . . 41
  Six 2.5-inch-drive model . . . . . . . . . . . . 43
  Six 2.5-inch-drive model (with NVMe) . . . . . . 46
  KVM breakout module . . . . . . . . . . . . . . 49
  PCIe expansion node . . . . . . . . . . . . . . 51

Chapter 3. Solution hardware setup . . . 53
Solution setup checklist . . . . . . . . . . . . . 53
Installation Guidelines . . . . . . . . . . . . . 54
  System reliability guidelines . . . . . . . . . 55
  Working inside the solution with the power on . 55
  Handling static-sensitive devices . . . . . . . 56
Install solution hardware options . . . . . . . . 56
  Install hardware options in the enclosure . . . 57
  Install hardware options in the compute node . . 72
  Install hardware options in the PCIe expansion node . . . 100
Install the enclosure in a rack . . . . . . . . . 111
Cable the solution . . . . . . . . . . . . . . . . 111
Power on the compute node . . . . . . . . . . . . 112
Validate solution setup . . . . . . . . . . . . . 112
Power off the compute node . . . . . . . . . . . . 112

Chapter 4. System configuration . . . 113
Set the network connection for the Lenovo XClarity Controller . . . 113
Enable System Management Module network connection via Lenovo XClarity Controller . . . 114
Set front USB port for Lenovo XClarity Controller connection . . . 114
Update the firmware . . . . . . . . . . . . . . . 114
Configure the firmware . . . . . . . . . . . . . . 117
Memory configuration . . . . . . . . . . . . . . . 118
RAID configuration . . . . . . . . . . . . . . . . 118
Install the operating system . . . . . . . . . . . 119
Back up the solution configuration . . . . . . . . 119

Chapter 5. Resolving installation issues . . . 121

Appendix A. Getting help and technical assistance . . . 125
Before you call . . . . . . . . . . . . . . . . . 125
Collecting service data . . . . . . . . . . . . . 126
Contacting Support . . . . . . . . . . . . . . . . 127

Index . . . . . . . . . . . . . . . . . . . . . . 129
Chapter 1. Introduction
The ThinkSystem D2 Enclosure, Modular Enclosure combined with ThinkSystem SD530 Compute Node is a
2U solution designed for high-volume network transaction processing. This solution includes a single
enclosure that can contain up to four SD530 compute nodes, which are designed to deliver a dense, scalable
platform for distributed enterprise and hyperconverged solutions.
Figure 1. D2 Enclosure 7X20 and Modular Enclosure 7X22
The solution comes with a limited warranty. For details about the warranty, see:
https://datacentersupport.lenovo.com/documents/ht100742
For details about your specific warranty, see:
http://datacentersupport.lenovo.com/warrantylookup
Each SD530 supports up to six 2.5-inch hot-swap Serial Attached SCSI (SAS), Serial ATA (SATA), or Non-Volatile Memory express (NVMe) drives.
Note: The illustrations in this document might differ slightly from your model.
The enclosure machine type, model number and serial number are on the ID label that can be found on the
front of the enclosure, as shown in the following illustration.
Figure 2. ID label on the front of the enclosure
Table 1. ID label on the front of the enclosure
1 ID label
The network access tag can be found on the front of the node. You can pull away the network access tag and paste your own label on it to record information such as the host name, the system name, and the inventory bar code. Keep the network access tag for future reference.
Figure 3. Network access tag on the front of the node
The node model number and serial number are on the ID label that can be found on the front of the node (on
the underside of the network access tag), as shown in the following illustration.
Figure 4. ID label on the front of the node
The system service label, which is on the top of the enclosure, provides a QR code for mobile access to
service information. You can scan the QR code using a QR code reader and scanner with a mobile device
and get quick access to the Lenovo Service Information website. The Lenovo Service Information website
provides additional information for parts installation and replacement videos, and error codes for solution
support.
The following illustration shows QR codes for the enclosure and the node.
• Enclosure:
http://datacentersupport.lenovo.com/products/servers/thinksystem/d2-enclosure/7X20
Figure 5. D2 enclosure 7X20 QR code
http://datacentersupport.lenovo.com/products/servers/thinksystem/modular-enclosure/7X22
Figure 6. Modular enclosure 7X22 QR code
• Node: http://datacentersupport.lenovo.com/products/servers/thinksystem/sd530/7X21
Figure 7. Compute node QR code
Solution package contents
When you receive your solution, verify that the shipment contains everything that you expected to receive.
The solution package includes the following items:
Note: Some of the items listed are available on select models only.
• Compute node(s)
• Enclosure
• Shuttle
• Rail installation kit (optional). Detailed installation instructions are provided in the package with the rail installation kit.
• Cable management arm or cable management bar.
• Material box, including items such as power cords, rack installation template, and accessory kit.
Features
Performance, ease of use, reliability, and expansion capabilities were key considerations in the design of
your solution. These design features make it possible for you to customize the system hardware to meet your
needs today and provide flexible expansion capabilities for the future.
Enclosure:
• Redundant cooling and optional power capabilities
The enclosure supports a maximum of two 1100-watt, 1600-watt, or 2000-watt hot-swap power supplies
and five dual-motor hot-swap fans, which provide redundancy for a typical configuration. The redundant
cooling by the fans in the enclosure enables continued operation if one of the fans fails.
Note: You cannot mix 1100-watt, 1600-watt, and 2000-watt power supplies in the enclosure.
• PCI adapter capabilities
The enclosure supports up to eight low-profile PCIe x8 cards (two per node, from processor 1) or four low-profile PCIe x16 cards (one per node, from processor 1).
• Network support
The enclosure supports the 10Gb 8-port SFP+ or 10Gb 8-port Base-T (RJ45) EIOM cards, which provide
either 10Gb or 1Gb Ethernet to each node in the enclosure. The minimum networking speed
requirement for the EIOM card is 1Gbps.
• Redundant networking connection
The Lenovo XClarity Controller provides failover capability to a redundant Ethernet connection with the
applicable application installed. If a problem occurs with the primary Ethernet connection, all Ethernet
traffic that is associated with the primary connection is automatically switched to the optional redundant
Ethernet connection. If the applicable device drivers are installed, this switching occurs without data loss
and without user intervention.
• Systems-management capabilities
The enclosure comes with the System Management Module (SMM). When the SMM is used with the
systems-management software that comes with the solution, you can manage the functions of the solution locally
and remotely. The SMM also provides system monitoring, event recording, and network alert capability.
For additional information, see the System Management Module User's Guide at http://datacentersupport.lenovo.com.
• Features on Demand
If a Features on Demand feature is integrated in the solution or in an optional device that is installed in the
solution, you can purchase an activation key to activate the feature. For information about Features on
Demand, see:
https://fod.lenovo.com/lkms
• Mobile access to Lenovo Service Information website
The enclosure provides a QR code on the system service label, which is on the cover of the enclosure,
that you can scan using a QR code reader and scanner with a mobile device to get quick access to the
Lenovo Service Information website. The Lenovo Service Information website provides additional
information for parts installation, replacement videos, and error codes for solution support.
Node:
• Multi-core processing
The compute node supports Intel Xeon Scalable family multi-core processors. The compute node
comes with one processor installed.
• Large data-storage capacity and hot-swap capability (6 drive bays per node)
The solution supports a maximum of twenty-four 2.5-inch hot-swap Serial Attached SCSI (SAS), Serial
ATA (SATA) or Non-Volatile Memory express (NVMe) drives.
• Active Memory
The Active Memory feature improves the reliability of memory through memory mirroring. Memory
mirroring mode replicates and stores data on two pairs of DIMMs within two channels simultaneously. If a
failure occurs, the memory controller switches from the primary pair of memory DIMMs to the backup pair
of DIMMs.
• Large system-memory capacity
The solution supports up to 1,024 GB of system memory. The memory controller supports
error correcting code (ECC) on up to 16 industry-standard PC4-21300 (DDR4-2666) registered DIMMs (RDIMMs) or load-reduced DIMMs (LRDIMMs). For more information about the specific types and maximum amount of
memory, see “Specifications” on page 6.
• RAID support
The ThinkSystem RAID adapter provides hardware redundant array of independent disks (RAID) support
for creating configurations. The standard RAID adapter provides RAID levels 0, 1, 5, and 10. An optional
RAID adapter is available for purchase.
Note: During the RAID rebuild process, the drive is considered non-usable. The yellow drive status LED blinks,
the global drive status LED is lit, and the event is logged in the Lenovo XClarity Controller. When the
rebuild process is complete, the drive status LED and the global drive status LED turn off. You can use
the HBA utility to confirm the current drive and RAID status.
• Integrated Trusted Platform Module (TPM)
This integrated security chip performs cryptographic functions and stores private and public secure keys.
It provides the hardware support for the Trusted Computing Group (TCG) specification. You can
download the software to support the TCG specification, when the software is available.
Note: For customers in the People’s Republic of China, TPM is not supported. However, customers in the
People’s Republic of China can install a Trusted Cryptographic Module (TCM) adapter (sometimes called
a daughter card).
• Lenovo XClarity Administrator
Lenovo XClarity Administrator is a centralized resource-management solution that enables administrators
to deploy infrastructure faster and with less effort. The solution seamlessly integrates into System x,
ThinkServer, and NeXtScale servers, as well as the Flex System converged infrastructure platform.
Lenovo XClarity Administrator provides:
– Automated discovery
– Agent-free hardware management
– Monitoring
Administrators are able to find the right information and accomplish critical tasks faster through an
uncluttered, dashboard-driven graphical user interface (GUI). Centralizing and automating foundational
infrastructure deployment and lifecycle management tasks across large pools of systems frees up
administrator time, and makes resources available to end-users faster.
Lenovo XClarity is easily extended into the leading virtualization management platforms from Microsoft
and VMware using software plug-ins, called Lenovo XClarity Integrators. The solution improves workload
uptime and service-level assurance by dynamically relocating workloads from affected hosts in the cluster
during rolling solution reboots or firmware updates, or during predicted hardware failures.
For more information about Lenovo XClarity Administrator, see http://shop.lenovo.com/us/en/systems/software/systems-management/xclarity/ and http://pic.dhe.ibm.com/infocenter/flexsys/information/topic/com.lenovo.lxca.doc/aug_product_page.html.
• Lenovo XClarity Controller (XCC)
The Lenovo XClarity Controller is the common management controller for Lenovo ThinkSystem solution
hardware. The Lenovo XClarity Controller consolidates multiple management functions in a single chip on
the node system board.
Some of the features that are unique to the Lenovo XClarity Controller are enhanced performance, higher-resolution remote video, and expanded security options. For additional information about the Lenovo
XClarity Controller, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/product_page.html
• UEFI-compliant server firmware
Lenovo ThinkSystem firmware is Unified Extensible Firmware Interface (UEFI) 2.5 compliant. UEFI
replaces BIOS and defines a standard interface between the operating system, platform firmware, and
external devices.
Lenovo ThinkSystem servers are capable of booting UEFI-compliant operating systems, BIOS-based
operating systems, and BIOS-based adapters as well as UEFI-compliant adapters.
Note: The solution does not support DOS (Disk Operating System).
• Features on Demand
If a Features on Demand feature is integrated in the solution or in an optional device that is installed in the
solution, you can purchase an activation key to activate the feature. For information about Features on
Demand, see:
https://fod.lenovo.com/lkms
• Light path diagnostics
Light path diagnostics provides LEDs to help you diagnose problems. For more information about the light
path diagnostics, see Light path diagnostics panel and Light path diagnostics LEDs.
• Mobile access to Lenovo Service Information website
The node provides a QR code on the system service label, which is on the cover of the node, that you can
scan using a QR code reader and scanner with a mobile device to get quick access to the Lenovo Service
Information website. The Lenovo Service Information website provides additional information for parts
installation, replacement videos, and error codes for solution support.
Specifications
The following information is a summary of the features and specifications of the solution. Depending on the
model, some features might not be available, or some specifications might not apply.
Enclosure specifications
Features and specifications of the enclosure.
Table 2. Enclosure specifications
PCI expansion slots (depending on the enclosure model)
• PCIe 3.0 x8 shuttle:
  – Supports up to eight low-profile PCIe 3.0 x8 adapters
    One node supports up to two low-profile PCIe 3.0 x8 adapters from processor 1
• PCIe 3.0 x16 shuttle:
  – Supports up to four low-profile PCIe 3.0 x16 adapters
    One node supports one low-profile PCIe 3.0 x16 adapter from processor 1
Notes:
1. The PCIe 3.0 x16 shuttle supports PCIe cassettes that can be installed and removed without removing the shuttle from the enclosure.
2. Ensure that the node is powered off before unseating the PCIe cassette from the shuttle.

Hot-swap fans
• Three 60 x 60 x 56 mm fans
• Two 80 x 80 x 80 mm fans

Power supply (depending on the model)
Supports up to two hot-swap power supplies for redundancy (except when 240 V DC is applied through the C14 input connector).
• 1100-watt ac power supply
• 1600-watt ac power supply
• 2000-watt ac power supply
Important: Power supplies and redundant power supplies in the enclosure must have the same power rating (wattage).

System Management Module (SMM)
• Hot-swappable
• Equipped with ASPEED controller
• Offers an RJ45 port for management of the nodes and the SMM over 1Gb Ethernet

Ethernet I/O ports
Access to a pair of on-board 10Gb connections through two types of optional enclosure-level EIOM cards.
• Two optional EIOM cards:
  – 10Gb 8-port EIOM SFP+
  – 10Gb 8-port EIOM Base-T (RJ45)
• Minimum networking speed requirement for the EIOM card: 1Gbps
Note: The EIOM card is installed in the enclosure and provides direct access to the LAN functions of each node.

Size
2U enclosure
• Height: 87.0 mm (3.5 inches)
• Depth: 891.5 mm (35.1 inches)
• Width: 488.0 mm (19.3 inches)
• Weight:
  – Minimum configuration (with one minimal configuration node): 22.4 kg (49.4 lbs)
  – Maximum configuration (with four maximal configuration nodes): 55.0 kg (121.2 lbs)

Acoustical noise emissions
With the maximum configuration of four nodes with two processors installed, full memory installed, full drives installed, and two 2000-watt power supplies installed:
• Operation: 6.8 bels
• Idle: 6.2 bels

Heat output (based on two 2000-watt power supplies)
Approximate heat output:
• Minimum configuration (with one minimal configuration node): 604.1 BTU per hour (177 watts)
• Maximum configuration (with four maximal configuration nodes): 7564.4 BTU per hour (2610 watts)

Electrical input
• Sine-wave input (50-60 Hz) required
• Input voltage low range (the 1100-watt power supply is limited to 1050 watts):
  – Minimum: 100 V AC
  – Maximum: 127 V AC
• Input voltage high range (1100-watt/1600-watt/2000-watt):
  – Minimum: 200 V AC
  – Maximum: 240 V AC
• Input kilovolt-amperes (kVA), approximately:
  – Minimum: 0.153 kVA
  – Maximum: 2.61 kVA
Compute node specifications
Features and specifications of the compute node.
Table 3. Compute node specifications
Processor (depending on the model)
• Supports up to two Intel Xeon series multi-core processors (one installed)
• Level-3 cache
Notes:
1. Use the Setup utility to determine the type and speed of the processors in the node.
2. For a list of supported processors, see http://www.lenovo.com/us/en/serverproven/.

Memory
• Minimum: 8 GB (single DDR4 DIMM per processor)
• Maximum: 1,024 GB
  – 512 GB (16 x 32 GB RDIMM)
  – 1,024 GB (16 x 64 GB LRDIMM)
• Type: PC4-21300 (dual-rank), 2666 MT/s, error correcting code (ECC), double-data-rate 4 (DDR4) registered DIMM (RDIMM) or load reduced DIMM (LRDIMM)
• Supports (depending on the model):
  – 8 GB, 16 GB, and 32 GB RDIMMs
  – 64 GB LRDIMM
• Slots: 16 DIMM slots

Drive bays
Supports up to six 2.5-inch hot-swap SAS/SATA/NVMe drive bays.
Attention: As a general consideration, do not mix standard 512-byte and advanced 4-KB format drives in the same RAID array because it might lead to potential performance issues.
Supports the following 2.5-inch hot-swap drive backplanes:
• Four 2.5-inch hot-swap SAS/SATA backplane
• Six 2.5-inch hot-swap SAS/SATA backplane
• Six 2.5-inch hot-swap SAS/SATA/NVMe backplane
Important: Do not mix nodes with the four-drive backplane and nodes with the six-drive backplanes in the same enclosure. Mixing the four-drive and six-drive backplanes may cause unbalanced cooling.
RAID adapters (depending on the model)
• Software RAID support for RAID levels 0, 1, 5, and 10
• Hardware RAID support for RAID levels 0, 1, 5, and 10

Video controller (integrated into Lenovo XClarity Controller)
• ASPEED
• SVGA-compatible video controller
• Avocent Digital Video Compression
• Video memory is not expandable
Note: Maximum video resolution is 1920 x 1200 at 60 Hz.

Ethernet I/O port
Access to a pair of on-board 10Gb connections through two types of optional enclosure-level EIOM cards.
• Two optional EIOM cards:
  – 10Gb 8-port EIOM SFP+
  – 10Gb 8-port EIOM Base-T (RJ45)
• Minimum networking speed requirement for the EIOM card: 1Gbps
Note: The EIOM card is installed in the enclosure and provides direct access to the LAN functions of each node.

Size
Node
• Height: 41.0 mm (1.7 inches)
• Depth: 562.0 mm (22.2 inches)
• Width: 222.0 mm (8.8 inches)
• Weight:
  – Minimum weight: 3.5 kg (7.7 lb)
  – Maximum weight: 7.5 kg (16.6 lb)
Environment
The ThinkSystem SD530 complies with ASHRAE Class A2 specifications. Depending on the hardware configuration, some solution models comply with ASHRAE Class A3 or Class A4 specifications. System performance may be impacted when the operating temperature is outside the ASHRAE A2 specification or when a fan has failed. To comply with ASHRAE Class A3 and Class A4 specifications, the ThinkSystem SD530 must meet the following hardware configuration requirements:
• Lenovo supported processors. For unsupported processors, see attention note 1 below.
• Lenovo supported PCIe adapters. For unsupported PCIe adapters, see attention note 3 below.
• Two power supplies installed for redundancy. 1100-watt power supplies are not supported.
The ThinkSystem SD530 is supported in the following environment:
• Air temperature:
  Power on (see attention note 4):
  – ASHRAE Class A2: 10°C - 35°C (50°F - 95°F); above 900 m (2,953 ft), de-rate the maximum air temperature by 1°C per 300 m (984 ft)
  – ASHRAE Class A3: 5°C - 40°C (41°F - 104°F); above 900 m (2,953 ft), de-rate the maximum air temperature by 1°C per 175 m (574 ft)
  – ASHRAE Class A4: 5°C - 45°C (41°F - 113°F); above 900 m (2,953 ft), de-rate the maximum air temperature by 1°C per 125 m (410 ft)
  Power off (see attention note 5): 5°C to 45°C (41°F to 113°F)
• Maximum altitude: 3,050 m (10,000 ft)
• Relative humidity (non-condensing):
  Power on (see attention note 4):
  – ASHRAE Class A2: 8% - 80%, maximum dew point: 21°C (70°F)
  – ASHRAE Class A3: 8% - 85%, maximum dew point: 24°C (75°F)
  – ASHRAE Class A4: 8% - 90%, maximum dew point: 24°C (75°F)
  Shipment/storage: 8% - 90%
• Particulate contamination:
  Airborne particulates and reactive gases acting alone or in combination with other environmental factors such as humidity or temperature might pose a risk to the solution. For information about the limits for particulates and gases, see "Particulate contamination".

Power rating
12 V DC, 60 A
Attention:
1. The following processors are not supported with ASHRAE Class A3 and Class A4 specifications:
   • 165W processor, 28-core, 26-core or 18-core (Intel Xeon 8176, 8176M, 8170, 8170M, and 6150)
   • 150W processor, 26-core, 24-core, 20-core, 16-core or 12-core (Intel Xeon 8164, 8160, 8160M, 8158, 6148, 6142, 6142M, and 6136)
   • 140W processor, 22-core or 18-core (Intel Xeon 6152, 6140, and 6140M)
   • 140W processor, 14-core (Intel Xeon 6132)
   • 130W processor, 8-core (Intel Xeon 6134 and 6134M)
   • 125W processor, 20-core, 16-core or 12-core (Intel Xeon 6138, 6138T, 6130T, 6126)
   • 115W processor, 6-core (Intel Xeon 6128)
   • 105W processor, 14-core or 4-core (Intel Xeon 8156, 5122, and 5120T)
   • 70W processor, 8-core (Intel Xeon 4109T)
   Note: The affected processors include, but are not limited to, those listed above.
2. The following processors are not supported with ASHRAE Class A2, Class A3, and Class A4 specifications. These processors are provided for special-bid configurations only and require the customer's acceptance of the limitation consequences, which include power capping and a slight drop in performance when the ambient temperature is above 27°C:
   • 205W processor, 28-core or 24-core (Intel Xeon 8180, 8180M and 8168)
   • 200W processor, 18-core (Intel Xeon 6154)
   • 165W processor, 12-core (Intel Xeon 6146)
   • 150W processor, 24-core (Intel Xeon 8160T)
   • 150W processor, 8-core (Intel Xeon 6144)
   • 125W processor, 12-core (Intel Xeon 6126T)
   Note: The affected processors include, but are not limited to, those listed above.
3. The following PCIe adapters are not supported with ASHRAE Class A3 and Class A4 specifications:
   • Mellanox NIC with active optical cable
   • PCIe SSD
   • GPGPU card
   Note: The affected PCIe adapters include, but are not limited to, those listed above.
4. Enclosure is powered on.
5. Enclosure is removed from the original shipping container and is installed but not in use, for example, during repair, maintenance, or upgrade.
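As an illustration of the altitude de-rating rule in the Environment entry of Table 3: a node operating at 1,800 m is 900 m above the 900 m threshold, so under ASHRAE Class A2 the maximum allowed air temperature is reduced by 3°C (1°C per 300 m), from 35°C to 32°C.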
PCIe expansion node specifications
Features and specifications of the PCIe expansion node.
Table 4. PCIe expansion node specifications
Size
PCIe expansion node
• Height: 53.4 mm (2.11 inches)
• Depth: 562.0 mm (22.1 inches)
• Width: 222.0 mm (8.7 inches)
• Weight:
  – Without PCIe adapter and cables: 2.1 kg (4.6 lb)
  – Without PCIe adapter, with cables: 2.3 kg (5.1 lb)

PCI expansion slots
Supports up to two PCIe adapters with the following requirements:
1. When a compute-expansion node assembly is installed in an enclosure:
   • Two 2000-watt ac power supplies are required.
   • The other two node bays in the same enclosure must be installed with either of the following:
     – Another compute-expansion node assembly with one four-drive backplane installed in the compute node
     – Two node fillers
2. In the compute node that comes with the PCIe expansion node assembly:
   • Two processors are required in the compute node when two PCIe adapters are installed in the expansion node.
   • No RAID adapter should be installed in the compute node.
   • One four-drive 2.5-inch hot-swap SAS/SATA backplane should be installed in the compute node.
   • No more than 12 DIMMs should be installed in the compute node.
3. Concerning the GPU adapters installed in the node assembly:
   • Up to two 300 W passive GPU adapters (without fans) are supported.
   • The two GPU adapters must be of the same type.
   • When only one GPU adapter is installed, it must be installed in the rear riser slot.

Power rating
12 V DC, 60 A
Management options
Several management interfaces are available for managing your server. The management options described
in this section are provided to support the direct management of Lenovo servers.
Table: Management options matrix (function-to-tool support). Tools compared: Lenovo XClarity Administrator, Lenovo XClarity Integrator, Lenovo XClarity Energy Manager, Lenovo XClarity Provisioning Manager, Lenovo XClarity Essentials1, Lenovo XClarity Controller, Lenovo Capacity Planner, and Lenovo Business Vantage. Functions compared: multiple systems management, operating system deployment, firmware updates2, system configuration, events/alerts, inventory/log, power management, data center planning, and security management. See the tool descriptions later in this section and the notes below for details on which functions each tool supports.
Notes:
1. Lenovo XClarity Essentials includes Lenovo XClarity Essentials OneCLI, Lenovo XClarity Essentials
Bootable Media Creator, and Lenovo XClarity Essentials UpdateXpress.
2. Most options can be updated through the Lenovo tools. Some options, such as GPU firmware or Omni-Path firmware, require the use of vendor tools.
3. Firmware updates are limited to Lenovo XClarity Provisioning Manager, Lenovo XClarity Controller
firmware, and UEFI updates only. Firmware updates for optional devices, such as adapters, are not
supported.
4. Limited inventory.
5. Power management function is supported by Lenovo XClarity Integrator for VMware vCenter.
6. Available only in the People’s Republic of China.
Lenovo XClarity Administrator
Lenovo XClarity Administrator is a centralized, resource-management solution that simplifies infrastructure
management, speeds responses, and enhances the availability of Lenovo server systems and solutions. It
runs as a virtual appliance that automates discovery, inventory, tracking, monitoring, and provisioning for
server, network, and storage hardware in a secure environment.
Lenovo XClarity Administrator provides a central interface to perform the following functions for all managed
endpoints:
• Manage and monitor hardware. Lenovo XClarity Administrator provides agent-free hardware
management. It can automatically discover manageable endpoints, including server, network, and storage
hardware. Inventory data is collected for managed endpoints for an at-a-glance view of the managed
hardware inventory and status.
• Configuration management. You can quickly provision and pre-provision all of your servers using a
consistent configuration. Configuration settings (such as local storage, I/O adapters, boot settings,
firmware, ports, and Lenovo XClarity Controller and UEFI settings) are saved as a server pattern that can
be applied to one or more managed servers. When the server patterns are updated, the changes are
automatically deployed to the applied servers.
• Firmware compliance and updates. Firmware management is simplified by assigning firmware-compliance policies to managed
endpoints, Lenovo XClarity Administrator monitors changes to the inventory for those endpoints and flags
any endpoints that are out of compliance.
When an endpoint is out of compliance, you can use Lenovo XClarity Administrator to apply and activate
firmware updates for all devices in that endpoint from a repository of firmware updates that you manage.
• Operating System deployment. You can use Lenovo XClarity Administrator to manage a repository of
operating-system images and to deploy operating-system images to up to 28 managed servers
concurrently.
• Service and support. Lenovo XClarity Administrator can be set up to collect and send diagnostic files
automatically to your preferred service provider when certain serviceable events occur in Lenovo XClarity
Administrator and the managed endpoints. You can choose to send diagnostic files to Lenovo Support
using Call Home or to another service provider using SFTP. You can also manually collect diagnostic files,
open a problem record, and send diagnostic files to the Lenovo Support Center.
Lenovo XClarity Administrator can be integrated into external, higher-level management and automation
platforms through open REST application programming interfaces (APIs). Using the REST APIs, Lenovo
XClarity Administrator can easily integrate with your existing management infrastructure. In addition, you can
automate tasks using the PowerShell toolkit or the Python toolkit.
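The following is a minimal sketch of that kind of integration, not the official tooling: it logs in to the Lenovo XClarity Administrator REST API and lists the managed nodes. The address, account, and certificate handling are placeholders, and the endpoint paths (POST /sessions, GET /nodes) should be verified against the REST API reference for your Lenovo XClarity Administrator release.

# Create a session (stores the session cookie), then list managed nodes.
curl -k -c lxca-cookies.txt -H "Content-Type: application/json" \
     -d '{"UserName":"USERID","Password":"PASSW0RD"}' https://<XClarity Administrator's IP>/sessions
curl -k -b lxca-cookies.txt https://<XClarity Administrator's IP>/nodes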
To obtain the latest version of the Lenovo XClarity Administrator, see:
https://datacentersupport.lenovo.com/documents/LNVO-LXCAUPD
Documentation for Lenovo XClarity Administrator is available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/aug_product_page.html
Lenovo XClarity Integrator
Lenovo also provides the following integrators that you can use to manage Lenovo servers from higher-level
management tools:
• Lenovo XClarity Integrator for VMware vCenter
• Lenovo XClarity Integrator for Microsoft System Center
For more information about Lenovo XClarity Integrator, see:
http://www3.lenovo.com/us/en/data-center/software/systems-management/xclarity-integrators
Lenovo XClarity Energy Manager
Lenovo XClarity Energy Manager is a web-based power and temperature management solution designed for
data center administrators. It monitors and manages the power consumption and temperature of servers,
such as Converged, NeXtScale, System x, ThinkServer, and ThinkSystem servers. Lenovo XClarity Energy
Manager models data center physical hierarchy and monitors power and temperature at the server/group
level. By analyzing monitored power and temperature data, Lenovo XClarity Energy Manager greatly
improves business continuity and energy efficiency.
With Lenovo XClarity Energy Manager, administrators can take control of power usage through improved
data analysis and lower the TCO (total cost of ownership). The tool optimizes data center efficiency by
allowing administrators to:
• Monitor energy consumption, estimate power need, and re-allocate power to servers as needed via IPMI
or Redfish.
• Track platform power consumption, inlet temperature, CPU and memory power consumption.
• Visually check the layout of room, row and rack via 2D thermal map.
• Send notifications when certain events occur or thresholds are reached.
• Limit the consumed amount of energy of an endpoint by setting up policies.
• Optimize energy efficiency by monitoring real-time inlet temperatures, identifying low-usage servers
based on out-of-band power data, measuring power ranges for different server models, and evaluating
how servers accommodate new workloads based on the availability of resources.
• Reduce the power consumption to the minimum level to prolong service time during an emergency power
event (such as a data-center power failure).
For more information about downloading, installation, and usage, see:
https://datacentersupport.lenovo.com/solutions/lnvo-lxem
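As a quick spot check of the out-of-band power data that Lenovo XClarity Energy Manager consumes, you can read a node's power figure directly over IPMI. This is only a sketch: it assumes IPMI over LAN is enabled and that the management controller exposes the standard DCMI power-reading extension; <XCC's IP>, USERID, and PASSW0RD are placeholders.

# Read the current platform power consumption through the XCC.
ipmitool -I lanplus -H <XCC's IP> -U USERID -P PASSW0RD dcmi power reading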
Lenovo XClarity Provisioning Manager
Lenovo XClarity Provisioning Manager is embedded software that provides a graphical user interface (GUI) for
configuring the system with support for 10 languages. It simplifies the process of configuring Basic Input
Output System (BIOS) settings and configuring Redundant Array of Independent Disks (RAID) in a GUI
wizard. It also provides functions for updating applications and firmware, performing system diagnostics,
and automating the process of installing the supported Windows, Linux, or VMware ESXi operating systems
and associated device drivers.
Note: When you start a server and press F1, the Lenovo XClarity Provisioning Manager interface is displayed
by default. However, the text-based interface to system configuration (the Setup Utility) is also available.
From Lenovo XClarity Provisioning Manager, you can choose to restart the server and access the text-based
interface. In addition, you can choose to make the text-based interface the default interface that is displayed
when you press F1.
Lenovo XClarity Provisioning Manager provides a system summary of all installed devices and includes the
following functions:
• UEFI setup. Use this function to configure UEFI system settings, such as processor configuration, start
options, and user security. You can also view POST events and the System Event Log (SEL).
• Firmware update. Use this function to update the firmware for Lenovo XClarity Controller, Unified
Extensible Firmware Interface (UEFI), Lenovo XClarity Provisioning Manager, and operating system device
drivers.
• RAID setup. Use this function to configure RAID for the server. It provides an easy-to-use graphical
wizard that supports a unified process for performing RAID setup for a variety of RAID adapters. You can
also perform advanced RAID configuration from the UEFI Setup.
• OS installation. Use this function to deploy an operating system for the server with an easy-to-use
Guided Install mode. Operating systems can be installed using unattended mode after you choose the
Operating System version and basic settings; the device drivers are installed automatically.
A Manual Install mode is also available. You can export the drivers from the system, manually install the
operating systems, and then install the drivers. This way, you do not need to go to the web to download
device drivers.
• Diagnostics. Use this function to view the overall health of devices installed in the server and to perform
diagnostics for hard disk drives and memory. You can also collect service data that can be saved to a
USB device and sent to Lenovo Support.
Note: The service data collected by Lenovo XClarity Provisioning Manager does not include the operating
system logs. To collect the operating system logs and the hardware service data, use Lenovo XClarity
Essentials OneCLI.
Documentation for Lenovo XClarity Provisioning Manager is available at:
http://sysmgt.lenovofiles.com/help/topic/LXPM/LXPM_introduction.html
Lenovo XClarity Essentials
Lenovo XClarity Essentials (LXCE) is a collection of server management utilities that enables customers to
manage Lenovo ThinkSystem, System x, and ThinkServer servers more efficiently and cost-effectively.
Lenovo XClarity Essentials includes the following utilities:
• Lenovo XClarity Essentials OneCLI is a collection of several command line applications, which can be
used to:
– Configure the server.
– Collect service data for the server. If you run Lenovo XClarity Essentials OneCLI from the server
operating system (in-band), you can collect operating system logs as well. You can also choose to view
the service data that has been collected or to send the service data to Lenovo Support.
– Update firmware and device drivers for the server. Lenovo XClarity Essentials OneCLI can help to
download UpdateXpress System Packs (UXSPs) for your server and update all the firmware and device
driver payloads within the UXSP.
– Perform miscellaneous functions, such as rebooting the server or rebooting the BMC.
To learn more about Lenovo XClarity Essentials OneCLI (a sample invocation follows this list), see:
https://datacentersupport.lenovo.com/documents/LNVO-CENTER
Documentation for Lenovo XClarity Essentials OneCLI is available at:
http://sysmgt.lenovofiles.com/help/topic/xclarity_essentials/overview.html
• Lenovo XClarity Essentials Bootable Media Creator (BoMC) is a software application that applies
UpdateXpress System Packs and individual updates to your system.
Using Lenovo XClarity Essentials Bootable Media Creator, you can:
– Update the server using an ISO image or CD.
– Update the server using a USB key.
– Update the server using the Preboot Execution Environment (PXE) interface.
– Update the server in unattended mode.
– Update the server in Serial Over LAN (SOL) mode.
To learn more about Lenovo XClarity Essentials Bootable Media Creator, see:
https://datacentersupport.lenovo.com/solutions/lnvo-bomc
• Lenovo XClarity Essentials UpdateXpress is a software application that applies UpdateXpress System
Packs and individual updates to your system.
Using Lenovo XClarity Essentials UpdateXpress, you can:
– Update the local server.
– Update a remote server.
– Create a repository of updates.
To learn more about Lenovo XClarity Essentials UpdateXpress, see:
https://datacentersupport.lenovo.com/solutions/lnvo-xpress
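As an indicative example of the Lenovo XClarity Essentials OneCLI usage described above, the following invocation shows system settings out of band through the XCC. Treat it as a sketch: the exact command syntax and options can vary between OneCLI releases, so confirm it against the OneCLI documentation; <XCC's IP>, USERID, and PASSW0RD are placeholders.

# Show configuration settings through the XCC (out of band).
OneCli.exe config show --bmc USERID:PASSW0RD@<XCC's IP>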
Lenovo XClarity Controller
Lenovo XClarity Controller is the management processor for the server. It is the third generation of the
Integrated Management Module (IMM) service processor that consolidates the service processor
functionality, super I/O, video controller, and remote presence capabilities into a single chip on the server
system board.
There are two ways to access the management processor:
• Web-based interface. To access the web-based interface, point your browser to the IP address for the
management processor.
• Command-line interface. To access the command-line interface, use SSH or Telnet to log in to the management
processor.
Whenever power is applied to a server, the management processor is available. From the management
processor interface, you can perform the following functions:
• Monitor all hardware devices installed in the server.
• Power the server on and off.
• View the system event log and system audit log for the server.
• Use the Remote management function to log in to the server itself.
Documentation for Lenovo XClarity Controller is available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/product_page.html
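In addition to the web and SSH interfaces, the management processor can be reached with standard IPMI tooling when IPMI over LAN is enabled. The following minimal sketch illustrates two of the functions listed above (checking power state and viewing the event log); <XCC's IP>, USERID, and PASSW0RD are placeholders, and the default XCC address noted elsewhere in this guide is 192.168.70.125.

ipmitool -I lanplus -H <XCC's IP> -U USERID -P PASSW0RD chassis power status   # power state
ipmitool -I lanplus -H <XCC's IP> -U USERID -P PASSW0RD sel elist              # system event log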
Lenovo Capacity Planner
Lenovo Capacity Planner is a power consumption evaluation tool that enhances data center planning by
enabling IT administrators and pre-sales to understand important parameters of different types of racks,
servers, and other devices. Lenovo Capacity Planner can dynamically calculate the power consumption,
current, British Thermal Unit (BTU), and volt-ampere (VA) rating at the rack level, and therefore improves the
efficiency of large scale deployments.
Lenovo Capacity Planner provides the following functions:
• Power/thermal evaluation in different deployments, device copying, configuration saving, and reporting.
• Customizable server configuration, selectable workload, CPU turbo mode, and worst-case fan conditions for
different evaluations in different user scenarios.
• Flex/Density servers, chassis and node level customizable configuration.
• Easy to download and run with popular web browsers, like Internet Explorer 11, Firefox, Chrome, and
Edge.
Note: Users can also access the Lenovo website to run the tool online.
More information about Lenovo Capacity Planner is available at:
https://datacentersupport.lenovo.com/solutions/lnvo-lcp
Lenovo Business Vantage
Lenovo Business Vantage is a security software tool suite designed to work with the Trusted Cryptographic
Module (TCM) adapter for enhanced security, to keep user data safe, and to erase confidential data
completely from a hard disk drive.
Lenovo Business Vantage provides the following functions:
• Data Safe. Encrypt files to ensure data safety by using the TCM chip, even when the disk is at rest.
• Sure Erase. Erase confidential data from a hard disk. This tool follows industry-standard erase methods and
allows the user to select different erase levels.
• Smart USB Protection. Prohibit unauthorized access to the USB port of devices.
• USB Data Safe. Encrypt data on a USB flash drive for a specific device and prohibit access to the data on other
devices.
Note: This tool is available in the People’s Republic of China only.
More information about Lenovo Business Vantage is available at:
http://support.lenovo.com.cn/lenovo/wsi/es/es.html
Chapter 2. Solution components
Use the information in this section to learn about each of the components associated with your solution.
When you contact Lenovo for help, the machine type, model, and serial number information helps support
technicians to identify your solution and provide faster service.
Each SD530 supports up to six 2.5-inch hot-swap Serial Attached SCSI (SAS), Serial ATA (SATA), or Non-Volatile Memory express (NVMe) drives.
Note: The illustrations in this document might differ slightly from your model.
The enclosure machine type, model number and serial number are on the ID label that can be found on the
front of the enclosure, as shown in the following illustration.
Figure 8. ID label on the front of the enclosure
Table 5. ID label on the front of the enclosure
1 ID label
The network access tag can be found on the front of the node. You can pull away the network access tag and paste your own label on it to record information such as the host name, the system name, and the inventory bar code. Keep the network access tag for future reference.
Figure 9. Network access tag on the front of the node
The node model number and serial number are on the ID label that can be found on the front of the node (on
the underside of the network access tag), as shown in the following illustration.
Figure 10. ID label on the front of the node
The system service label, which is on the top of the enclosure, provides a QR code for mobile access to
service information. You can scan the QR code using a QR code reader and scanner with a mobile device
and get quick access to the Lenovo Service Information website. The Lenovo Service Information website
provides additional information for parts installation and replacement videos, and error codes for solution
support.
The following illustration shows QR codes for the enclosure and the node.
• Enclosure:
http://datacentersupport.lenovo.com/products/servers/thinksystem/d2-enclosure/7X20
Figure 11. D2 enclosure 7X20 QR code
http://datacentersupport.lenovo.com/products/servers/thinksystem/modular-enclosure/7X22
Figure 12. Modular enclosure 7X22 QR code
• Node: http://datacentersupport.lenovo.com/products/servers/thinksystem/sd530/7X21
Figure 13. Compute node QR code
Front view
The following illustration shows the controls, LEDs, and connectors on the front of the server.
Enclosure
The following illustration shows the controls, LEDs, and connectors on the front of the enclosure.
Notes:
1. The illustrations in this document might differ slightly from your hardware.
2. For proper cooling, a node filler has to be installed into every empty node bay in every configuration.
The enclosure supports the following configurations:
Up to four compute nodes.
The following illustration shows the node bays in the enclosure.
Figure 14. Enclosure front view with compute nodes and node bay numbering
Up to two PCIe expansion node assemblies.
Figure 15. Compute-expansion node assembly
A compute-expansion node assembly consists of a PCIe expansion node and a compute node, to which the
expansion node is installed. The node assembly takes two vertically adjacent node bays in an enclosure. See
“PCIe expansion node specifications” on page 12 for detailed PCIe expansion node requirements.
Note: Do not mix a compute-expansion node assembly with compute nodes in the same enclosure. When a
compute-expansion node assembly is installed in an enclosure, fill the other two node bays with either two
node fillers or another compute-expansion node assembly.
Figure 16. Enclosure front view with PCIe expansion node assemblies
Table 6. Enclosure front view with PCIe expansion node assemblies
1 PCIe expansion node
3 Compute node
2 PCIe expansion node
4 Compute node
Compute node
The following illustration shows the controls, LEDs, and connectors on the front of the compute node.
Six 2.5-inch drive configuration
See the following illustration for components, connectors and drive bay numbering in six 2.5-inch drive
configuration.
Figure 17. Six 2.5-inch drive configuration and drive bay numbering
Table 7. Components in six 2.5-inch drive configuration
1 Activity LED (green)
2 Status LED (yellow)
Drive LEDs:
1 Activity LED (green): Each hot-swap drive has a green activity LED. When this green LED is lit, it indicates that
there is activity on the associated hard disk drive or solid-state drive.
• When this LED is flashing, it indicates that the drive is actively reading or writing data.
• For SAS and SATA drives, this LED is off when the drive is powered but not active.
• For NVMe (PCIe) SSDs, this LED is on solid when the drive is powered but not active.
Note: The drive activity LED might be in a different location on the front of the drive, depending on the drive
type that is installed.
2 Status LED (yellow): The state of this yellow LED indicates an error condition or the RAID status of the
associated hard disk drive or solid-state drive:
• When the yellow LED is lit continuously, it indicates that an error has occurred with the associated drive.
The LED turns off only after the error is corrected. You can check event logs to determine the source of
the condition.
• When the yellow LED flashes slowly, it indicates that the associated drive is being rebuilt.
• When the yellow LED flashes rapidly, it indicates that the associated drive is being located.
Note: The hard disk drive status LED might be in a different location on the front of the hard disk drive,
depending on the drive type that is installed.
Five 2.5-inch drive configuration with KVM breakout module
See the following illustration for components, connectors and drive bay numbering in five 2.5-inch drive
configuration with KVM breakout module.
Figure 18. Five 2.5-inch drive configuration with KVM breakout module and drive bay numbering
Table 8. Components in five 2.5-inch drive configuration with the KVM breakout module
1 KVM connector
3 Micro USB connector for Lenovo XClarity Controller
management
2 USB 3.0 connector
4 KVM breakout module
KVM breakout module comes with the following connectors:
1 KVM connector: Connect the console breakout cable to this connector (see “KVM breakout cable” on
page 33 for more information).
2 USB 3.0 connector: Connect a USB device to this USB 3.0 connector.
3 Micro USB connector for Lenovo XClarity Controller management: The connector provides direct
access to the Lenovo XClarity Controller by allowing you to connect a mobile device to the system and manage
it with Lenovo XClarity Mobile. For more information, see http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/product_page.html and http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/aug_product_page.html.
Notes:
1. Ensure that you use a high-quality OTG cable or a high-quality converter when connecting a mobile
device. Be aware that some cables that are supplied with mobile devices are only for charging purposes.
2. Once a mobile device is connected, it is ready to use; no further action is required.
Four 2.5-inch drive configuration with KVM breakout module
See the following illustration for components, connectors and drive bay numbering in four 2.5-inch drive
configuration with KVM breakout module.
Figure 19. Four 2.5-inch drive configuration with KVM breakout module and drive bay numbering
Table 9. Components in four 2.5-inch drive configuration with the KVM breakout module
1 KVM connector
4 KVM breakout module
2 USB 3.0 connector
5 Drive bay filler
3 Micro USB connector for Lenovo XClarity Controller
management
Node operator panel
The following illustration shows the controls and LEDs on the node operator panel.
Figure 20. Node operator panel
Table 10. Node operator panel
1 NMI pinhole
3 Identification button/LED
2 System error LED
4 Power button/LED
1 NMI pinhole: Insert the tip of a straightened paper clip into this pinhole to force a non-maskable interrupt
(NMI) on the node, which results in a memory dump. Use this function only when you are advised to do so
by a Lenovo support representative.
2 System error LED: When this LED (yellow) is lit, it indicates that at least one system error has occurred.
Check the event log for additional information.
3 Identification button/LED: This LED (blue) serves to visually locate the compute node. It can be turned
on by pressing the identification button or by using the following commands (a scripted example follows at the end of this section).
• Turn on:
ipmitool.exe -I lanplus -H <XCC’s IP> -U USERID -P PASSW0RD raw 0x3a 0x08 0x01 0x00
• Turn off:
ipmitool.exe -I lanplus -H <XCC’s IP> -U USERID -P PASSW0RD raw 0x3a 0x08 0x01 0x01
Notes:
1. Default XCC’s IP address is 192.168.70.125
2. The behavior of this LED is determined by the SMM ID LED when SMM ID LED is turned on or
blinking. For the exact location of SMM ID LED, see “System Management Module (SMM)” on page
27.
Table 11. Different SMM ID LED modes and Node ID LED behavior
SMM identification LED
Node identification LEDs
Off
All node ID LEDs are turned off when this mode is activated. Afterwards, SMM ID
LED enters accept mode, while node ID LEDs determine the behavior of SMM ID
LEDs (see “Enclosure rear overview” in System Management Module User’s Guide
for more information).
On
All the node ID LEDs are on except the blinking ones, which remain blinking.
Blink
All the node ID LEDs are blinking regardless of previous status.
4 Power button/LED: When this LED is lit (green), it indicates that the node has power. This green LED
indicates the power status of the compute node:
• Flashing rapidly: The LED flashes rapidly for the following reasons:
– The node has been installed in an enclosure. When you install the compute node, the LED flashes
rapidly for up to 90 seconds while the Lenovo XClarity Controller in the node is initializing.
– The power source is not sufficient to turn on the node.
– The Lenovo XClarity Controller in the node is not communicating with the System Management
Module.
• Flashing slowly: The node is connected to the power through the enclosure and ready to turn on.
• Lit continuously: The node is connected to the power through the enclosure.
• Not lit: There is no power to the node.
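The identification commands shown earlier in this section can be combined into a small script. The following is a minimal sketch only: it uses the raw byte sequences documented above, assumes ipmitool (or ipmitool.exe on Windows) is installed on the management workstation, and uses placeholder credentials and the default XCC address from note 1.

# Light the node identification LED for 30 seconds, then turn it off again.
XCC_IP=192.168.70.125
ipmitool -I lanplus -H "$XCC_IP" -U USERID -P PASSW0RD raw 0x3a 0x08 0x01 0x00   # LED on
sleep 30
ipmitool -I lanplus -H "$XCC_IP" -U USERID -P PASSW0RD raw 0x3a 0x08 0x01 0x01   # LED off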
Rear view
The following illustration shows the connectors and LEDs on the rear of the enclosure.
The following illustration shows the rear view of the entire system.
• Shuttle with eight low profile PCIe x8 slots
Figure 21. Rear view - The enclosure with x8 shuttle installed
Table 12. Components on x8 shuttle
1 10Gb 8-port EIOM cage (SFP+)
8 PCIe slot 1-B
2 10Gb 8-port EIOM cage (RJ45)
9 Power supply 2
3 PCIe slot 4-B
10 Power supply 1
4 10Gb 8-port EIOM cage filler
11 Single Ethernet port System Management Module
5 PCIe slot 3-B
12 PCIe slot 2-B
6 PCIe slot 3-A
13 PCIe slot 2-A
7 PCIe slot 1-A
14 PCIe slot 4-A
Note: Make sure the power cord is properly connected to every power supply unit installed.
• Shuttle with four low profile PCIe x16 cassette bays
Figure 22. Rear view - The enclosure with x16 shuttle installed
Table 13. Components on x16 shuttle
1 10Gb 8-port EIOM cage (SFP+)
6 Power supply 2
2 10Gb 8-port EIOM cage (RJ45)
7 Power supply 1
3 10Gb 8-port EIOM cage filler
8 System Management Module
4 PCIe slot 3
9 PCIe slot 2
5 PCIe slot 1
10 PCIe slot 4
Note: Make sure the power cord is properly connected to every power supply unit installed.
System Management Module (SMM)
The following section includes information about the connectors and LEDs on the rear of the System
Management Module (SMM).
Two types of SMM are supported in this solution. See the following illustrations to discern the type of SMM
that you have.
Single Ethernet port SMM
Figure 23. Rear view - Single Ethernet port SMM
Table 14. Single Ethernet port SMM
1 Reset pinhole
5 System error LED (yellow)
2 USB port service mode button
6 Identification LED (blue)
3 Ethernet connector
7 Status LED (green)
4 USB connector
8 System power LED (green)
You can access the dedicated XCC network port of the four nodes via the Ethernet connector on the Single
Ethernet port SMM. Open a web browser, and use the IP address to access the XCC. For more details, see the System Management
Module User's Guide.
The following four LEDs on the single Ethernet port SMM provide information about SMM operating status.
5 System error LED (yellow):
When this LED is lit, it indicates that a system error has occurred. Check the event log for additional
information.
6 Identification LED (blue):
This LED can be lit to determine the physical location of the specific enclosure in which the SMM is installed (an example script follows at the end of this subsection).
Use the following commands to control the identification LED and locate the enclosure.
• Turn on:
ipmitool.exe -I lanplus -H <SMM’s IP> -U USERID -P PASSW0RD raw 0x32 0x97 0x01 0x01
• Turn off:
ipmitool.exe -I lanplus -H <SMM’s IP> -U USERID -P PASSW0RD raw 0x32 0x97 0x01 0x00
Note: The default SMM IP address is 192.168.70.100
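For example, assuming the SMM is still at its default address and the default credentials have not been changed, the following commands turn the identification LED on and then off again from a system on the same management network:
ipmitool.exe -I lanplus -H 192.168.70.100 -U USERID -P PASSW0RD raw 0x32 0x97 0x01 0x01
ipmitool.exe -I lanplus -H 192.168.70.100 -U USERID -P PASSW0RD raw 0x32 0x97 0x01 0x00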
To identify the solution from the front side, see “Node operator panel” on page 24 for more information.
7 Status LED (green):
This LED indicates the operating status of the SMM.
• Continuously on: The SMM has encountered one or more problems.
• Off: When the enclosure power is on, it indicates that the SMM has encountered one or more problems.
• Flashing: The SMM is working.
– During the pre-boot process, the LED flashes rapidly (about four times per second).
– When the pre-boot process is completed and the SMM is working correctly, the LED flashes at a slower speed (about once per second).
8 System power LED (green):
When this LED is lit, it indicates that the SMM power is on.
Dual Ethernet port SMM
Figure 24. Rear view - Dual Ethernet port SMM
Table 15. Dual Ethernet port SMM
1 System power LED (green)
2 Status LED (green)
3 Identification LED (blue)
4 System error LED (yellow)
5 Ethernet connector
6 Ethernet connector
7 Reset pinhole
You can access the dedicated XCC network port of the four nodes via either of the SMM Ethernet connectors. Open the SMM web interface and use the corresponding IP address to access the XCC. For more details, see the System Management Module User's Guide.
The following four LEDs on the dual Ethernet port SMM provide information about SMM operating status.
1 System power LED (green):
When this LED is lit, it indicates that the SMM power is on.
2 Status LED (green):
This LED indicates the operating status of the SMM.
• Continuously on: The SMM has encountered one or more problems.
• Off: When the enclosure power is on, it indicates that the SMM has encountered one or more problems.
• Flashing: The SMM is working.
– During the pre-boot process, the LED flashes rapidly (about four times per second).
– When the pre-boot process is completed and the SMM is working correctly, the LED flashes at a slower speed (about once per second).
3 Identification LED (blue):
This LED can be lit to determine the physical location of the specific enclosure in which the SMM is installed.
Use the following commands to control the identification LED and locate the enclosure.
• Turn on:
ipmitool.exe -I lanplus -H <SMM’s IP> -U USERID -P PASSW0RD raw 0x32 0x97 0x01 0x01
• Turn off:
ipmitool.exe -I lanplus -H <SMM’s IP> -U USERID -P PASSW0RD raw 0x32 0x97 0x01 0x00
Note: The default SMM IP address is 192.168.70.100
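As a usage sketch, assuming the SMM has been assigned the hypothetical address 192.168.70.110 and the default credentials are unchanged, you could turn the identification LED on before walking to the rack, then turn it off after you locate the enclosure with the lit blue LED:
ipmitool.exe -I lanplus -H 192.168.70.110 -U USERID -P PASSW0RD raw 0x32 0x97 0x01 0x01
ipmitool.exe -I lanplus -H 192.168.70.110 -U USERID -P PASSW0RD raw 0x32 0x97 0x01 0x00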
To identify the solution from the front side, see “Node operator panel” on page 24 for more information.
4 System error LED (yellow):
When this LED is lit, it indicates that a system error has occurred. Check the event log for additional
information.
PCIe slot LEDs
The following illustration shows the LEDs on the rear of PCIe 3.0 x16 shuttle.
Figure 25. Rear view - PCIe 3.0 x16 LEDs
Table 16. PCIe slot LEDs
1 PCIe slot 4 LED
2 PCIe slot 3 LED
3 PCIe slot 1 LED
4 PCIe slot 2 LED
These four LEDs provide the operating status of PCIe 3.0 x16 adapters.
There are two colors of LEDs you might see:
• Green: It indicates the PCIe adapter is working normally.
• Yellow (orange): It indicates the PCIe adapter has encountered one or more problems.
System board layout
The illustrations in this section provide information about the connectors and switches that are available on
the compute node system board.
System-board internal connectors
The following illustration shows the internal connectors on the system board.
Figure 26. Internal connectors on the system board
Table 17. Internal connectors on the system board
1 CMOS battery (CR2032)
2 PCIe slot 3 connector
3 PCIe slot 4 connector
4 KVM breakout cable connector
5 PCIe slot 1 connector (for RAID adapter)
6 PCIe slot 2 connector
7 Processor 1
8 SATA 1 connector
9 SATA 2 connector
10 M.2 connector
11 Trusted cryptographic module (TCM) connector
12 KVM breakout module USB connector
13 Processor 2
14 Backplane miscellaneous signal connector
15 Backplane power connector
The following illustration shows the location of the DIMM connectors on the system board.
Figure 27. The location of the DIMM connectors on the system board
System-board switches
The following illustration shows the location and description of the switches.
Important:
1. If there is a clear protective sticker on the switch blocks, you must remove and discard it to access the
switches.
2. Any system-board switch or jumper block that is not shown in the illustrations in this document is reserved.
Figure 28. Location of the switches, jumpers, and buttons on the system board
The following table describes the jumpers on the system board.
Table 18. Jumper definition
Switch block S18
• Switch 2, XClarity Controller boot backup. Open: Normal (default). Close: The compute node will boot by using a backup of the XClarity Controller firmware.
• Switch 3, XClarity Controller force update. Open: Normal (default). Close: Enables XClarity Controller force update.
• Switch 4, TPM physical presence. Open: Normal (default). Close: Indicates a physical presence to the system TPM.
Switch block S19
• Switch 1, System UEFI backup. Open: Normal (default). Close: Enables system BIOS backup.
• Switch 2, Password override jumper. Open: Normal (default). Close: Overrides the power-on password.
• Switch 3, CMOS clear jumper. Open: Normal (default). Close: Clears the real-time clock (RTC) registry.
Important:
1. Before you change any switch settings or move any jumpers, turn off the solution; then, disconnect all
power cords and external cables. Review the information in http://thinksystem.lenovofiles.com/help/topic/
safety_documentation/pdf_files.html, “Installation Guidelines” on page 54, “Handling static-sensitive
devices” on page 56, and “Power off the compute node” on page 112.
2. Any system-board switch or jumper block that is not shown in the illustrations in this document is reserved.
KVM breakout cable
Use this information for details about the KVM breakout cable.
Use the KVM breakout cable to connect external I/O devices to the compute node. The KVM breakout cable
connects through the KVM connector (see “System-board internal connectors” on page 30). The KVM
breakout cable has connectors for a display device (video), two USB 2.0 connectors for a USB keyboard and
mouse, and a serial interface connector.
The following illustration identifies the connectors and components on the KVM breakout cable.
Figure 29. Connectors and components on the KVM breakout cable
Table 19. Connectors and components on the console breakout cable
1 Serial connector
2 Captive screws
3 To KVM connector
4 Video connector (blue)
5 USB 2.0 connectors (2)
2.5-inch drive backplanes
The following illustration shows the respective 2.5-inch drive backplanes.
Important: Do not mix nodes with the four-drive backplane and six-drive backplanes in the same enclosure.
Mixing the four-drive backplane and six-drive backplanes may cause unbalanced cooling.
• Four 2.5-inch SAS/SATA backplane
Figure 30. Four 2.5-inch SAS/SATA backplane
• Six 2.5-inch SAS/SATA backplane
Figure 31. Six 2.5-inch SAS/SATA backplane
• Six 2.5-inch hot-swap SAS/SATA/NVMe backplane
Figure 32. Six 2.5-inch hot-swap SAS/SATA/NVMe backplane
1 NVMe connector
Parts list
Use the parts list to identify each of the components that are available for your solution.
Note: Depending on the model, your solution might look slightly different from the following illustrations.
Enclosure components
This section includes the components that come with the enclosure.
Figure 33. Enclosure components
The parts listed in the following table are identified as one of the following:
• Tier 1 customer replaceable unit (CRU): Replacement of Tier 1 CRUs is your responsibility. If Lenovo
installs a Tier 1 CRU at your request with no service agreement, you will be charged for the installation.
• Tier 2 customer replaceable unit: You may install a Tier 2 CRU yourself or request Lenovo to install it, at
no additional charge, under the type of warranty service that is designated for your server.
• Field replaceable unit (FRU): FRUs must be installed only by trained service technicians.
• Consumable and Structural parts: Purchase and replacement of consumable and structural parts
(components, such as a cover or bezel) is your responsibility. If Lenovo acquires or installs a structural
component at your request, you will be charged for the service.
Table 20. Parts listing, enclosure
For more information about ordering the parts shown in Figure 33 "enclosure components" on page 35:
https://datacentersupport.lenovo.com/products/servers/thinksystem/d2-enclosure/7X20/parts
1 10Gb 8-port EIOM cage filler
2 10Gb 8-port EIOM cage (SFP+)
3 10Gb 8-port EIOM Base-T cage (RJ45)
4 Cassette (for PCIe x16 shuttle)
5 System Management Module
6 Power supply
7 Power supply filler panel
8 Enclosure
9 Node filler panel
10 Fan cover
11 PCIe x8 shuttle
12 PCIe x16 shuttle
13 PCIe I/O riser (PIOR) left
14 PCIe I/O riser (PIOR) right
15 80x80x80mm fan
16 60x60x56mm fan
Compute node components
This section includes the components that come with the compute node.
Figure 34. Compute node components
Table 21. Parts listing, compute node
For more information about ordering the parts shown in Figure 34 "compute node components" on page 37:
https://datacentersupport.lenovo.com/products/servers/thinksystem/sd530/7x21/parts
1 PCIe adapter
2 Air baffle
3 Processor and heat sink assembly
4 Trusted Cryptographic Module
5 M.2 backplane
6 DIMM
7 2.5-inch drive bay blank (for empty bays next to the backplane)
8 2.5-inch drive bay blank panel (for drive bays on the backplane)
9 2.5-inch hot-swap drive
10 Compute node tray
11 KVM breakout module
12 2.5-inch 6 drives hot-swap SAS/SATA backplane
13 2.5-inch 6 drives hot-swap SAS/SATA/NVMe backplane
14 2.5-inch 4 drives hot-swap SAS/SATA backplane
15 Compute node cover
PCIe expansion node components
This section includes the components that come with the PCIe expansion node.
Note: The PCIe expansion node must be installed on a compute node before being installed into the enclosure. See "compute-expansion node assembly replacement" in the Maintenance Manual for the detailed installation procedure and requirements.
Figure 35. PCIe expansion node components
Table 22. Parts listing, PCIe expansion node
For more information about ordering the parts shown in Figure 35 "PCIe expansion node components" on page 39:
https://datacentersupport.lenovo.com/products/servers/thinksystem/sd530/7x21/parts
1 PCIe expansion node
2 Cable bracket
3 Risers, front and rear
4 PCIe adapter
Notes:
1. This component is not included in the PCIe expansion node option kit.
2. The illustration might differ slightly from your hardware.
5 Rear cable cover
6 PCIe expansion node power board
7 PCIe#1-A cable
8 PCIe#2-B cable
9 PCIe#3-A cable
10 PCIe#4-B cable
Power cords
Several power cords are available, depending on the country and region where the server is installed.
To view the power cords that are available for the server:
1. Go to:
http://dcsc.lenovo.com/#/
2. Click Preconfigured Model or Configure to order.
3. Enter the machine type and model for your server to display the configurator page.
4. Click Power ➙ Power Cables to see all line cords.
Notes:
• For your safety, a power cord with a grounded attachment plug is provided to use with this product. To
avoid electrical shock, always use the power cord and plug with a properly grounded outlet.
• Power cords for this product that are used in the United States and Canada are listed by Underwriters Laboratories (UL) and certified by the Canadian Standards Association (CSA).
• For units intended to be operated at 115 volts: Use a UL-listed and CSA-certified cord set consisting of a
minimum 18 AWG, Type SVT or SJT, three-conductor cord, a maximum of 15 feet in length and a parallel
blade, grounding-type attachment plug rated 15 amperes, 125 volts.
• For units intended to be operated at 230 volts (U.S. use): Use a UL-listed and CSA-certified cord set
consisting of a minimum 18 AWG, Type SVT or SJT, three-conductor cord, a maximum of 15 feet in length
and a tandem blade, grounding-type attachment plug rated 15 amperes, 250 volts.
• For units intended to be operated at 230 volts (outside the U.S.): Use a cord set with a grounding-type
attachment plug. The cord set should have the appropriate safety approvals for the country in which the
equipment will be installed.
• Power cords for a specific country or region are usually available only in that country or region.
Internal cable routing
Some of the components in the node have internal cable connectors.
Note: Disengage all latches, release tabs, or locks on cable connectors when you disconnect cables from
the system board. Failing to release them before removing the cables will damage the cable sockets on the
system board, which are fragile. Any damage to the cable sockets might require replacing the system board.
Some options, such as RAID adapter and backplanes, might require additional internal cabling. See the
documentation that is provided for the option to determine any additional cabling requirements and
instructions.
Four 2.5-inch-drive model
Use this section to understand how to route cables for the four 2.5-inch-drive model.
Four 2.5-inch-drive model
• Four 2.5-inch hot-swap SAS/SATA backplane
Figure 36. Four 2.5-inch hot-swap SAS/SATA backplane
Table 23. Components on the four 2.5-inch hot-swap SAS/SATA backplane
1 Ambient sensor cable
2 Miscellaneous signal cable
3 Backplane power cable
• Four 2.5-inch drive cable routing
Figure 37. Four 2.5-inch drive cable routing
Table 24. Components on the four 2.5-inch drive cable routing
1 Internal cable management baskets
2 SAS/SATA cable
3 SATA 1 connector
• Four 2.5-inch drive with hardware RAID cable routing
Figure 38. Four 2.5-inch drive with hardware RAID cable routing
Table 25. Components on the four 2.5-inch drive with hardware RAID cable routing
1 SAS/SATA cable
2 Internal cable management baskets
3 RAID adapter
Six 2.5-inch-drive model
Use this section to understand how to route cables for the six 2.5-inch-drive model.
Six 2.5-inch-drive model
• Six 2.5-inch hot-swap SAS/SATA backplane
Figure 39. Six 2.5-inch hot-swap SAS/SATA backplane
Table 26. Components on the six 2.5-inch hot-swap SAS/SATA backplane
1 Ambient sensor cable
2 Miscellaneous signal cable
3 Backplane power cable
• Six 2.5-inch drive cable routing
Figure 40. Six 2.5-inch drive cable routing
Table 27. Components on the six 2.5-inch drive cable routing
1 6 Internal cable management basket
2 5 SAS/SATA cable
3 SATA 1 connector
4 SATA 2 connector
• Six 2.5-inch drive with hardware RAID cable routing
Figure 41. Six 2.5-inch drive with hardware RAID cable routing
Note: Route the 1 SAS/SATA cable as shown in the illustration to avoid cable slack.
Table 28. Components on the six 2.5-inch drive with hardware RAID cable routing
1 4 SAS/SATA cable
2 5 Internal cable management basket
3 RAID adapter
Six 2.5-inch-drive model (with NVMe)
Use this section to understand how to route cables for the six 2.5-inch-drive model (with NVMe).
Six 2.5-inch-drive model (with NVMe)
• Six 2.5-inch hot-swap SAS/SATA/NVMe backplane
Figure 42. Six 2.5-inch hot-swap SAS/SATA/NVMe backplane
Table 29. Components on the six 2.5-inch hot-swap SAS/SATA/NVMe backplane
1 Ambient sensor cable
2 Miscellaneous signal cable
3 Backplane power cable
• Six 2.5-inch drive cable routing (with NVMe)
Figure 43. Six 2.5-inch drive cable routing (with NVMe)
Table 30. Components on the six 2.5-inch drive cable routing (with NVMe)
1 NVMe cable
2 PCIe slot 3 connector
3 8 Internal cable management basket
4 7 SAS/SATA cable
5 SATA 1 connector
6 SATA 2 connector
• Six 2.5-inch drive (with NVMe) with hardware RAID cable routing
Figure 44. Six 2.5-inch drive (with NVMe) with hardware RAID cable routing
Note: Route the 1 SAS/SATA cable as shown in the illustration to avoid cable slack.
Table 31. Components on six 2.5-inch drive with hardware RAID cable routing
1 6 SAS/SATA cable
2 NVMe cable
3 PCIe slot 3 connector
4 7 Internal cable management basket
5 RAID adapter
KVM breakout module
Use this section to understand how to route cables for your KVM breakout module.
• The right KVM breakout module (for four 2.5-inch-drive model)
Figure 45. KVM breakout module installed in drive bay 4
Table 32. Components on the KVM breakout module installed in drive bay 4
1 Long signal cable
2 5 Internal cable management basket
3 KVM breakout cable connector
4 USB connector
6 Short signal cable
• The left KVM breakout module (for six 2.5-inch-drive model)
Figure 46. KVM breakout module installed in drive bay 0
Table 33. Components on the KVM breakout module installed in drive bay 0
1 Short signal cable
2 5 Internal cable management basket
3 KVM breakout cable connector
4 USB connector
6 Long signal cable
PCIe expansion node
Use this section to understand how to route cables for a PCIe expansion node.
Following are the cables that come with a PCIe expansion node:
• Front PCIe riser assembly
Figure 47. Front riser assembly cables
Table 34. Front riser assembly cables
1 Riser miscellaneous cable for the front riser assembly
2 Auxiliary power cable for the PCIe adapter in the front riser assembly
3 PCIe#4-B cable
4 PCIe#3-A cable
• Rear riser assembly
Figure 48. Rear riser assembly cables
Table 35. Rear riser assembly cables
1 PCIe#2-B cable
2 PCIe#1-A cable
3 Riser miscellaneous cable for the rear riser assembly
4 Auxiliary power cable for the PCIe adapter in the rear riser assembly
Notes: Make sure the following conditions are met before installing the rear riser cable cover.
1. If the PCIe#2-B cable is connected to the rear riser assembly, make sure it is routed under the PCIe#1-A
cable through the gap between the two front riser power connectors.
2. If the PCIe#1-A cable is connected to the rear riser assembly, make sure it is routed above the PCIe#2-B
cable through the gap between the two front riser power connectors.
3. When both riser assemblies are installed, make sure the front riser auxiliary power cable is looped back
into the gap between the two front riser power connectors, and routed above the PCIe#2-B cable.
Figure 49. Routing PCIe#1-A, PCIe#2-B and the front riser auxiliary power cable
Chapter 3. Solution hardware setup
To set up the solution, install any options that have been purchased, cable the solution, configure and update
the firmware, and install the operating system.
Solution setup checklist
Use the solution setup checklist to ensure that you have performed all tasks that are required to set up your
solution.
The solution setup procedure varies depending on the configuration of the solution when it was delivered. In
some cases, the solution is fully configured and you just need to connect the solution to the network and an
ac power source, and then you can power on the solution. In other cases, the solution needs to have
hardware options installed, requires hardware and firmware configuration, and requires an operating system
to be installed.
The following steps describe the general procedure for setting up a solution:
1. Unpack the solution package. See “Solution package contents” on page 4.
2. Set up the solution hardware.
a. Install any required hardware or solution options. See the related topics in “Install solution hardware
options” on page 56.
b. If necessary, install the solution into a standard rack cabinet by using the rail kit shipped with the
solution. See the Rack Installation Instructions that come with the optional rail kit.
c. Connect the Ethernet cables and power cords to the solution. See “Rear view” on page 26 to locate
the connectors. See “Cable the solution” on page 111 for cabling best practices.
d. Power on the solution. See “Power on the compute node” on page 112.
Note: You can access the management processor interface to configure the system without
powering on the solution. Whenever the solution is connected to power, the management processor
interface is available. For details about accessing the management node processor, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/dw1lm_c_chapter2_
openingandusing.html
e. Validate that the solution hardware was set up successfully. See “Validate solution setup” on page
112.
3. Configure the system.
a. Connect the Lenovo XClarity Controller to the management network. See “Set the network
connection for the Lenovo XClarity Controller” on page 113.
b. Update the firmware for the solution, if necessary. See “Update the firmware” on page 114.
c. Configure the firmware for the solution. See “Configure the firmware” on page 117.
The following information is available for RAID configuration:
• https://lenovopress.com/lp0578-lenovo-raid-introduction
• https://lenovopress.com/lp0579-lenovo-raid-management-tools-and-resources
d. Install the operating system. See “Install the operating system” on page 119.
e. Back up the solution configuration. See “Back up the solution configuration” on page 119.
f. Install the applications and programs for which the solution is intended to be used.
Installation Guidelines
Use the installation guidelines to install components in your solution.
Before installing optional devices, read the following notices carefully:
Attention: Prevent exposure to static electricity, which might lead to system halt and loss of data, by
keeping static-sensitive components in their static-protective packages until installation, and handling these
devices with an electrostatic-discharge wrist strap or other grounding system.
• Read the safety information and guidelines to ensure that you work safely.
– A complete list of safety information for all products is available at:
http://thinksystem.lenovofiles.com/help/topic/safety_documentation/pdf_files.html
– The following guidelines are available as well: “Handling static-sensitive devices” on page 56 and
“Working inside the solution with the power on” on page 55.
• Make sure the components you are installing are supported by the solution. For a list of supported
optional components for the solution, see http://www.lenovo.com/us/en/serverproven/.
• When you install a new solution, download and apply the latest firmware. This will help ensure that any
known issues are addressed, and that your solution is ready to work with optimal performance. Go to
ThinkSystem D2 Enclosure, Modular Enclosure combined with ThinkSystem SD530 Compute Node Drivers and
Software to download firmware updates for your solution.
Important: Some cluster solutions require specific code levels or coordinated code updates. If the
component is part of a cluster solution, verify that the latest level of code is supported for the cluster
solution before you update the code.
• It is good practice to make sure that the solution is working correctly before you install an optional
component.
• Keep the working area clean, and place removed components on a flat and smooth surface that does not
shake or tilt.
• Do not attempt to lift an object that might be too heavy for you. If you have to lift a heavy object, read the
following precautions carefully:
– Make sure that you can stand steadily without slipping.
– Distribute the weight of the object equally between your feet.
– Use a slow lifting force. Never move suddenly or twist when you lift a heavy object.
– To avoid straining the muscles in your back, lift by standing or by pushing up with your leg muscles.
• Make sure that you have an adequate number of properly grounded electrical outlets for the solution,
monitor, and other devices.
• Back up all important data before you make changes related to the disk drives.
• Have a small flat-blade screwdriver, a small Phillips screwdriver, and a T8 torx screwdriver available.
• To view the error LEDs on the system board and internal components, leave the power on.
• You do not have to turn off the solution to remove or install hot-swap power supplies, hot-swap fans, or
hot-plug USB devices. However, you must turn off the solution before you perform any steps that involve
removing or installing adapter cables, and you must disconnect the power source from the solution before
you perform any steps that involve removing or installing a riser card.
• Blue on a component indicates touch points, where you can grip to remove a component from or install it
in the solution, open or close a latch, and so on.
• Orange on a component or an orange label on or near a component indicates that the component can be
hot-swapped if the solution and operating system support hot-swap capability, which means that you can
remove or install the component while the solution is still running. (Orange can also indicate touch points
on hot-swap components.) See the instructions for removing or installing a specific hot-swap component
for any additional procedures that you might have to perform before you remove or install the component.
• The red strip on the drives, adjacent to the release latch, indicates that the drive can be hot-swapped if the solution and operating system support hot-swap capability. This means that you can remove or install the drive while the solution is still running.
Note: See the system specific instructions for removing or installing a hot-swap drive for any additional
procedures that you might need to perform before you remove or install the drive.
• After finishing working on the solution, make sure you reinstall all safety shields, guards, labels, and
ground wires.
System reliability guidelines
Review the system reliability guidelines to ensure proper system cooling and reliability.
Make sure the following requirements are met:
• When the server comes with redundant power, a power supply must be installed in each power-supply
bay.
• Adequate space must be left around the server so that the server cooling system can work properly. Leave approximately 50 mm (2.0 in.) of open space around the front and rear of the server. Do not place any object in front of the fans.
• For proper cooling and airflow, refit the server cover before you turn the power on. Do not operate the server for more than 30 minutes with the server cover removed, because doing so might damage server components.
• Cabling instructions that come with optional components must be followed.
• A failed fan must be replaced within 48 hours after it fails.
• A removed hot-swap fan must be replaced within 30 seconds after removal.
• A removed hot-swap drive must be replaced within two minutes after removal.
• A removed hot-swap power supply must be replaced within two minutes after removal.
• Every air baffle that comes with the server must be installed when the server starts (some servers might
come with more than one air baffle). Operating the server with a missing air baffle might damage the
processor.
• All processor sockets must contain either a socket cover or a processor with heat sink.
• When more than one processor is installed, fan population rules for each server must be strictly followed.
• Do not operate the enclosure without the SMM assembly installed. Operating the solution without the
SMM assembly might cause the system to fail. Replace the System Management Module (SMM)
assembly as soon as possible after removal to ensure proper operation of the system.
Working inside the solution with the power on
Guidelines to work inside the solution with the power on.
Attention: The solution might stop and loss of data might occur when internal solution components are
exposed to static electricity. To avoid this potential problem, always use an electrostatic-discharge wrist
strap or other grounding systems when working inside the solution with the power on.
• Avoid loose-fitting clothing, particularly around your forearms. Button or roll up long sleeves before
working inside the solution.
• Prevent your necktie, scarf, badge rope, or long hair from dangling into the solution.
• Remove jewelry, such as bracelets, necklaces, rings, cuff links, and wrist watches.
• Remove items from your shirt pocket, such as pens and pencils, in case they fall into the solution as you
lean over it.
• Avoid dropping any metallic objects, such as paper clips, hairpins, and screws, into the solution.
Handling static-sensitive devices
Use this information to handle static-sensitive devices.
Attention: Prevent exposure to static electricity, which might lead to system halt and loss of data, by
keeping static-sensitive components in their static-protective packages until installation, and handling these
devices with an electrostatic-discharge wrist strap or other grounding system.
• Limit your movement to prevent building up static electricity around you.
• Take additional care when handling devices during cold weather, because heating reduces indoor humidity and increases static electricity.
• Always use an electrostatic-discharge wrist strap or other grounding system, particularly when working
inside the solution with the power on.
• While the device is still in its static-protective package, touch it to an unpainted metal surface on the
outside of the solution for at least two seconds. This drains static electricity from the package and from
your body.
• Remove the device from the package and install it directly into the solution without putting it down. If it is
necessary to put the device down, put it back into the static-protective package. Never place the device
on the solution or on any metal surface.
• When handling a device, carefully hold it by the edges or the frame.
• Do not touch solder joints, pins, or exposed circuitry.
• Keep the device out of others' reach to prevent possible damage.
Install solution hardware options
This section includes instructions for performing initial installation of optional hardware. Each component
installation procedure references any tasks that need to be performed to gain access to the component
being replaced.
Installation procedures are presented in the optimum sequence to minimize work.
Attention: To ensure the components you install work correctly without problems, read the following
precautions carefully.
• Make sure the components you are installing are supported by the solution. For a list of supported
optional components for the solution, see http://www.lenovo.com/us/en/serverproven/.
• Always download and apply the latest firmware. This will help ensure that any known issues are
addressed, and that your solution is ready to work with optimal performance. Go to ThinkSystem D2
Enclosure, Modular Enclosure combined with ThinkSystem SD530 Compute Node Drivers and Software to
download firmware updates for your solution.
• It is good practice to make sure that the solution is working correctly before you install an optional
component.
• Follow the installation procedures in this section and use appropriate tools. Incorrectly installed
components can cause system failure from damaged pins, damaged connectors, loose cabling, or loose
components.
Install hardware options in the enclosure
Use the following information to remove and install the enclosure options.
Remove the shuttle
Use this information to remove the shuttle.
Before you remove the shuttle:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Power off all compute nodes and peripheral devices (see “Power off the compute node” on page 112).
3. Disengage all the compute nodes from the enclosure.
4. Disconnect the power cords and all external cables from the rear of the enclosure.
Attention: Be careful when you are removing or installing the shuttle to avoid damaging the shuttle
connectors.
Figure 50. Shuttle connectors
Complete the following steps to remove the shuttle.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Step 1.
Turn the two thumbscrews counterclockwise and lift the handles up.
Step 2.
Pull the handles and slide the shuttle halfway out of the chassis.
Figure 51. Shuttle removal
Step 3.
Push two release latches and slide the whole shuttle out of the chassis.
Figure 52. Shuttle removal
Attention: To prevent any damage to the shuttle connectors, make sure that you hold the shuttle
properly to put it down as illustrated.
Figure 53. Shuttle connectors
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Remove the EIOM
Use this information to remove the EIOM.
Before you remove the EIOM:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Power off all compute nodes and peripheral devices (see “Power off the compute node” on page 112).
3. Disengage all the compute nodes from the enclosure.
4. Disconnect the power cords and all external cables from the rear of the enclosure.
5. Remove the shuttle (see “Remove the shuttle” on page 57) and place it on the stable work surface.
Complete the following steps to remove the EIOM.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
• For 10GbE cage (SFP+) model
Figure 54. EIOM removal
• For 10GBASE-T cage (RJ-45) model
Figure 55. EIOM removal
• For EIOM filler
Figure 56. EIOM filler removal
Step 1.
Disconnect two cables from the EIOM. (Skip this step for the EIOM filler)
Note: Make sure you push the release latch only when disconnecting the signal cable.
Step 2.
Turn the thumbscrews counterclockwise.
Step 3.
Grasp and push the EIOM slightly towards the front side of the shuttle.
Step 4.
Lift the EIOM up to remove the EIOM from the shuttle.
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Install a low-profile PCIe x16 adapter
Use this information to install a low-profile PCIe x16 adapter.
Before you install a low-profile PCIe x16 adapter:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Touch the static-protective package that contains the adapter to any unpainted metal surface on the
solution; then, remove the adapter from the package.
4. Locate the adapter.
Figure 57. Adapter location
5. Place the adapter, component side up, on a flat, static-protective surface and set any jumpers or
switches as described by the adapter manufacturer, if necessary.
Complete the following steps to install a low-profile PCIe x16 adapter.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Step 1.
Remove the adapter cassette.
a.
Slide the release latch to the open position.
b.
Slide the adapter cassette out of the shuttle.
Figure 58. Adapter cassette removal
Step 2.
Install the adapter to the adapter cassette.
a.
Remove the screws.
b.
Slide the expansion-slot cover out.
c.
Align the gold finger on the adapter with the cassette, then, insert the adapter into the adapter
cassette.
d.
Loosen the bracket screws about 1/4 turn, adjust the adapter bracket to secure the adapter according to your adapter length; then, tighten the bracket screws.
e.
Fasten the screw to secure the adapter to the cassette.
f.
Connect any required cables to the adapter.
Figure 59. Adapter installation
Step 3.
Reinstall the adapter cassette.
a.
Slide the release latch to the open position.
Note: Pay attention to the adapter cassette position when you install it, and see the following illustration for the accurate position information.
b.
Carefully align the adapter cassette with the guides on the shuttle; then, slide the adapter
cassette into the shuttle and make sure that the cassette is fully seated.
c.
Slide the release latch to the close position.
Figure 60. Adapter cassette installation
After you install a low-profile PCIe x16 adapter, complete the following steps.
1. Reconnect the power cords and any cables that you removed.
2. Turn on all compute nodes.
Install a low-profile PCIe x8 adapter
Use this information to install a low-profile PCIe x8 adapter.
Before you install a low-profile PCIe x8 adapter:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Power off all compute nodes and peripheral devices (see “Power off the compute node” on page 112).
3. Disengage all the compute nodes from the enclosure.
4. Disconnect the power cords and all external cables from the rear of the enclosure.
5. Remove the shuttle from the enclosure (see “Remove the shuttle” on page 57).
6. Locate the adapter.
Figure 61. Adapter location
7. Touch the static-protective package that contains the adapter to any unpainted metal surface on the
solution; then, remove the adapter from the package.
8. Place the adapter, component side up, on a flat, static-protective surface and set any jumpers or
switches as described by the adapter manufacturer.
Complete the following steps to install a low-profile PCIe x8 adapter.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 62. Adapter installation
Step 1.
Slide the retention bracket forward and rotate it to the open position.
Step 2.
Remove screw (if necessary).
Step 3.
Slide the expansion-slot cover out of the shuttle.
Step 4.
Align the adapter with the PCI connector on the shuttle and press the adapter firmly into the PCI
connector on the shuttle.
Step 5.
Rotate the retention bracket and slide toward the rear of the shuttle to the close position.
Step 6.
Fasten the screw if necessary.
Note: Fasten the screw if the solution is in a vibration-prone environment or if you plan to transport the solution.
After you install a low-profile PCIe x8 adapter, complete the following steps.
1. Reinstall the shuttle (see "Install the shuttle" on page 70).
2. Reconnect the power cords and any cables that you removed.
3. Push all compute nodes back into the enclosure (see “Install a compute node in the enclosure” on page
99).
4. Turn on all compute nodes.
Install a low-profile PCIe x8 adapter in PCIe slot 3-B and 4-B
Use this information to install a low-profile PCIe x8 adapter in PCIe slot 3-B and 4-B.
Before you install a low-profile PCIe x8 adapter in PCIe slot 3-B and 4-B:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Power off all compute nodes and peripheral devices (see “Power off the compute node” on page 112).
3. Disengage all the compute nodes from the enclosure.
4. Disconnect the power cords and all external cables from the rear of the enclosure.
5. Remove the shuttle (see “Remove the shuttle” on page 57).
6. Remove the EIOM card (see “Remove the EIOM” on page 59).
7. Touch the static-protective package that contains the adapter to any unpainted metal surface on the
solution; then, remove the adapter from the package.
8. Locate the adapter.
Figure 63. Adapter location
9. Place the adapter, component side up, on a flat, static-protective surface and set any jumpers or
switches as described by the adapter manufacturer, if necessary.
Complete the following steps to install a low-profile PCIe x8 adapter in PCIe slot 3-B and 4-B.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 64. Adapter installation
Step 1.
Slide the expansion-slot cover out of the shuttle.
Step 2.
Align the adapter with the PCI connector on the shuttle and press the adapter firmly into the PCI
connector on the shuttle.
After you install a low-profile PCIe x8 adapter in PCIe slot 3-B and 4-B, complete the following steps.
1. Reinstall the EIOM card (see “Install the EIOM” on page 68).
2. Reinstall the shuttle (see "Install the shuttle" on page 70).
3. Reconnect the power cords and any cables that you removed.
4. Push all compute nodes back into the enclosure (see “Install a compute node in the enclosure” on page
99).
5. Turn on all compute nodes.
Install a hot-swap power supply
Use this information to install a hot-swap power supply.
To avoid possible danger, read and follow the following safety statement.
• S001
DANGER
Electrical current from power, telephone, and communication cables is hazardous.
To avoid a shock hazard:
– Do not connect or disconnect any cables or perform installation, maintenance, or
reconfiguration of this product during an electrical storm.
– Connect all power cords to a properly wired and grounded electrical outlet.
– Connect to properly wired outlets any equipment that will be attached to this product.
– When possible, use one hand only to connect or disconnect signal cables.
– Never turn on any equipment when there is evidence of fire, water, or structural damage.
– Disconnect the attached power cords, telecommunications systems, networks, and modems
before you open the device covers, unless instructed otherwise in the installation and
configuration procedures.
– Connect and disconnect cables as described in the following table when installing, moving, or
opening covers on this product or attached devices.
To Connect:
1.
2.
3.
4.
5.
Turn everything OFF.
First, attach all cables to devices.
Attach signal cables to connectors.
Attach power cords to outlet.
Turn device ON.
To Disconnect:
1.
2.
3.
4.
Turn everything OFF.
First, remove power cords from outlet.
Remove signal cables from connectors.
Remove all cables from devices.
• S035
CAUTION:
Never remove the cover on a power supply or any part that has this label attached.
Hazardous voltage, current, and energy levels are present inside any component that has this label
attached. There are no serviceable parts inside these components. If you suspect a problem with
one of these parts, contact a service technician.
Before you install a hot-swap power supply:
Notes:
1. Make sure the devices you are installing are supported. For a list of supported optional devices for the
solution, see http://www.lenovo.com/us/en/serverproven/.
2. Do not install two power supply units with different wattages. Related information is available from the
following:
• Read the label on the top cover for the maximum wattage output of the installed power supply units. Only replace the existing units with units of the same wattage as marked on the label.
• Check the rear of the node to make sure there is no length difference between the two installed units. If there is a visible difference in length, the two units have different wattages, and one of them has to be replaced.
Complete the following steps to install a hot-swap power supply.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 65. Hot-swap power supply installation
Step 1.
Slide the hot-swap power supply into the bay until the release latch clicks into place.
Important: During normal operation, each power-supply bay must contain either a power supply
or power-supply filler panel for proper cooling.
Step 2.
Connect one end of the power cord for the new power supply into the AC connector on the back of
the power supply; then, connect the other end of the power cord into a properly grounded electrical
outlet.
Note: Connect the power cord to the power supply unit, and make sure it's properly connected to
the power.
Step 3.
If the node is turned off, turn on the node.
Step 4.
Make sure that the ac power LED on the power supply is lit, indicating that the power supply is
operating correctly. If the node is turned on, make sure that the dc power LED on the power supply
is lit also.
After you install a hot-swap power supply, complete the following steps:
1. Reconnect the power cords and any cables that you removed.
2. Turn on all compute nodes.
Install the EIOM
Use this information to install the EIOM.
Before you install the EIOM:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Turn off the server and peripheral devices and disconnect the power cords and all external cables (see
“Power off the compute node” on page 112).
3. Disengage all the compute nodes from the enclosure.
4. Remove the shuttle (see “Remove the shuttle” on page 57) and place it on the stable work surface.
Note: The minimum networking speed requirement for the EIOM is 1Gbps.
Complete the following steps to install the EIOM.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Step 1.
Grasp the EIOM and align the four EIOM tabs with the slots in the shuttle; then, lower the EIOM into
the slots.
• For 10GbE cage (SFP+) model
Figure 66. EIOM installation
• For 10GBASE-T cage (RJ-45) model
Figure 67. EIOM installation
• For EIOM filler
Figure 68. EIOM filler installation
Step 2.
Pull the EIOM slightly towards the rear side of the shuttle.
Step 3.
Connect required cables to the EIOM. (Skip this step for the EIOM filler)
Step 4.
Turn the thumbscrews clockwise.
After you install the EIOM, complete the following steps:
1. Reinstall the shuttle (see “Install the shuttle” on page 70).
2. Reconnect the power cords and any cables that you removed.
3. Push all compute nodes back into the enclosure (see “Install a compute node in the enclosure” on page
99).
4. Turn on all compute nodes.
Install the shuttle
Use this information to install the shuttle.
Before you install the shuttle:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Power off all compute nodes and peripheral devices (see “Power off the compute node” on page 112).
3. Disengage all the compute nodes from the enclosure.
4. Disconnect the power cords and all external cables from the rear of the enclosure.
Attention: Be careful when you are removing or installing the shuttle to avoid damaging the shuttle
connectors.
Figure 69. Shuttle connectors
Complete the following steps to install the shuttle.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Step 1.
Turn the two thumbscrews counterclockwise to release handles.
Step 2.
Align the shuttle with rails and pins; then, slide the shuttle into the enclosure.
Figure 70. Shuttle installation
Step 3.
Make sure the pins on the shuttle are fully seated in the slots.
Step 4.
Push the handles down and turn the thumbscrews clockwise.
Figure 71. Shuttle installation
After you install the shuttle, complete the following steps:
1. If the cable management arm is removed, install it (see “Install the cable management arm” on page 71).
2. Push all compute nodes back into the enclosure (see “Install a compute node in the enclosure” on page
99).
3. Turn on all compute nodes.
Install the cable management arm
Use this procedure to install the cable management arm.
Before you install the cable management arm:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Make sure the enclosure is pushed fully into the rack and the thumbscrews are tightened.
Complete the following steps to install the cable management arm.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 72. Cable management arm installation
Step 1.
Align the inner mounting clip with the inner tab on the slide; then, push it until it snaps into place.
Step 2.
Align two outer mounting clips with the outer tabs on the slides; then, push them until they snap
into place.
Install hardware options in the compute node
Use the following information to remove and install the options in the compute node.
Remove a compute node from the enclosure
Use this procedure to remove a compute node from the D2 Enclosure.
Attention: Unauthorized personnel should not remove or install the nodes. Only trained service personnel are authorized to perform these actions.
Before you remove a compute node:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. When you remove the compute node, note the node bay number. Reinstalling a compute node into a
different node bay from the one it was removed from can have unintended consequences. Some
configuration information and update options are established according to node bay number. If you
reinstall the compute node into a different node bay, you might have to reconfigure the compute node.
One way to track the node is via its serial number.
Note: The serial number is located on the pull-out tab of each node.
Complete the following steps to remove the compute node from an enclosure.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 73. Node removal
Step 1.
Release and rotate the front handle as shown in the illustration.
Attention: To maintain proper system cooling, do not operate the D2 Enclosure without a compute
node or node bay filler installed in each node bay.
Step 2.
Slide the node out about 12 inches (300 mm); then, grip the node with both hands and remove it
from the enclosure.
Step 3.
Install either a node bay filler or another compute node in the node bay within 1 minute.
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Remove the compute node cover
Use this procedure to remove the compute node cover.
Before you remove the compute node cover:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the node from the enclosure. See “Remove a compute node from the enclosure” on page 72
Complete the following steps to remove the compute node cover.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 74. Compute node cover removal
Step 1.
Push the cover-release latch on the top of the node cover.
Step 2.
Slide the cover toward the rear of the node until the cover has disengaged from the node; then, lift
the cover away from the node.
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Remove the air baffle
Use this procedure to remove the air baffle.
Before removing the air baffle:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
Complete the following steps to remove the air baffle.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 75. Air baffle removal
Step 1.
Slightly push the right and left release latches; then, lift the air baffle out of the node.
Attention: For proper cooling and airflow, replace the air baffle before you turn on the node.
Operating the node with the air baffle removed might damage node components.
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Remove the M.2 backplane
Use this information to remove the M.2 backplane.
Before you remove the M.2 backplane:
1. Read the following section(s) to ensure that you work safely.
•
“Installation Guidelines” on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
Complete the following steps to remove the M.2 backplane.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 76. M.2 backplane removal
Step 1.
Remove the M.2 backplane from the system board by pulling up on both ends of the backplane at
the same time.
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Remove an M.2 drive in the M.2 backplane
Use this information to remove an M.2 drive in the M.2 backplane.
Before you remove an M.2 drive in the M.2 backplane:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
5. Remove the M.2 backplane (see “Remove the M.2 backplane” on page 75).
Complete the following steps to remove an M.2 drive from the M.2 backplane.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 77. M.2 drive removal
Step 1.
Press both sides of the retainer and slide it backward to loosen the M.2 drive from the M.2
backplane.
Note: If your M.2 backplane has two M.2 drives, they will both release outward when you slide the
retainer backward.
Step 2.
Remove the M.2 drive by rotating it away from the M.2 backplane and pulling away from the
connector at an angle (approximately 30 degrees).
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Install the KVM breakout module
Use this information to install the KVM breakout module.
Before you install the KVM breakout module:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
5. Remove the air baffle (see “Remove the air baffle” on page 74).
Complete the following steps to install the KVM breakout module.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Step 1.
Connect all required cables to the KVM breakout module.
Step 2.
Carefully route cables through the drive bay and the drive backplane.
• The right KVM breakout module (for four 2.5-inch-drive model)
Figure 78. Right KVM breakout module installation
Table 36. Components on the right KVM breakout module installation
1 Long signal cable
2 Short signal cable
Attention: Make sure the USB 3.0 connector is on your right side as illustrated to ensure the
correct installation.
Figure 79. KVM breakout module
Table 37. KVM breakout module
1 KVM connector
2 USB 3.0 connector
• The left KVM breakout module (for six 2.5-inch-drive model)
Figure 80. Left KVM breakout module installation
Table 38. Components on the left KVM breakout module installation
1 Short signal cable
2 Long signal cable
Attention: Make sure the USB 3.0 connector is on your right side as illustrated to ensure the
correct installation.
Figure 81. KVM breakout module
Table 39. KVM breakout module
1 KVM connector
2 USB 3.0 connector
Step 3.
Insert the KVM breakout module into the node.
Step 4.
Fasten the screw.
Step 5.
Connect required cables to connectors as shown in the following illustrations.
Note: Manage cables in the plastic cable guides located on the side of the compute node.
• The right KVM breakout module (for four 2.5-inch-drive model)
Figure 82. Right KVM breakout module cable routing
Table 40. Components on the right KVM breakout module cable routing
1 Long signal cable
3 KVM breakout cable connector
2, 5 Internal cable management basket
4 USB connector
6 Short signal cable
• The left KVM breakout module (for six 2.5-inch-drive model)
Figure 83. Left KVM breakout module cable routing
Table 41. Components on the left KVM breakout module cable routing
1 Short signal cable
3 KVM breakout cable connector
2, 5 Internal cable management basket
4 USB connector
6 Long signal cable
Note: While the KVM breakout cable is connected, the USB key should not be wider than 19 mm.
After you install the KVM breakout module, complete the following steps.
1. Reinstall the air baffle (see “Install the air baffle” on page 97).
2. Reinstall the node cover (see “Install the compute node cover” on page 98).
3. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
4. Reconnect the power cords and any cables that you removed.
5. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
Install the drive backplane
Use this information to install the drive backplane.
Before you install the drive backplane:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
See "2.5-inch drive backplanes" on page 33 for a detailed introduction to the backplanes.
Important: Do not mix nodes with four-drive backplanes and six-drive backplanes in the same
enclosure. Mixing four-drive and six-drive backplanes may cause unbalanced cooling.
Complete the following steps to install the drive backplane.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 84. Drive backplane installation
Step 1.
Connect the ambient sensor cable.
Step 2.
Align the backplane with the backplane slots in the side walls of the node.
Step 3.
Lower the backplane into the slots in the chassis and push the two latches.
Note: Make sure the ambient sensor cable is routed through the slot on the bottom backplane.
Step 4.
Connect all required cables. For detailed cable routing, see “Internal cable routing” on page 41.
After you install the drive backplane, complete the following steps.
1. If the air baffle is removed, reinstall it (see “Install the air baffle” on page 97).
2. Reinstall the compute node cover (see “Install the compute node cover” on page 98).
3. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
4. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
Install a hot-swap drive
Use this procedure to install a drive.
Before you install a drive:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Remove the drive filler from the empty drive bay. Keep the filler panel in a safe place.
3. Touch the static-protective package that contains the drive to any unpainted metal surface on the
solution; then, remove the drive from the package and place it on a static-protective surface.
The following notes describe the type of drives that the node supports and other information that you must
consider when you install a drive. For a list of supported drives, see http://www.lenovo.com/us/en/
serverproven/.
• Locate the documentation that comes with the drive and follow those instructions in addition to the
instructions in this chapter.
• You can install up to six hot-swap SAS/SATA 2.5-inch drives for each node.
• The electromagnetic interference (EMI) integrity and cooling of the solution are protected by having all
bays and PCI and PCI Express slots covered or occupied. When you install a drive, PCI, or PCI Express
adapter, save the EMC shield and filler panel from the bay or PCI or PCI Express adapter slot cover in the
event that you later remove the device.
• For a complete list of supported optional devices for the node, see http://www.lenovo.com/us/en/
serverproven/.
Complete the following steps to install a drive:
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Note: If you have only one drive, you must install it in drive bay 0 (upper-left).
Figure 85. Drive installation
Step 1.
Install the drive in the drive bay:
a.
Make sure that the tray handle is in the open (unlocked) position.
b.
Align the drive with the guide rails in the bay.
c.
Gently push the drive into the bay until the drive stops.
d.
Rotate the tray handle to the closed (locked) position until you hear a click.
Step 2.
Check the drive status LED to verify that the drive is operating correctly. If the yellow drive
status LED of a drive is lit continuously, that drive is faulty and must be replaced. If the green
drive activity LED is flashing, the drive is being accessed.
If you are installing additional drives, do so now.
After you install all drives, complete the following steps.
1. If the node is configured for RAID operation using a RAID adapter, you might have to reconfigure your
disk arrays after you install drives. See the RAID adapter documentation for additional information about
RAID operation and complete instructions for using the RAID adapter.
Memory module installation
The following notes describe the types of DIMMs that the node supports and other information that you must
consider when you install DIMMs.
• Confirm that the node supports the DIMM that you are installing (see http://www.lenovo.com/us/en/
serverproven/).
• When you install or remove DIMMs, the node configuration information changes. When you restart the
node, the system displays a message that indicates that the memory configuration has changed. You can
use the Setup utility to view the node configuration information; see ThinkSystem D2 Enclosure, Modular
Enclosure combined with ThinkSystem SD530 Compute Node Setup Guide for more information.
• Install higher capacity (ranked) DIMMs first, following the population sequence for the memory mode
being used.
• The node supports only industry-standard double-data-rate 4 (DDR4), 2666 MT/s, PC4-21300 (single-rank
or dual-rank), unbuffered or synchronous dynamic random-access memory (SDRAM) dual inline memory
modules (DIMMs) with error correcting code (ECC).
• Do not mix RDIMMs, LRDIMMs and 3DS DIMMs in the same node.
• The maximum operating speed of the node is determined by the slowest DIMM in the node.
• If you install a pair of DIMMs in DIMM connectors 1 and 3, the size and speed of the DIMMs that you
install in DIMM connectors 1 and 3 must match each other. However, they do not have to be the same
size and speed as the DIMMs that are installed in DIMM connectors 2 and 4.
• You can use compatible DIMMs from various manufacturers in the same pair.
• The specifications of a DDR4 DIMM are on a label on the DIMM, in the following format (a decoding
sketch follows this list):
gggGBpheRxff PC4-wwwwaa-mccd-bb
where:
– gggGB is the total capacity, in gigabytes, for primary bus (ECC not counted) 4GB, 8GB, 16GB, etc. (no
space between digits and units)
– pheR is the number of package ranks of memory installed and number of logical ranks per package
rank
– p=
• 1 = 1 package rank of SDRAMs installed
• 2 = 2 package ranks of SDRAMs installed
• 3 = 3 package ranks of SDRAMs installed
• 4 = 4 package ranks of SDRAMs installed
– he = blank for monolithic DRAMs, else for modules using stacked DRAM:
• h = DRAM package type
– D = multi-load DRAM stacking (DDP)
– Q = multi-load DRAM stacking (QDP)
– S = single load DRAM stacking (3DS)
• e = blank for SDP, DDP, and QDP; for modules using 3DS stacks, the number of logical ranks per package rank:
– 2 = 2 logical ranks in each package rank
– 4 = 4 logical ranks in each package rank
– 8 = 8 logical ranks in each package rank
– R = rank(s)
– xff = Device organization (data bit width) of SDRAMs used on this assembly
• x4 = x4 organization (4 DQ lines per SDRAM)
• x8 = x8 organization
• x16 = x16 organization
– wwww is the DIMM speed, in MT/s: 2133, 2400, 2666, 2933, 3200
– aa is the SDRAM speed grade
– m is the DIMM type
– E = Unbuffered DIMM (UDIMM), x64 primary + 8 bit ECC module data bus
– L = Load Reduced DIMM (LRDIMM), x64 primary + 8 bit ECC module data bus
– R = Registered DIMM (RDIMM), x64 primary + 8 bit ECC module data bus
– U = Unbuffered DIMM (UDIMM) with no ECC (x64-bit primary data bus)
– cc is the reference design file used for this design
– d is the revision number of the reference design used
– bb is the JEDEC SPD Revision Encoding and Additions level used on this DIMM
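As an illustration of how to read such a label, the following minimal Python sketch decodes the main fields (capacity, package ranks, DRAM width, speed, and DIMM type) from a hypothetical label string. The sample label, the regular expression, and the helper names are illustrative assumptions; they are not part of Lenovo documentation or tooling.

import re

# Minimal sketch: decode the main fields of a DDR4 DIMM label of the form
# gggGBpheRxff PC4-wwwwaa-mccd-bb described above. The sample label below is a
# hypothetical example, not a specific Lenovo part.
LABEL_RE = re.compile(
    r"(?P<capacity_gb>\d+)GB\s*"       # gggGB: total capacity in GB
    r"(?P<package_ranks>\d)"           # p: package ranks of SDRAMs installed
    r"(?P<stacking>[DQS]?\d?)"         # he: blank for monolithic DRAM, else stack type and logical ranks
    r"Rx(?P<dram_width>4|8|16)\s+"     # xff: SDRAM data width (x4, x8, or x16)
    r"PC4-(?P<speed_mts>\d+)"          # wwww: DIMM speed in MT/s
    r"(?P<speed_grade>[A-Z]?)-"        # aa: SDRAM speed grade
    r"(?P<dimm_type>[ELRU])"           # m: E/L/R/U DIMM type as listed above
)

DIMM_TYPES = {"E": "UDIMM with ECC", "L": "LRDIMM", "R": "RDIMM", "U": "UDIMM without ECC"}

def decode_dimm_label(label: str) -> dict:
    """Return the decoded fields of a DDR4 DIMM label, or raise ValueError."""
    match = LABEL_RE.match(label)
    if match is None:
        raise ValueError(f"unrecognized DIMM label: {label!r}")
    fields = match.groupdict()
    fields["dimm_type"] = DIMM_TYPES[fields["dimm_type"]]
    return fields

# Hypothetical example: a 16 GB dual-rank x8 RDIMM rated at 2666 MT/s.
print(decode_dimm_label("16GB 2Rx8 PC4-2666V-RB2-11"))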
The following illustration shows the location of the DIMM connectors on the system board.
Figure 86. The location of the DIMM connectors on the system board
Installation order
Memory modules must be installed in a specific order based on the memory configuration that you
implement on your node.
The following memory configurations are available:
• “Memory mirroring population sequence” on page 86
• “Memory rank sparing population sequence” on page 86
• “Independent memory mode population sequence” on page 85
For information about memory modes, see "Memory configuration" on page 118.
Independent memory mode population sequence
Table 42. DIMM installation sequence (Independent mode/normal mode)
Number of processors            Installation sequence (connectors)
Processor 1 installed           6, 3, 7, 2, 8, 1, 5, 4
Processor 1 and 2 installed     6, 14, 3, 11, 7, 15, 2, 10, 8, 16, 1, 9, 5, 13, 4, 12
Memory mirroring population sequence
Table 43. DIMM installation sequence (mirror mode/lockstep mode)
Number of processors            Installation sequence (connectors)
Processor 1 installed           (6, 7), (2, 3), (8, 1)
Processor 1 and 2 installed     (6, 7, 14, 15), (2, 3), (10, 11), (1, 8), (9, 16)
If you are installing 3, 6, 9 or 12 identical DIMMs for the mirroring mode, comply with the following installation
sequence to achieve the best performance.
Table 44. DIMM installation sequence (mirror mode/lockstep mode for 3, 6, 9 and 12 identical DIMMs)
Number of processors            Installation sequence (connectors)
Processor 1 installed           (6, 7, 8), (1, 2, 3)
Processor 1 and 2 installed     (6, 7, 8), (14, 15, 16), (1, 2, 3), (9, 10, 11)
Memory rank sparing population sequence
Table 45. DIMM installation sequence (sparing mode)
Note: Single-rank RDIMMs are not supported by sparing. If you install a single-rank RDIMM, the node
switches to independent mode automatically.
Number of processors            Installation sequence (connectors)
Processor 1 installed           6, 3, 7, 2, 8, 1, 5, 4
Processor 1 and 2 installed     6, 14, 3, 11, 7, 15, 2, 10, 8, 16, 1, 9, 5, 13, 4, 12
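Where it helps to script the population order (for example, when preparing a build sheet), the tables above can be encoded in a small lookup, as in the following sketch. The dictionary, function, and mode names are illustrative assumptions; the sequences themselves are copied from Tables 42 through 45.

# Sketch: the DIMM population sequences from Tables 42-45 as a simple lookup.
# Keys are (memory mode, number of processors); values are connector numbers
# in population order (tuples group mirrored sets).
DIMM_POPULATION_ORDER = {
    ("independent", 1): [6, 3, 7, 2, 8, 1, 5, 4],
    ("independent", 2): [6, 14, 3, 11, 7, 15, 2, 10, 8, 16, 1, 9, 5, 13, 4, 12],
    ("mirroring", 1): [(6, 7), (2, 3), (8, 1)],
    ("mirroring", 2): [(6, 7, 14, 15), (2, 3), (10, 11), (1, 8), (9, 16)],
    ("sparing", 1): [6, 3, 7, 2, 8, 1, 5, 4],
    ("sparing", 2): [6, 14, 3, 11, 7, 15, 2, 10, 8, 16, 1, 9, 5, 13, 4, 12],
}

def connector_order(mode: str, processors: int):
    """Return the DIMM connector population order for the given mode and processor count."""
    return DIMM_POPULATION_ORDER[(mode, processors)]

# Example: independent mode with two processors installed.
print(connector_order("independent", 2))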
Install a DIMM
Use this information to install a DIMM.
Before you install a DIMM:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
5. Remove the air baffle (see “Remove the air baffle” on page 74).
Attention: Memory modules are sensitive to static discharge and require special handling. In addition to the
standard guidelines for handling static-sensitive devices (see "Handling static-sensitive devices" on page 56):
• Always wear an electrostatic-discharge strap when removing or installing memory modules. Electrostatic-discharge gloves can also be used.
• Never hold two or more memory modules together so that they touch. Do not stack memory modules
directly on top of each other during storage.
• Never touch the gold memory module connector contacts or allow these contacts to touch the outside of
the memory-module connector housing.
• Handle memory modules with care: never bend, twist, or drop a memory module.
The following illustration shows the location of the DIMM connectors on the system board.
Figure 87. The location of the DIMM connectors on the system board
Complete the following steps to install a DIMM.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Important: Before installing a memory module, make sure that you understand the required installation
order, depending on whether you are implementing memory mirroring, memory rank sparing, or independent
memory mode. See “Installation order” on page 85 for the required installation order.
Step 1.
Open the retaining clip on each end of the DIMM connector.
Attention:
• DIMMs are static-sensitive devices. The package must be grounded before it is opened.
• To avoid breaking the retaining clips or damaging the DIMM connectors, open and close the
clips gently.
Figure 88. DIMM installation
Step 2.
Touch the static-protective package that contains the DIMM to any unpainted metal surface on the
outside of the node. Then, remove the DIMM from the package.
Step 3.
Turn the DIMM so that the alignment slot aligns correctly with the alignment tab.
Step 4.
Insert the DIMM into the connector by aligning the edges of the DIMM with the slots at the ends of
the DIMM connector.
Step 5.
Firmly press the DIMM straight down into the connector by applying pressure on both ends of the
DIMM simultaneously. The retaining clips snap into the locked position when the DIMM is firmly
seated in the connector.
Note: If there is a gap between the DIMM and the retaining clips, the DIMM has not been correctly
inserted; open the retaining clips, remove the DIMM, and then reinsert it.
Step 6.
Reconnect any cable that you removed.
After you install a DIMM, complete the following steps:
1. Reinstall the air baffle (see “Install the air baffle” on page 97).
2. Reinstall the compute node cover (see “Install the compute node cover” on page 98).
3. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
4. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
Install a RAID adapter into the compute node
Use this information to install a RAID adapter into the compute node.
Before you install a RAID adapter into the compute node:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
5. Touch the static-protective package that contains the RAID adapter to any unpainted metal surface on
the node; then, remove the adapter from the package.
6. Place the RAID adapter, component side up, on a flat, static-protective surface and set any jumpers or
switches as described by the adapter manufacturer.
Complete the following steps to install a RAID adapter.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 89. RAID adapter installation
Step 1.
Connect the PCIe cable and SAS/SATA cables (See “Internal cable routing” on page 41).
Step 2.
Insert the end of the adapter into the slot.
Step 3.
Align the adapter with the guide pin; then, lower and rotate down the adapter to insert it.
After you install a RAID adapter into the compute node, complete the following steps.
1. If the air baffle is removed, reinstall it (see “Install the air baffle” on page 97).
2. Reinstall the compute node cover (see “Install the compute node cover” on page 98).
3. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
4. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
Install an M.2 drive in the M.2 backplane
Use this information to install an M.2 drive in the M.2 backplane.
Before you install an M.2 drive in the M.2 backplane:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
5. Remove the M.2 backplane (see “Remove the M.2 backplane” on page 75).
Complete the following steps to install an M.2 drive in the M.2 backplane.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Step 1.
Locate the connector on each side of the M.2 backplane.
Notes:
• Some M.2 backplanes support two identical M.2 drives. When two drives are installed, align and
support both drives when sliding the retainer forward to secure the drives.
• Install the M.2 drive in slot 0 first.
Figure 90. M.2 drive slot
Table 46. M.2 drive slot
1 Slot 0
2 Slot 1
Step 2.
Insert the M.2 drive at an angle (approximately 30 degrees) into the connector and rotate it until the
notch catches on the lip of the retainer; then, slide the retainer forward (toward the connector) to
secure the M.2 drive in the M.2 backplane.
Figure 91. M.2 drive installation
Attention: When sliding the retainer forward, make sure the two nubs on the retainer enter the
small holes on the M.2 backplane. Once they enter the holes, you will hear a soft “click” sound.
Figure 92. M.2 drive installation
After you install an M.2 drive in the M.2 backplane, complete the following steps:
1. Reinstall the M.2 backplane (see “Install the M.2 backplane” on page 92).
2. Reinstall the compute node cover (see “Install the compute node cover” on page 98).
3. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
4. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
How to adjust the position of the retainer on the M.2 backplane
Use this information to adjust the position of the retainer on the M.2 backplane.
Before you adjust the position of the retainer on the M.2 backplane, complete the following steps:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
To adjust the position of the retainer on the M.2 backplane, complete the following steps.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Step 1.
Locate the correct keyhole that the retainer should be installed into to accommodate the particular
size of the M.2 drive you wish to install.
Step 2.
Press both sides of the retainer and move it forward until it is in the large opening of the keyhole;
then, remove it from the backplane.
Step 3.
Insert the retainer into the correct keyhole and slide it backwards until the nubs are in the holes.
Install the M.2 backplane
Use this information to install the M.2 backplane.
Before you install the M.2 backplane:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
Complete the following steps to install the M.2 backplane.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 93. M.2 backplane installation
Step 1.
Align the openings located at the bottom of the blue plastic supports at each end of the M.2
backplane with the guide pins on the system board; then, insert the backplane in the system board
connector. Press down on the M.2 backplane to fully seat it.
After you install the M.2 backplane, complete the following steps:
1. If the air baffle is removed, reinstall it (see “Install the air baffle” on page 97).
2. Reinstall the compute node cover (see “Install the compute node cover” on page 98).
3. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
4. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
Install a processor-heat-sink module
The processor and heat sink are installed together as part of a processor-heat-sink-module (PHM)
assembly. PHM installation requires a Torx T30 driver.
Note: If you are installing multiple options that relate to the compute node system board, install the PHM
first.
• "Read the Installation Guidelines" on page 54
• "Power off the server for this task" on page 112
• "ATTENTION: Static Sensitive Device. Ground package before opening" on page 56
Attention:
• Each processor socket must always contain a cover or a PHM. When removing or installing a PHM,
protect empty processor sockets with a cover.
• Do not touch the processor socket or processor contacts. Processor-socket contacts are very fragile and
easily damaged. Contaminants on the processor contacts, such as oil from your skin, can cause
connection failures.
• Remove and install only one PHM at a time. If the system board supports multiple processors, install the
PHMs starting with the first processor socket.
• Do not allow the thermal grease on the processor or heat sink to come in contact with anything. Contact
with any surface can compromise the thermal grease, rendering it ineffective. Thermal grease can damage
components, such as electrical connectors in the processor socket. Do not remove the grease cover from
a heat sink until you are instructed to do so.
Notes:
• PHMs are keyed for the socket where they can be installed and for their orientation in the socket.
• See http://www.lenovo.com/us/en/serverproven/ for a list of processors supported for your server. All
processors on the system board must have the same speed, number of cores, and frequency.
• Before you install a new PHM or replacement processor, update your system firmware to the latest level.
See “Update the firmware” on page 114.
• Installing an additional PHM can change the memory requirements for your system. See “Memory module
installation” on page 83 for a list of processor-to-memory relationships.
• Optional devices available for your system might have specific processor requirements. See the
documentation that comes with the optional device for information.
• There are two heat sinks for SD530: 85x108x24.5mm and 108x108x24.5mm heat sinks.
Figure 94. Processor locations
– 85x108x24.5mm heat sink: can be installed on processor 1 or 2.
Note: The 85x108x24.5mm heat sink supports all Processor Option Kits.
– 108x108x24.5mm heat sink: can be installed on processor 1 only.
Important: In a two-processor configuration, the following processors can be installed with the
108x108x24.5mm heat sink on processor 1 only.
– Intel Xeon Platinum 8156 105W 3.6GHz Processor Option Kit
– Intel Xeon Platinum 8158 150W-165W Processor Option Kit
– Intel Xeon Platinum 8160 150W 2.1GHz Processor Option Kit
– Intel Xeon Platinum 8164 145W 2.0GHz Processor Option Kit
– Intel Xeon Platinum 8170 165W 2.1GHz Processor Option Kit
– Intel Xeon Platinum 8176 165W 2.1GHz Processor Option Kit
– Intel Xeon Gold 5122 105W 3.6GHz Processor Option Kit
– Intel Xeon Gold 6134 130W 3.2GHz Processor Option Kit
– Intel Xeon Gold 6126 125W 2.6GHz Processor Option Kit
– Intel Xeon Gold 6136 150W 3.0GHz Processor Option Kit
– Intel Xeon Gold 5120T 105W Processor Option Kit
– Intel Xeon Gold 6130T 125W 2.1GHz Processor Option Kit
– Intel Xeon Gold 6142 150W 2.6GHz Processor Option Kit
– Intel Xeon Gold 6140 140W 2.3GHz Processor Option Kit
– Intel Xeon Gold 6150 165W 2.7GHz Processor Option Kit
– Intel Xeon Gold 6138 125W 2.0GHz Processor Option Kit
– Intel Xeon Gold 6138T 125W 2.0GHz Processor Option Kit
– Intel Xeon Gold 6148 150W 2.4GHz Processor Option Kit
– Intel Xeon Gold 6152 140W 2.1GHz Processor Option Kit
– Intel Xeon Silver 4109T 70W 2.0GHz Processor Option Kit
Before installing a PHM:
Note: The PHM for your system might be different than the PHM shown in the illustrations.
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
5. Remove the air baffle (see “Remove the air baffle” on page 74).
Figure 95. Processor locations
Step 1.
Remove the processor socket cover, if one is installed on the processor socket, by placing your
fingers in the half-circles at each end of the cover and lifting it from the system board.
Step 2.
Install the processor-heat-sink module on the system board.
Figure 96. Installing a PHM
a.
Align the triangular marks and guide pins on the processor socket with the PHM; then, insert
the PHM into the processor socket.
Attention: To prevent damage to components, make sure that you follow the indicated
tightening sequence.
b.
Fully tighten the Torx T30 captive fasteners in the installation sequence shown on the heat-sink
label. Tighten the screws until they stop; then, visually inspect to make sure that there is no
gap between the screw shoulder beneath the heat sink and the processor socket. (For
reference, the torque required for the nuts to fully tighten is 1.4 to 1.6 newton-meters, 12 to 14
inch-pounds.)
After installing the PHM option:
1. If there are memory modules to install, install them. See “Install a DIMM” on page 86.
2. Reinstall the air baffle (see “Install the air baffle” on page 97).
3. Reinstall the compute node cover (see “Install the compute node cover” on page 98).
4. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
5. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
Install the air baffle
Use this procedure to install the air baffle.
Before you install the air baffle:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
4. Remove the compute node cover (see “Remove the compute node cover” on page 73).
Complete the following steps to install the air baffle.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 97. Air baffle installation
Step 1.
Align the air baffle tabs with the baffle slots on both sides of the chassis; then, lower the air baffle
into the node. Press the air baffle down until it is securely seated.
Attention:
• For proper cooling and airflow, reinstall the air baffle before you turn on the node. Operating the
node with the air baffle removed might damage node components.
• Pay attention to the cables routed along the sidewalls of the node as they may catch under the
air baffle.
After you install the air baffle, complete the following steps.
1. Reinstall the compute node cover (see “Install the compute node cover” on page 98).
2. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
3. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
Install the compute node cover
Use this procedure to install the compute node cover.
Before you install the compute node cover:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. Turn off the corresponding compute node that you are going to perform the task on.
3. Make sure that all components are installed and seated correctly and that you have not left loose tools
or parts inside the node.
4. Make sure that all internal cables are correctly routed. See “Internal cable routing” on page 41.
5. Remove the compute node (see “Remove a compute node from the enclosure” on page 72).
Complete the following steps to install the compute node cover.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 98. Compute node cover installation
Note: Before sliding the cover forward, make sure that all the tabs on the front, rear, and side of the cover
engage the side walls correctly. If the pins do not engage correctly, it will be very difficult to
remove the cover next time.
Step 1.
Align the cover pins with the notches in the side walls of the node; then, position the cover on top
of the node.
Note: Aligning the front of the cover with the lines on the node, as shown in the illustration, helps
you install the cover correctly.
Step 2.
Slide the cover forward until the cover latches in place.
After you install the node cover, complete the following steps.
1. Reinstall the compute node (see “Install a compute node in the enclosure” on page 99).
2. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on.
Install a compute node in the enclosure
Use this procedure to install a compute node in the D2 Enclosure.
Before you install the compute node in an enclosure:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
Attention: Be careful when you are removing or installing the node to avoid damaging the node connectors.
Figure 99. Node connectors
Complete the following steps to install the compute node in an enclosure.
Watch the procedure. A video of the installation process is available:
• Youtube: https://www.youtube.com/playlist?list=PLYV5R7hVcs-DOlbsCdADcoKQdMB2Uuk-T
• Youku: http://list.youku.com/albumlist/show/id_50483438
Figure 100. Node installation
Step 1.
Select the node bay.
Note: If you are reinstalling a compute node that you removed, you must install it in the same node
bay from which you removed it. Some compute node configuration information and update options
are established according to node bay number. Reinstalling a compute node into a different node
bay can have unintended consequences. If you reinstall the compute node into a different node
bay, you might have to reconfigure the compute node.
Step 2.
Make sure that the front handle on the compute node is in the fully open position.
Step 3.
Slide the compute node into the node bay until it stops.
Step 4.
Rotate the compute node handle to the fully closed position until the handle latch clicks.
Note: The time required for a compute node to initialize varies by system configuration. The power
LED flashes rapidly; the power button on the compute node will not respond until the power LED
flashes slowly, indicating that the initialization process is complete.
After you install a compute node, complete the following steps:
1. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on; then, power on the node.
2. Make sure that the power LED on the compute node control panel is lit continuously, indicating that the
compute node is receiving power and is turned on.
3. If you have other compute nodes to install, do so now.
4. You can place identifying information on the pull-out label tab that is accessible from the front of the
node.
If this is the initial installation of the node in the enclosure, you must configure the node through the
Lenovo XClarity Provisioning Manager and install the node operating system. See ThinkSystem D2
Enclosure, Modular Enclosure combined with ThinkSystem SD530 Compute Node Setup Guide for
details.
If you have changed the configuration of the compute node or if you are installing a different compute
node from the one that you removed, you must configure the compute node through the Setup utility,
and you might have to install the compute node operating system. See ThinkSystem D2 Enclosure,
Modular Enclosure combined with ThinkSystem SD530 Compute Node Setup Guide for more details.
Install hardware options in the PCIe expansion node
Use the following information to remove and install the PCIe expansion node options.
Remove the compute-expansion node assembly from the enclosure
Use this procedure to remove the compute-expansion node assembly from the enclosure.
Attention: Unauthorized personnel should not remove or install the nodes. Only trained service personnel
are authorized to perform such actions.
Before you remove the PCIe expansion node assembly from the enclosure:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. When you remove the compute-expansion node assembly, note the node bay numbers and make sure
to reinstall it back to the original bays. Installing it into different node bays from the original can lead to
unexpected consequences, as some configuration information and update options are established
according to node bay number. If you reinstall the compute-expansion node assembly into different
node bays, you might have to reconfigure the reinstalled compute node. One way to track the node
assembly is via the serial number of the compute node.
Note: The serial number is located on the pull-out tab of each compute node.
Complete the following steps to remove the PCIe expansion node assembly from the enclosure.
Step 1.
Release and rotate the two front handles as shown in the illustration.
Figure 101. Compute-expansion node assembly removal
Attention: To maintain proper system cooling, do not operate the enclosure without a compute
node or node bay filler installed in each node bay.
Step 2.
Slide the node assembly out about 12 inches (300 mm); then, grip the node assembly with both
hands and remove it from the enclosure.
Step 3.
If the enclosure is powered on with nodes in the other two bays, it is critical for proper cooling that
you install two nodes or node fillers in the empty bays within 1 minute.
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Remove the rear cable cover
Use this information to remove the rear cable cover.
Before you remove the rear cable cover:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. If the compute-expansion node assembly is installed in the enclosure, remove it (see “Remove the
compute-expansion node assembly from the enclosure” on page 101).
Complete the following steps to remove the rear cable cover.
Step 1.
Lift on the blue touch point of the rear cable cover.
Figure 102. Rear cable cover removal
Step 2.
Remove the rear cable cover.
If you are instructed to return the component or optional device, follow all packaging instructions, and use
any packaging materials for shipping that are supplied to you.
Install a PCIe adapter into the riser cage
Use this information to install a PCIe adapter into the riser cage.
Before you install a PCIe adapter into the riser cage:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. If the compute-expansion node assembly is installed in the enclosure, remove it (see “Remove the
compute-expansion node assembly from the enclosure” on page 101).
3. Remove the rear cable cover (see “Remove the rear cable cover” on page 101).
4. Remove the riser miscellaneous cable from the front riser cage, and loosen the two captive screws to
remove the riser cage from the node.
Figure 103. Disconnecting the riser miscellaneous cable from the riser cage and removing the riser cage from the
expansion node
Complete the following steps to install a PCIe adapter into the riser cage.
Step 1.
If no adapter has been installed in the riser cage, remove the screw from the riser cage.
Figure 104. Removing the screw from the riser cage
Step 2.
Slide the adapter into the slot on the riser cage; then, fasten the screw to secure the adapter.
Figure 105. Installing an adapter into the riser cage
Step 3.
Connect the auxiliary power cable that comes with the adapter as illustrated.
Figure 106. Connecting the auxiliary power cable to the adapter connectors
Attention: The PCIe adapter might come with more than one auxiliary power cable, and it is crucial to use
the cable specifically meant for SD530. Carefully examine the end of the cable for the PCIe expansion node,
and make sure it is exactly the same as illustrated.
Figure 107. The connector of the auxiliary cable for SD530
Notes:
1. The auxiliary power cable that comes with your adapter might look different from that in the
illustration.
2. The location of connectors might be different from that in the illustration.
After you install the PCIe adapter into the riser cage, complete the following steps:
1. Install the PCIe riser assembly into the PCIe expansion node (see “Install a PCIe riser assembly into the
PCIe expansion node assembly ” on page 105).
2. Install the rear cable cover (see “Install the rear cable cover” on page 108).
3. Install the PCIe expansion node assembly into the enclosure (see “Install the compute-expansion node
assembly into the enclosure” on page 110).
4. Power on the compute node.
Install a PCIe riser assembly into the PCIe expansion node assembly
Use this information to install a PCIe riser assembly into the compute-expansion node assembly.
Before you install a PCIe riser assembly into the compute-expansion node assembly:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. If no adapter is installed in the riser cage, disconnect the front riser miscellaneous cable first if you are
removing the front riser cage, and loosen the two captive screws to remove the riser cage from the
expansion node; then, install an adapter into the riser cage (see “Install a PCIe adapter into the riser
cage” on page 102) .
Figure 108. Riser cage removal
3. If you are installing a new adapter in addition to an existing one, remove the airflow filler from the gap by
the front riser slot, and place it into the gap on the side of the expansion node as illustrated.
Figure 109. Airflow filler removal
Complete the following steps to install a PCIe riser assembly into the PCIe expansion node assembly.
Notes: For proper system cooling:
• When only one adapter is to be installed, make sure the adapter is installed in the rear riser slot, and place
the airflow filler into the gap by the front riser slot.
Install the front PCIe riser assembly
Step 1. Pass the auxiliary power cable through the narrow window as illustrated; then, align the riser
assembly to the guide pins on the expansion node, and lower it until it stops.
Figure 110. Installing the front riser assembly into the expansion node
Step 2.
Tighten the two captive screws to secure the riser assembly to the expansion node.
Step 3.
Connect PCIe#3-A cable to the riser connector labeled “A.”
Figure 111. Connecting PCIe#3-A, PCIe#4-B and the riser miscellaneous cable to the front riser assembly
Step 4.
Connect PCIe#4-B cable to the riser connector labeled “B.”
Step 5.
Connect the riser miscellaneous cable to the riser assembly.
Step 6.
Connect the auxiliary power cable to the expansion node.
Figure 112. Connecting the auxiliary power cable to the expansion node
Install the rear cable cover
Use this information to install the rear cable cover.
Before you install the rear cable cover:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
2. If the PCIe#2-B cable is connected to the rear riser assembly, make sure it is routed under the PCIe#1-A
cable through the gap between the two front riser power connectors.
3. If the PCIe#1-A cable is connected to the rear riser assembly, make sure it is routed above the PCIe#2-B
cable through the gap between the two front riser power connectors.
4. When both riser assemblies are installed, make sure the front riser auxiliary power cable is looped back
into the gap between the two front riser power connectors, and routed above the PCIe#2-B cable.
Figure 113. Routing PCIe#1-A, PCIe#2-B and the front riser auxiliary power cable
Complete the following steps to install the rear cable cover.
Step 1.
Align the side of the rear cable cover to the slot on the end of the expansion node.
Figure 114. Rear cable cover installation
Step 2.
Press down at the touch point until the rear cable cover snaps into place.
After you install the rear cable cover, complete the following steps:
1. Install the PCIe expansion node assembly into the enclosure (see “Install the compute-expansion node
assembly into the enclosure” on page 110).
2. Power on the compute node.
Install the compute-expansion node assembly into the enclosure
Use this procedure to install a compute-expansion node assembly into the enclosure.
Before you install the compute-expansion node assembly into the enclosure:
1. Read the following section(s) to ensure that you work safely.
• "Installation Guidelines" on page 54
Attention: When removing or installing the node assembly, be careful not to damage the node connectors.
Figure 115. Connectors on the compute-expansion node assembly
Complete the following steps to install the PCIe expansion node assembly into the enclosure.
Step 1.
Select two empty bays vertically adjacent to each other for installation.
Figure 116. PCIe expansion node installation into the enclosure
Notes:
1. When reinstalling a compute-expansion node assembly removed previously, be sure to install
it into the exact same node bays. Some compute node configuration information and update
options are established according to node bay number, and reinstalling a compute node into a
different node bay can lead to unexpected consequences. If you reinstall the compute-expansion
node assembly into different node bays, you might have to reconfigure the installed
compute node.
2. When a compute-expansion node assembly is installed in an enclosure, the other two node
bays in the same enclosure must be installed with either one compute-expansion node
assembly or two node fillers.
Step 2.
Make sure that the front handles of the compute node are in the fully open position.
Step 3.
Slide the compute-expansion node assembly into the node bays until it stops.
Step 4.
Rotate the compute node handles to the fully closed position with two hands until both the handle
latches click into place.
Note: The time required for a node to initialize varies by system configuration. The power LED
flashes rapidly; the power button on the compute node will not respond until the power LED flashes
slowly, indicating that the initialization process is complete.
After you install the compute-expansion node assembly to the enclosure, complete the following steps:
1. Check the power LED to make sure it transitions between fast blink and slow blink to indicate the node
is ready to be powered on; then, power on the node.
2. Make sure that the power LED on the compute node control panel is lit continuously, indicating that the
compute node is receiving power and is turned on.
3. If you have other compute nodes to install, do so now.
4. You can place identifying information on the pull-out label tab that is accessible from the front of the
node.
If this is the initial installation of the node in the enclosure, you must configure the node through the
Lenovo XClarity Provisioning Manager and install the node operating system. See ThinkSystem D2
Enclosure, Modular Enclosure combined with ThinkSystem SD530 Compute Node Setup Guide for
details.
If you have changed the configuration of the compute node or if you are installing a different compute
node from the one that you removed, you must configure the compute node through the Setup utility,
and you might have to install the compute node operating system. See ThinkSystem D2 Enclosure,
Modular Enclosure combined with ThinkSystem SD530 Compute Node Setup Guide for more details.
Install the enclosure in a rack
To install the enclosure in a rack, follow the instructions that are provided in the Rail Installation Kit for the
rails on which the enclosure will be installed.
Cable the solution
Attach all external cables to the solution. Typically, you will need to connect the solution to a power source,
to the data network, and to storage. In addition, you will need to connect the solution to the management
network.
Connect to power
Connect the solution to power.
Connect to the network
Connect the solution to the network.
Connect to storage
Connect the solution to any storage devices.
Power on the compute node
After the compute node performs a short self-test (power status LED flashes rapidly) when connected to
input power, it enters the standby state (power status LED flashes once per second).
A compute node can be turned on (power LED on) in any of the following ways:
• You can press the power button.
• The compute node can restart automatically after a power interruption.
• The compute node can respond to remote power-on requests sent to the Lenovo XClarity Controller.
For information about powering off the compute node, see “Power off the compute node” on page 112.
Validate solution setup
After powering up the solution, make sure that the LEDs are lit and that they are green.
Power off the compute node
The compute node remains in the standby state when connected to a power source, allowing the Lenovo
XClarity Controller to respond to remote power-on requests. To completely power off the compute node
(power status LED off), you must disconnect all power cables.
To power off the compute node that is in a standby state (power status LED flashes once per second):
Note: The Lenovo XClarity Controller can place the compute node in a standby state as an automatic
response to a critical system failure.
• Start an orderly shutdown using the operating system (if supported by your operating system).
• Press the power button to start an orderly shutdown (if supported by your operating system).
• Press and hold the power button for more than 4 seconds to force a shutdown.
In the standby state, the compute node can respond to remote power-on requests sent to the Lenovo
XClarity Controller. For information about powering on the compute node, see “Power on the compute node”
on page 112.
Chapter 4. System configuration
Complete these procedures to configure your system.
Set the network connection for the Lenovo XClarity Controller
Before you can access the Lenovo XClarity Controller over your network, you need to specify how Lenovo
XClarity Controller will connect to the network. Depending on how the network connection is implemented,
you might need to specify a static IP address as well.
The procedure for setting the network connection depends on whether or not you have a video connection
to the server.
• If a monitor is attached to the server, you can use the Lenovo XClarity Provisioning Manager to set the
network connection.
• If no monitor is attached to the server, you can set the network connection through the Lenovo XClarity
Controller interface. Connect an Ethernet cable from your laptop to the Lenovo XClarity Controller connector
on the server.
Note: Make sure that you modify the IP settings on the laptop so that it is on the same network as the
server default settings.
The default IPv4 address and the IPv6 Link Local Address (LLA) are provided on the Lenovo XClarity
Controller Network Access label that is affixed to the Pull Out Information Tab.
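If you connect a laptop directly to the Lenovo XClarity Controller connector, a quick way to confirm that the laptop IP settings are correct is to check that the XCC web interface port accepts a connection. The following Python sketch is only illustrative; the address shown is a placeholder for the IPv4 address printed on the network access label.

import socket

# Sketch: verify that the XCC web interface (HTTPS, port 443) is reachable from
# the laptop. Replace XCC_ADDRESS with the IPv4 address from the network access
# label; the value below is only a placeholder.
XCC_ADDRESS = "192.168.70.125"

def xcc_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the XCC web interface succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("XCC reachable:", xcc_reachable(XCC_ADDRESS))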
Important: The Lenovo XClarity Controller is set initially with a user name of USERID and password of
PASSW0RD (with a zero, not the letter O). This default user setting has Supervisor access. Change this user
name and password during your initial configuration for enhanced security.
You can use the Lenovo XClarity Administrator Mobile app to connect to the Lenovo XClarity Controller
interface and configure the network address. For additional information about the Mobile app, see the
following site:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/lxca_usemobileapp.html
Complete the following steps to connect the Lenovo XClarity Controller to the network using the Lenovo
XClarity Provisioning Manager.
Step 1. Start the server.
Step 2. When you see <F1> Setup, press F1.
Step 3. Specify how the Lenovo XClarity Controller will connect to the network.
• If you choose a static IP connection, make sure that you specify an IPv4 or IPv6 address that is available on the network.
• If you choose a DHCP connection, make sure that the MAC address for the server has been configured in the DHCP server.
Step 4. Click OK to continue starting the server.
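If you prefer not to enter F1 Setup, a static address can usually also be set over the network once the XCC is reachable at its default address. The sketch below assumes that IPMI over LAN is enabled and that the XCC uses LAN channel 1; treat the channel number, addresses, and credentials as placeholders.

```
# Set a static address on the management controller (LAN channel 1 assumed).
XCC="-I lanplus -H <XCC-default-IP-address> -U USERID -P <password>"

ipmitool $XCC lan print 1                              # show the current network settings
ipmitool $XCC lan set 1 ipsrc static                   # switch from DHCP to a static address
ipmitool $XCC lan set 1 ipaddr <new-static-IP-address>
ipmitool $XCC lan set 1 netmask <subnet-mask>
ipmitool $XCC lan set 1 defgw ipaddr <gateway-IP-address>
```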
Enable System Management Module network connection via Lenovo
XClarity Controller
If no KVM breakout module is available for accessing the Lenovo XClarity Controller logs, enable the System Management Module network connection through Lenovo XClarity Controller (XCC) first. For more information, see “Web interface access” in the System Management Module User's Guide.
Set front USB port for Lenovo XClarity Controller connection
Before you can access the Lenovo XClarity Controller through the front USB port, you need to configure the
USB port for Lenovo XClarity Controller connection.
Your server has a front panel USB port that you can use as a Lenovo XClarity Controller management
connection. See “Front view” on page 21 for the location of this connector.
You can switch the front panel USB port between normal and Lenovo XClarity Controller management
operation by performing one of the following steps.
• Hold the blue ID button on the front panel for at least 3 seconds until its LED flashes slowly (once every
couple of seconds). See “Front view” on page 21 for the location of the ID button.
• From the Lenovo XClarity Controller management controller CLI, run the usbfp command. For information
about using the Lenovo XClarity Controller CLI, see http://sysmgt.lenovofiles.com/help/topic/
com.lenovo.thinksystem.xcc.doc/dw1lm_c_ch7_commandlineinterface.html.
• From the Lenovo XClarity Controller management controller web interface, click BMC
Configuration > Network > Front Panel USB Port Management. For information about Lenovo XClarity
Controller web interface functions, see http://sysmgt.lenovofiles.com/help/topic/
com.lenovo.thinksystem.xcc.doc/dw1lm_r_immactiondescriptions.html.
You can also check the current setting of the front panel USB port using the Lenovo XClarity Controller
management controller CLI (usbfp command) or the Lenovo XClarity Controller management controller web
interface (BMC Configuration > Network > Front Panel USB Port Management). See http://
sysmgt.lenovofiles.com/help/topic/com.lenovo.thinksystem.xcc.doc/dw1lm_c_ch7_commandlineinterface.html or
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.thinksystem.xcc.doc/dw1lm_r_immactiondescriptions.html.
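For reference, the CLI method is typically run over an SSH session to the XCC. The session below is only an illustration: the usbfp command name comes from this guide, but the prompt and any arguments should be checked against the XCC CLI reference linked above.

```
# Illustrative SSH session only; verify the exact usbfp syntax in the XCC CLI reference.
$ ssh USERID@<XCC-IP-address>
system> usbfp
```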
Update the firmware
Several options are available to update the firmware for the server.
You can use the tools listed here to install the most current firmware for your server and the devices that are installed in the server.
Note: Lenovo typically releases firmware in bundles called UpdateXpress System Packs (UXSPs). To ensure
that all of the firmware updates are compatible, you should update all firmware at the same time. If you are
updating firmware for both the Lenovo XClarity Controller and UEFI, update the firmware for Lenovo XClarity
Controller first.
Best practices related to updating firmware are available at the following location:
http://lenovopress.com/LP0656
Important terminology
• In-band update. The installation or update is performed using a tool or application within an operating
system that is executing on the server’s core CPU.
• Out-of-band update. The installation or update is performed by the Lenovo XClarity Controller collecting
the update and then directing the update to the target subsystem or device. Out-of-band updates have no
dependency on an operating system executing on the core CPU. However, most out-of-band operations
do require the server to be in the S0 (Working) power state.
• On-Target update. The installation or update is initiated from an operating system that is executing on the target server itself.
• Off-Target update. The installation or update is initiated from a computing device interacting directly with
the server’s Lenovo XClarity Controller.
• UpdateXpress System Packs (UXSPs). UXSPs are bundled updates designed and tested to provide the
interdependent level of functionality, performance, and compatibility. UXSPs are server machine-type
specific and are built (with firmware and device driver updates) to support specific Windows Server, Red
Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) operating system distributions.
Machine-type-specific firmware-only UXSPs are also available.
See the following table to determine the best Lenovo tool to use for installing and setting up the firmware:
Note: The server UEFI settings for option ROM must be set to Auto or UEFI to update firmware using
Lenovo XClarity Administrator or Lenovo XClarity Essentials. For more information, see the following Tech
Tip:
https://datacentersupport.lenovo.com/us/en/solutions/ht506118
| Tool | In-band update | Out-of-band update | On-target update | Off-target update | Graphical user interface | Command-line interface | Supports UXSPs |
|------|----------------|--------------------|------------------|-------------------|--------------------------|-------------------------|----------------|
| Lenovo XClarity Provisioning Manager (limited to core system firmware only) | √ | | √ | | √ | | |
| Lenovo XClarity Controller (supports core system firmware and most advanced I/O option firmware updates) | | √ | | √ | √ | √ | |
| Lenovo XClarity Essentials OneCLI (supports all core system firmware, I/O firmware, and installed operating system driver updates) | √ | √ | √ | √ | | √ | √ |
| Lenovo XClarity Essentials UpdateXpress (supports all core system firmware, I/O firmware, and installed operating system driver updates) | √ | √ | √ | √ | √ | | √ |
| Lenovo XClarity Essentials Bootable Media Creator (supports core system firmware and I/O firmware updates; you can update the Microsoft Windows operating system, but device drivers are not included on the bootable image) | √ | | √ | | √ | √ | √ |
| Lenovo XClarity Administrator (supports core system firmware and I/O firmware updates) | | √ | | √ | √ | | √ |
The latest firmware can be found at the following site:
http://datacentersupport.lenovo.com/products/servers/thinksystem/sd530/7X21/downloads
• Lenovo XClarity Provisioning Manager
From Lenovo XClarity Provisioning Manager, you can update the Lenovo XClarity Controller firmware, the
UEFI firmware, and the Lenovo XClarity Provisioning Manager software.
Note: By default, the Lenovo XClarity Provisioning Manager Graphical User Interface is displayed when
you press F1. If you have changed that default to be the text-based system setup, you can bring up the
Graphical User Interface from the text-based system setup interface.
Additional information about using Lenovo XClarity Provisioning Manager to update firmware is available
at:
http://sysmgt.lenovofiles.com/help/topic/LXPM/platform_update.html
• Lenovo XClarity Controller
If you need to install a specific update, you can use the Lenovo XClarity Controller interface for a specific
server.
Notes:
– To perform an in-band update through Windows or Linux, the operating system driver must be installed
and the Ethernet-over-USB (sometimes called LAN over USB) interface must be enabled.
Additional information about configuring Ethernet over USB is available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
configuringUSB.html
– If you update firmware through the Lenovo XClarity Controller, make sure that you have downloaded
and installed the latest device drivers for the operating system that is running on the server.
Specific details about updating firmware using Lenovo XClarity Controller are available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
manageserverfirmware.html
• Lenovo XClarity Essentials OneCLI
Lenovo XClarity Essentials OneCLI is a collection of command line applications that can be used to
manage Lenovo servers. Its update application can be used to update firmware and device drivers for
your servers. The update can be performed within the host operating system of the server (in-band) or
remotely through the BMC of the server (out-of-band). A command sketch follows this list of tools.
Specific details about updating firmware using Lenovo XClarity Essentials OneCLI are available at:
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_c_update.html
• Lenovo XClarity Essentials UpdateXpress
Lenovo XClarity Essentials UpdateXpress provides most of OneCLI update functions through a graphical
user interface (GUI). It can be used to acquire and deploy UpdateXpress System Pack (UXSP) update
packages and individual updates. UpdateXpress System Packs contain firmware and device driver
updates for Microsoft Windows and for Linux.
You can obtain Lenovo XClarity Essentials UpdateXpress from the following location:
https://datacentersupport.lenovo.com/solutions/lnvo-xpress
• Lenovo XClarity Essentials Bootable Media Creator
You can use Lenovo XClarity Essentials Bootable Media Creator to create bootable media that is suitable
for applying firmware updates, running preboot diagnostics, and deploying Microsoft Windows operating
systems.
You can obtain Lenovo XClarity Essentials BoMC from the following location:
https://datacentersupport.lenovo.com/solutions/lnvo-bomc
• Lenovo XClarity Administrator
If you are managing multiple servers using the Lenovo XClarity Administrator, you can update firmware for
all managed servers through that interface. Firmware management is simplified by assigning firmware-compliance policies to managed endpoints. When you create and assign a compliance policy to managed
endpoints, Lenovo XClarity Administrator monitors changes to the inventory for those endpoints and flags
any endpoints that are out of compliance.
Specific details about updating firmware using Lenovo XClarity Administrator are available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/update_fw.html
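As a sketch of the OneCLI flow referenced in the list above: the commands acquire an update package, compare it with what is installed, and then flash it out-of-band through the XCC. The option names shown are illustrative and can differ between OneCLI releases; confirm them in the OneCLI update documentation linked in that item.

```
# Acquire updates for this machine type into a local directory, then compare and flash
# out-of-band through the XCC (option names are illustrative; check the OneCLI update docs).
onecli update acquire --mt 7X21 --dir ./updates
onecli update compare --dir ./updates --bmc USERID:<password>@<XCC-IP-address>
onecli update flash   --dir ./updates --bmc USERID:<password>@<XCC-IP-address>
```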
Configure the firmware
Several options are available to install and set up the firmware for the server.
Important: Do not configure option ROMs to be set to Legacy unless directed to do so by Lenovo Support.
This setting prevents UEFI drivers for the slot devices from loading, which can cause negative side effects for
Lenovo software, such as Lenovo XClarity Administrator and Lenovo XClarity Essentials OneCLI, and to the
Lenovo XClarity Controller. The side effects include the inability to determine adapter card details, such as
model name and firmware levels. When adapter card information is not available, generic information is displayed for the model name, such as "Adapter 06:00:00", instead of the actual model name, such as "ThinkSystem RAID 930-16i 4GB Flash." In some cases, the UEFI boot process might also hang.
• Lenovo XClarity Provisioning Manager
From Lenovo XClarity Provisioning Manager, you can configure the UEFI settings for your server.
Note: The Lenovo XClarity Provisioning Manager provides a Graphical User Interface to configure a
server. The text-based interface to system configuration (the Setup Utility) is also available. From Lenovo
XClarity Provisioning Manager, you can choose to restart the server and access the text-based interface.
In addition, you can choose to make the text-based interface the default interface that is displayed when
you press F1.
• Lenovo XClarity Essentials OneCLI
You can use the config application and commands to view the current system configuration settings and
make changes to Lenovo XClarity Controller and UEFI. The saved configuration information can be used
to replicate or restore other systems.
For information about configuring the server using Lenovo XClarity Essentials OneCLI (a command sketch follows this list of tools), see:
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_c_settings_info_commands.html
• Lenovo XClarity Administrator
You can quickly provision and pre-provision all of your servers using a consistent configuration.
Configuration settings (such as local storage, I/O adapters, boot settings, firmware, ports, and Lenovo
XClarity Controller and UEFI settings) are saved as a server pattern that can be applied to one or more
managed servers. When the server patterns are updated, the changes are automatically deployed to the
applied servers.
Specific details about configuring servers using Lenovo XClarity Administrator are available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/server_configuring.html
• Lenovo XClarity Controller
You can configure the management processor for the server through the Lenovo XClarity Controller Web
interface or through the command-line interface.
For information about configuring the server using Lenovo XClarity Controller, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
manageserverfirmware.html
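A minimal sketch of the OneCLI config flow mentioned above, assuming out-of-band access through the XCC; <setting-name> stands for whatever setting name `config show` reports and is not a value this guide prescribes.

```
# View and change settings out-of-band through the XCC (placeholders throughout).
onecli config show --bmc USERID:<password>@<XCC-IP-address>

onecli config set  <setting-name> <value> --bmc USERID:<password>@<XCC-IP-address>
onecli config show <setting-name>         --bmc USERID:<password>@<XCC-IP-address>   # verify the change
```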
Memory configuration
Memory performance depends on several variables, such as memory mode, memory speed, memory ranks,
memory population and processor.
More information about optimizing memory performance and configuring memory is available at the Lenovo
Press website:
https://lenovopress.com/servers/options/memory
In addition, you can take advantage of a memory configurator, which is available at the following site:
http://1config.lenovo.com/#/memory_configuration
RAID configuration
Using a Redundant Array of Independent Disks (RAID) to store data remains one of the most common and
cost-efficient methods to increase a node's storage performance, availability, and capacity.
RAID increases performance by allowing multiple drives to process I/O requests simultaneously. RAID can
also prevent data loss in case of a drive failure by reconstructing (or rebuilding) the missing data from the
failed drive using the data from the remaining drives.
A RAID array (also known as a RAID drive group) is a group of multiple physical drives that uses a certain common method to distribute data across the drives. A virtual drive (also known as a virtual disk or logical drive) is a partition in the drive group that is made up of contiguous data segments on the drives. The virtual drive is presented to the host operating system as a physical disk that can be partitioned to create OS logical drives or volumes.
An introduction to RAID is available at the following Lenovo Press website:
https://lenovopress.com/lp0578-lenovo-raid-introduction
Detailed information about RAID management tools and resources is available at the following Lenovo Press
website:
https://lenovopress.com/lp0579-lenovo-raid-management-tools-and-resources
Install the operating system
Several options are available to install an operating system on the solution.
• Lenovo XClarity Administrator
If you are managing your solution using Lenovo XClarity Administrator, you can use it to deploy operating-system images to up to 28 managed servers concurrently. For more information about using Lenovo
XClarity Administrator to deploy operating system images, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/compute_node_image_deployment.html
• Lenovo XClarity Provisioning Manager
Lenovo XClarity Provisioning Manager is used to install the operating system on a single solution. You can complete the operating system installation by following the instructions in the Lenovo XClarity Provisioning Manager OS Installation function.
• Install the operating system manually
If you cannot install the operating system through Lenovo XClarity Administrator or Lenovo XClarity
Provisioning Manager, you can install the operating system manually. For more information about
installing a specific operating system:
1. Go to http://datacentersupport.lenovo.com and navigate to the support page for your solution.
2. Click How-tos & Solutions.
3. Select an operating system and the installation instructions will be displayed.
Back up the solution configuration
After setting up the solution or making changes to the configuration, it is a good practice to make a complete
backup of the solution configuration.
Make sure that you create backups for the following solution components:
• Management processor
You can back up the management processor configuration through the Lenovo XClarity Controller
interface. For details about backing up the management processor configuration, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
backupthexcc.html
Alternatively, you can use the save command from Lenovo XClarity Essentials OneCLI to create a backup of all configuration settings (a command sketch follows this list). For more information about the save command, see:
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_r_save_command.html
• Operating system
Use your own operating-system and user-data backup methods to back up the operating system and
user data for the solution.
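A minimal sketch of the save/restore flow referenced above, assuming OneCLI is run from a management workstation with out-of-band access to the XCC; the file name, address, and credentials are placeholders.

```
# Back up all XCC and UEFI settings to a file, and restore them later if needed.
onecli config save    --file sd530-node1-config.txt --bmc USERID:<password>@<XCC-IP-address>
onecli config restore --file sd530-node1-config.txt --bmc USERID:<password>@<XCC-IP-address>
```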
Chapter 5. Resolving installation issues
Use this information to resolve issues that you might have when setting up your system.
Use the information in this section to diagnose and resolve problems that you might encounter during the
initial installation and setup of your solution.
• “Solution does not power on” on page 121
• “The solution immediately displays the POST Event Viewer when it is turned on” on page 121
• “Solution cannot recognize a drive” on page 121
• “Displayed system memory less than installed physical memory” on page 122
• “A Lenovo optional device that was just installed does not work.” on page 122
• “Voltage planar fault is displayed in the event log” on page 123
Solution does not power on
Complete the following steps until the problem is resolved:
1. Check whether the XCC web page can be logged in to through the out-of-band network interface (see the sketch after this list).
2. Check the power button LED. If the power button LED is flashing slowly, press the power button to turn
on the solution.
3. Check that the power supplies are installed correctly and that the power supply LEDs are lit normally.
4. If the error recurs, check the FFDC logs for more details.
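For the out-of-band checks in steps 1, 3, and 4, the following sketch can help, assuming IPMI over LAN is enabled on the XCC and ipmitool is available; the address and credentials are placeholders.

```
# Out-of-band health checks through the XCC (placeholders throughout).
XCC="-I lanplus -H <XCC-IP-address> -U USERID -P <password>"

ipmitool $XCC chassis status              # current power state and fault summary
ipmitool $XCC sdr type "Power Supply"     # power-supply presence and failure sensors
ipmitool $XCC sel elist | tail -n 20      # most recent system event log entries
```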
The solution immediately displays the POST Event Viewer when it is turned on
Complete the following steps until the problem is solved.
1. Correct any errors that are indicated by the light path diagnostics LEDs.
2. Make sure that the solution supports all the processors and that the processors match in speed and
cache size.
You can view processor details from system setup.
To determine if the processor is supported for the solution, see http://www.lenovo.com/us/en/
serverproven/.
3. (Trained technician only) Make sure that processor 1 is seated correctly.
4. (Trained technician only) Remove processor 2 and restart the solution.
5. Replace the following components one at a time, in the order shown, restarting the solution each time:
a. (Trained technician only) Processor
b. (Trained technician only) System board
Solution cannot recognize a drive
Complete the following steps until the problem is solved.
1. Verify that the drive is supported for the solution. See http://www.lenovo.com/us/en/serverproven/ for a list
of supported hard drives.
2. Make sure that the drive is seated in the drive bay properly and that there is no physical damage to the
drive connectors.
3. Run the diagnostics tests for the SAS/SATA adapter and drives. When you start a solution and press F1,
the Lenovo XClarity Provisioning Manager interface is displayed by default. You can perform hard drive
diagnostics from this interface. From the Diagnostic page, click Run Diagnostic ➙ HDD test.
Based on those tests:
• If the adapter passes the test but the drives are not recognized, replace the backplane signal cable
and run the tests again.
• Replace the backplane.
• If the adapter fails the test, disconnect the backplane signal cable from the adapter and run the tests
again.
• If the adapter fails the test, replace the adapter.
Displayed system memory less than installed physical memory
Complete the following steps until the problem is resolved:
Note: Each time you install or remove a DIMM, you must disconnect the solution from the power source;
then, wait 10 seconds before restarting the solution.
1. Make sure that:
• No error LEDs are lit on the operator information panel.
• Memory mirrored channel does not account for the discrepancy.
• The memory modules are seated correctly.
• You have installed the correct type of memory.
• If you changed the memory, you updated the memory configuration in the Setup utility.
• All banks of memory are enabled. The solution might have automatically disabled a memory bank
when it detected a problem, or a memory bank might have been manually disabled.
• There is no memory mismatch when the solution is at the minimum memory configuration.
2. Reseat the DIMMs, and then restart the solution.
3. Check the POST error log:
• If a DIMM was disabled by a systems-management interrupt (SMI), replace the DIMM.
• If a DIMM was disabled by the user or by POST, reseat the DIMM; then, run the Setup utility and
enable the DIMM.
4. Run memory diagnostics. When you start a solution and press F1, the Lenovo XClarity Provisioning
Manager interface is displayed by default. You can perform memory diagnostics from this interface.
From the Diagnostic page, click Run Diagnostic ➙ Memory test.
5. Reverse the DIMMs between the channels (of the same processor), and then restart the solution. If the
problem is related to a DIMM, replace the failing DIMM.
6. Re-enable all DIMMs using the Setup utility, and then restart the solution.
7. (Trained technician only) Install the failing DIMM into a DIMM connector for processor 2 (if installed) to
verify that the problem is not the processor or the DIMM connector.
8. (Trained technician only) Replace the system board.
A Lenovo optional device that was just installed does not work.
1. Make sure that:
• The device is supported for the solution (see http://www.lenovo.com/us/en/serverproven/).
• You followed the installation instructions that came with the device and the device is installed
correctly.
• You have not loosened any other installed devices or cables.
• You updated the configuration information in the Setup utility. Whenever memory or any other device
is changed, you must update the configuration.
2. Reseat the device that you just installed.
3. Replace the device that you just installed.
Voltage planar fault is displayed in the event log
Complete the following steps until the problem is solved.
1. Revert the system to the minimum configuration. See “Specifications” on page 6 for the minimally
required number of processors and DIMMs.
2. Restart the system.
• If the system restarts, add each of the items that you removed one at a time, restarting the system
each time, until the error occurs. Replace the item for which the error occurs.
• If the system does not restart, suspect the system board.
Appendix A. Getting help and technical assistance
If you need help, service, or technical assistance or just want more information about Lenovo products, you
will find a wide variety of sources available from Lenovo to assist you.
On the World Wide Web, up-to-date information about Lenovo systems, optional devices, services, and support is available at:
http://datacentersupport.lenovo.com
Note: This section includes references to IBM web sites and information about obtaining service. IBM is
Lenovo's preferred service provider for ThinkSystem.
Before you call
Before you call, there are several steps that you can take to try and solve the problem yourself. If you decide
that you do need to call for assistance, gather the information that will be needed by the service technician to
more quickly resolve your problem.
Attempt to resolve the problem yourself
You can solve many problems without outside assistance by following the troubleshooting procedures that
Lenovo provides in the online help or in the Lenovo product documentation. The Lenovo product
documentation also describes the diagnostic tests that you can perform. The documentation for most
systems, operating systems, and programs contains troubleshooting procedures and explanations of error
messages and error codes. If you suspect a software problem, see the documentation for the operating
system or program.
You can find the product documentation for your ThinkSystem products at the following location:
http://thinksystem.lenovofiles.com/help/index.jsp
You can take these steps to try to solve the problem yourself:
• Check all cables to make sure that they are connected.
• Check the power switches to make sure that the system and any optional devices are turned on.
• Check for updated software, firmware, and operating-system device drivers for your Lenovo product. The
Lenovo Warranty terms and conditions state that you, the owner of the Lenovo product, are responsible
for maintaining and updating all software and firmware for the product (unless it is covered by an
additional maintenance contract). Your service technician will request that you upgrade your software and
firmware if the problem has a documented solution within a software upgrade.
• If you have installed new hardware or software in your environment, check http://www.lenovo.com/us/en/
serverproven/ to make sure that the hardware and software is supported by your product.
• Go to http://datacentersupport.lenovo.com and check for information to help you solve the problem.
– Check the Lenovo forums at https://forums.lenovo.com/t5/Datacenter-Systems/ct-p/sv_eg to see if
someone else has encountered a similar problem.
Gathering information needed to call Support
If you believe that you require warranty service for your Lenovo product, the service technicians will be able
to assist you more efficiently if you prepare before you call. You can also see http://
datacentersupport.lenovo.com/warrantylookup for more information about your product warranty.
Gather the following information to provide to the service technician. This data will help the service
technician quickly provide a solution to your problem and ensure that you receive the level of service for
which you might have contracted.
• Hardware and Software Maintenance agreement contract numbers, if applicable
• Machine type number (Lenovo 4-digit machine identifier)
• Model number
• Serial number
• Current system UEFI and firmware levels
• Other pertinent information such as error messages and logs
As an alternative to calling Lenovo Support, you can go to https://www-947.ibm.com/support/servicerequest/
Home.action to submit an Electronic Service Request. Submitting an Electronic Service Request will start the
process of determining a solution to your problem by making the pertinent information available to the
service technicians. The Lenovo service technicians can start working on your solution as soon as you have
completed and submitted an Electronic Service Request.
Collecting service data
To clearly identify the root cause of a solution issue or at the request of Lenovo Support, you might need to collect service data that can be used for further analysis. Service data includes information such as event
logs and hardware inventory.
Service data can be collected through the following tools:
• Lenovo XClarity Provisioning Manager
Use the Collect Service Data function of Lenovo XClarity Provisioning Manager to collect system service
data. You can collect existing system log data or run a new diagnostic to collect new data.
• Lenovo XClarity Controller
You can use the Lenovo XClarity Controller web interface or the CLI to collect service data for the solution.
The file can be saved and sent to Lenovo Support.
– For more information about using the web interface to collect service data, see http://
sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
servicesandsupport.html.
– For more information about using the CLI to collect service data, see http://sysmgt.lenovofiles.com/help/
topic/com.lenovo.systems.management.xcc.doc/nn1ia_r_ffdccommand.html.
• Lenovo XClarity Administrator
Lenovo XClarity Administrator can be set up to collect and send diagnostic files automatically to Lenovo
Support when certain serviceable events occur in Lenovo XClarity Administrator and the managed
endpoints. You can choose to send diagnostic files to Lenovo Support using Call Home or to another
service provider using SFTP. You can also manually collect diagnostic files, open a problem record, and
send diagnostic files to the Lenovo Support Center.
You can find more information about setting up automatic problem notification within the Lenovo XClarity
Administrator at http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/admin_setupcallhome.html.
• Lenovo XClarity Essentials OneCLI
Lenovo XClarity Essentials OneCLI has an inventory application to collect service data. It can run both in-band and out-of-band. When running in-band within the host operating system on the solution, OneCLI
can collect information about the operating system, such as the operating system event log, in addition to
the hardware service data.
To obtain service data, you can run the getinfor command. For more information about running getinfor, see http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_r_getinfor_command.html.
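A minimal sketch of the getinfor usage described above; the --bmc address and credentials are placeholders, and running the command without --bmc (in-band) also captures operating-system logs, as noted above. Check the getinfor documentation linked above for the exact options.

```
# Collect service data out-of-band through the XCC (run without --bmc to collect in-band
# and include operating-system logs; credentials and address are placeholders).
onecli inventory getinfor --bmc USERID:<password>@<XCC-IP-address>
```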
Contacting Support
You can contact Support to obtain help for your issue.
You can receive hardware service through a Lenovo Authorized Service Provider. To locate a service
provider authorized by Lenovo to provide warranty service, go to https://datacentersupport.lenovo.com/
serviceprovider and use filter searching for different countries. For Lenovo support telephone numbers, see
https://datacentersupport.lenovo.com/supportphonelist. In the U.S. and Canada, call 1-800-426-7378.
In the U.S. and Canada, hardware service and support is available 24 hours a day, 7 days a week. In the U.K., these services are available Monday through Friday, from 9 a.m. to 6 p.m.
China product support
To contact product support in China, go to:
http://support.lenovo.com.cn/lenovo/wsi/es/ThinkSystem.html
You can also call 400-106-8888 for product support. The call support is available Monday through Friday,
from 9 a.m. to 6 p.m.
Taiwan product support
To contact product support for Taiwan, call 0800–016–888. The call support is available 24 hours a day, 7
days a week.