Appendix G Component Mounting Conditions (Fujitsu PRIMEQUEST 2000 Series)
The Fujitsu PRIMEQUEST 2000 series is designed for use in computer room environments, and this manual is intended for system administrators. It covers:
- Network environment setup
- Tool installation
- Operating system installation
- Component configuration and replacement
- PCI card hot maintenance in Red Hat Enterprise Linux 6
- Replacement of HDD/SSD
- PCI Express card hot maintenance in Windows
- Backup and restore
- System startup/shutdown and power control
- Configuration and status checking
- Error notification and maintenance
Appendix G Component Mounting Conditions
This appendix describes the mounting conditions of components for the PRIMEQUEST 2000 series.
G.1 CPU
This section describes the number of CPUs that can be mounted and the criteria for mixing different types of
CPU.
CPU mounting criteria
- An SB with one CPU is allowed only in a single-SB partition. (*1)
- An SB with two CPUs is allowed in a single-SB partition.
- Only SBs with two CPUs are allowed in a multiple-SB partition.
- CPUs must be mounted starting from CPU#0 on the SB. If the CPU mounting order is wrong, the SB results in an error.
- An SB with no CPU mounted on it will cause an error.
(*1) For the PRIMEQUEST 2800B, only an SB with two CPUs is allowed even in a single-SB partition.
The following lists the number of SBs and CPUs per partition for each model.
TABLE G.1 Numbers of SBs and CPUs per partition (cells show the number of CPUs)

Model            | 1 SB   | 2 SB | 3 SB          | 4 SB
PRIMEQUEST 2400E | 1 or 2 | 4    | Not supported | Not supported
PRIMEQUEST 2800E | 1 or 2 | 4    | 6             | 8
PRIMEQUEST 2800B | 2      | 4    | 6             | 8
CPU mixing conditions
- Within a partition, all CPUs must have the same frequency, cache size, number of cores, power rating, QPI rate, and scalability.
- Within a system, CPUs with different frequencies, cache sizes, and core counts can be mounted.
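The partition-level uniformity rule above can be expressed as a small check. This is an illustrative sketch only; the `CpuSpec` fields and function name are assumptions for the example, not part of any Fujitsu tooling, and the sample specification values are made up:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CpuSpec:
    # Attributes that must match across all CPUs in one partition
    frequency_mhz: int
    cache_mb: int
    cores: int
    power_w: int
    qpi_gt_s: float
    scalability: str

def partition_cpus_valid(cpus):
    """Return True if every CPU in the partition has identical specs."""
    return len(set(cpus)) <= 1

# Two identical CPUs: allowed in one partition
a = CpuSpec(2300, 45, 18, 145, 9.6, "4S")
b = CpuSpec(2300, 45, 18, 145, 9.6, "4S")
print(partition_cpus_valid([a, b]))   # True

# A different core count: allowed elsewhere in the system,
# but not in the same partition
c = CpuSpec(2300, 45, 12, 145, 9.6, "4S")
print(partition_cpus_valid([a, c]))   # False
```

The frozen dataclass makes each spec hashable, so a one-element set means all CPUs match.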
G.2 DIMM
This section describes the number of DIMMs that can be mounted and the criteria for mixing different types of
DIMM.
DIMM mounting conditions
- At least two DIMMs are required per CPU.
- Up to 24 DIMMs can be mounted per CPU.
- DIMMs must be mounted in the following increment units.
In the table below, ‘N’ denotes normal mode or performance mode, ‘M’ denotes full mirror mode or partial mirror mode, and ‘S’ denotes spare mode.
TABLE G.2 DIMM increment unit

The table gives the DIMM increment unit for each combination of DR setting (Disable/Enable), PCI address mode (Bus/Segment), and SB number (CPU number) per partition (1SB (1CPU) (*1), 1SB (2CPU), 2SB (4CPU), 3SB (6CPU), 4SB (8CPU)), listed separately for 1CPU/1SB and 2CPU/1SB configurations in each of the N, M, and S modes. With DR disabled and PCI address mode Bus, a 1SB (1CPU) partition uses increments of 2 (N), 4 (M), and 6 (S), and a 1SB (2CPU) partition likewise uses 2 (N), 4 (M), and 6 (S). Combinations marked N/A are not available.

N/A: Not available
(*1) PRIMEQUEST 2800B is excluded.
229 C122-E175-01EN
DIMM mixing criteria
- 8 GB and 16 GB DIMMs can be mixed within a single SB or partition.
- 32 GB and 64 GB DIMMs cannot be mixed with DIMMs of other sizes in an SB or partition.
- ‘Identical DIMMs’ means DIMMs of the same size.

TABLE G.3 Relationship between DIMM size and mutual operability (within an SB)

DIMM size | 8 GB          | 16 GB         | 32 GB         | 64 GB
8 GB      | Supported     | Supported     | Not supported | Not supported
16 GB     | Supported     | Supported     | Not supported | Not supported
32 GB     | Not supported | Not supported | Supported     | Not supported
64 GB     | Not supported | Not supported | Not supported | Supported
TABLE G.4 Relationship between DIMM size and mutual operability (within a partition)

DIMM size | 8 GB          | 16 GB         | 32 GB         | 64 GB
8 GB      | Supported     | Supported     | Not supported | Not supported
16 GB     | Supported     | Supported     | Not supported | Not supported
32 GB     | Not supported | Not supported | Supported     | Not supported
64 GB     | Not supported | Not supported | Not supported | Supported
TABLE G.5 Relationship between DIMM size and mutual operability (within a cabinet)

DIMM size | 8 GB      | 16 GB     | 32 GB     | 64 GB
8 GB      | Supported | Supported | Supported | Supported
16 GB     | Supported | Supported | Supported | Supported
32 GB     | Supported | Supported | Supported | Supported
64 GB     | Supported | Supported | Supported | Supported
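The three operability matrices collapse to a simple rule: 8 GB and 16 GB are mutually compatible, 32 GB and 64 GB pair only with themselves, and at cabinet scope any mix is supported. A minimal sketch of that rule (the function and table names are illustrative, not Fujitsu tooling):

```python
# DIMM sizes (GB) that may coexist at SB or partition scope,
# per TABLE G.3 and TABLE G.4 (which are identical).
MIXABLE = {8: {8, 16}, 16: {8, 16}, 32: {32}, 64: {64}}

def dimms_compatible(size_a, size_b, scope="sb"):
    """Check whether two DIMM sizes may coexist at the given scope."""
    if scope == "cabinet":            # TABLE G.5: any mix is supported
        return True
    return size_b in MIXABLE[size_a]  # SB and partition scope

print(dimms_compatible(8, 16))              # True
print(dimms_compatible(32, 64))             # False
print(dimms_compatible(32, 64, "cabinet"))  # True
```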
DIMM mounting order and DIMM mixed mounting condition
The order of DIMM installation and the conditions for mixed DIMM installation are shown below.
In the DIMM mounting order tables, DIMMs are installed in ascending order of the numbers shown.
In the DIMM mixed mounting condition tables, identical symbols indicate identical DIMMs.
The applicable DIMM mounting order and mixed mounting condition depend on the number of CPUs per partition, whether the PCI address mode is Segment or Bus, and whether DR is enabled. The table below shows which tables apply in each case.
TABLE G.6 DIMM mounting order and DIMM mixed mounting condition in each configuration
The table maps each combination of DR setting (Disable/Enable), PCI address mode (Bus/Segment), and SB number (CPU number) per partition (1SB (1CPU) (*1), 1SB (2CPU), 2SB (4CPU), 3SB (6CPU), 4SB (8CPU)) to the applicable DIMM mounting order table and DIMM mixed mounting condition table, for 1CPU/1SB and 2CPU/1SB configurations. Combinations marked N/A are not available. In short, TABLE G.7 to TABLE G.10 apply by default, and TABLE G.11 to TABLE G.14 apply when the partition includes 4 SBs with PCI Address Mode = Segment Mode or when DR is enabled in the partition.

N/A: Not available
(*1) PRIMEQUEST 2800B is excluded.
TABLE G.7 DIMM mounting order at 1CPU/1SB (*1)

CPU#0 DIMM Slot#:
0A0 0A3 0B0 0B3 0C0 0C3 0D0 0D3
0A1 0A4 0B1 0B4 0C1 0C4 0D1 0D4
0A2 0A5 0B2 0B5 0C2 0C5 0D2 0D5

Normal (Performance):
 1  1  3  3  2  2  4  4
 5  5  7  7  6  6  8  8
 9  9 11 11 10 10 12 12

Full or Partial Mirror:
 1  1  1  1  2  2  2  2
 3  3  3  3  4  4  4  4
 5  5  5  5  6  6  6  6

Spare:
 1  1  3  3  2  2  4  4
 1  1  3  3  2  2  4  4
 1  1  3  3  2  2  4  4
(*1) See ‘TABLE G.11 DIMM mounting order at 1CPU/1SB when a Partition includes 8 sockets with PCI Address Mode = Segment Mode, or when DR is enabled in a Partition’ when the partition includes 4 SBs with PCI Address Mode = Segment Mode or DR is enabled in the partition.
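The Normal-mode rows of TABLE G.7 can be read programmatically: group slots by their install step and populate in ascending step order, two DIMMs per step. A sketch under that reading; the helper name is illustrative, not a Fujitsu tool:

```python
# TABLE G.7, Normal (Performance) mode: install step for each CPU#0 slot.
# Step n means the slot is populated in the n-th two-DIMM increment.
NORMAL_ORDER = {
    "0A0": 1, "0A3": 1, "0B0": 3, "0B3": 3,
    "0C0": 2, "0C3": 2, "0D0": 4, "0D3": 4,
    "0A1": 5, "0A4": 5, "0B1": 7, "0B4": 7,
    "0C1": 6, "0C4": 6, "0D1": 8, "0D4": 8,
    "0A2": 9, "0A5": 9, "0B2": 11, "0B5": 11,
    "0C2": 10, "0C5": 10, "0D2": 12, "0D5": 12,
}

def slots_for(dimm_count):
    """Return the slots populated for a given DIMM count (multiple of 2)."""
    steps_needed = dimm_count // 2
    return sorted(s for s, step in NORMAL_ORDER.items() if step <= steps_needed)

print(slots_for(4))  # ['0A0', '0A3', '0C0', '0C3']
```

Note that channel C precedes channel B in the order, so four DIMMs land in the A and C channels first.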
TABLE G.8 DIMM mounting order at 2CPU/1SB (*1)

DIMM Slot# (CPU#0 | CPU#1):
0A0 0A3 0B0 0B3 0C0 0C3 0D0 0D3 | 1A0 1A3 1B0 1B3 1C0 1C3 1D0 1D3
0A1 0A4 0B1 0B4 0C1 0C4 0D1 0D4 | 1A1 1A4 1B1 1B4 1C1 1C4 1D1 1D4
0A2 0A5 0B2 0B5 0C2 0C5 0D2 0D5 | 1A2 1A5 1B2 1B5 1C2 1C5 1D2 1D5

Normal (Performance):
  1  1  4  4  2  2  6  6 |  1  1  5  5  3  3  7  7
  8  8 12 12 10 10 14 14 |  9  9 13 13 11 11 15 15
 16 16 20 20 18 18 22 22 | 17 17 21 21 19 19 23 23

Full or Partial Mirror:
  1  1  1  1  2  2  2  2 |  1  1  1  1  3  3  3  3
  4  4  4  4  6  6  6  6 |  5  5  5  5  7  7  7  7
  8  8  8  8 10 10 10 10 |  9  9  9  9 11 11 11 11

Spare:
  1  1  4  4  2  2  6  6 |  1  1  5  5  3  3  7  7
  1  1  4  4  2  2  6  6 |  1  1  5  5  3  3  7  7
  1  1  4  4  2  2  6  6 |  1  1  5  5  3  3  7  7
(*1) See ‘TABLE G.12 DIMM mounting order at 2CPU/1SB when a Partition includes 8 sockets with PCI Address Mode = Segment Mode, or when DR is enabled in a Partition’ when the partition includes 4 SBs with PCI Address Mode = Segment Mode or DR is enabled in the partition.
TABLE G.9 DIMM mixed mounting condition at 1CPU/1SB (*1)

CPU#0 DIMM Slot#:
0A0 0A3 0B0 0B3 0C0 0C3 0D0 0D3
0A1 0A4 0B1 0B4 0C1 0C4 0D1 0D4
0A2 0A5 0B2 0B5 0C2 0C5 0D2 0D5

Normal (Performance):
□ □ ○ ○ △ △ ☆ ☆
□ □ ○ ○ △ △ ☆ ☆
□ □ ○ ○ △ △ ☆ ☆

Full or Partial Mirror:
□ □ □ □ △ △ △ △
□ □ □ □ △ △ △ △
□ □ □ □ △ △ △ △

Spare:
□ □ ○ ○ △ △ ☆ ☆
□ □ ○ ○ △ △ ☆ ☆
□ □ ○ ○ △ △ ☆ ☆
(*1) See ‘TABLE G.13 DIMM mixed mounting condition at 1CPU/1SB when a Partition includes 8 sockets with PCI Address Mode = Segment Mode, or when DR is enabled in a Partition’ when the partition includes 4 SBs with PCI Address Mode = Segment Mode or DR is enabled in the partition.
TABLE G.10 DIMM mixed mounting condition at 2CPU/1SB (*1)

DIMM Slot# (CPU#0 | CPU#1):
0A0 0A3 0B0 0B3 0C0 0C3 0D0 0D3 | 1A0 1A3 1B0 1B3 1C0 1C3 1D0 1D3
0A1 0A4 0B1 0B4 0C1 0C4 0D1 0D4 | 1A1 1A4 1B1 1B4 1C1 1C4 1D1 1D4
0A2 0A5 0B2 0B5 0C2 0C5 0D2 0D5 | 1A2 1A5 1B2 1B5 1C2 1C5 1D2 1D5

Normal (Performance):
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★

Full or Partial Mirror:
□ □ □ □ △ △ △ △ | ■ ■ ■ ■ ▲ ▲ ▲ ▲
□ □ □ □ △ △ △ △ | ■ ■ ■ ■ ▲ ▲ ▲ ▲
□ □ □ □ △ △ △ △ | ■ ■ ■ ■ ▲ ▲ ▲ ▲

Spare:
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★

(*1) See ‘TABLE G.14 DIMM mixed mounting condition at 2CPU/1SB when a Partition includes 8 sockets with PCI Address Mode = Segment Mode, or when DR is enabled in a Partition’ when the partition includes 4 SBs with PCI Address Mode = Segment Mode or DR is enabled in the partition.
TABLE G.11 DIMM mounting order at 1CPU/1SB when a Partition includes 8 sockets with PCI Address Mode = Segment Mode, or when DR is enabled in a Partition

CPU#0 DIMM Slot#:
0A0 0A3 0B0 0B3 0C0 0C3 0D0 0D3
0A1 0A4 0B1 0B4 0C1 0C4 0D1 0D4
0A2 0A5 0B2 0B5 0C2 0C5 0D2 0D5

Normal (Performance):
 1  1  3  3  2  2  4  4
 5  5  7  7  6  6  8  8
 9  9 11 11 10 10 12 12

Full or Partial Mirror:
 1  1  1  1  2  2  2  2
 3  3  3  3  4  4  4  4
 5  5  5  5  6  6  6  6

Spare:
 1  1  3  3  2  2  4  4
 1  1  3  3  2  2  4  4
 1  1  3  3  2  2  4  4
TABLE G.12 DIMM mounting order at 2CPU/1SB when a Partition includes 8 sockets with PCI Address Mode = Segment Mode, or when DR is enabled in a Partition

DIMM Slot# (CPU#0 | CPU#1):
0A0 0A3 0B0 0B3 0C0 0C3 0D0 0D3 | 1A0 1A3 1B0 1B3 1C0 1C3 1D0 1D3
0A1 0A4 0B1 0B4 0C1 0C4 0D1 0D4 | 1A1 1A4 1B1 1B4 1C1 1C4 1D1 1D4
0A2 0A5 0B2 0B5 0C2 0C5 0D2 0D5 | 1A2 1A5 1B2 1B5 1C2 1C5 1D2 1D5

Normal (Performance):
  1  1  4  4  2  2  6  6 |  1  1  5  5  3  3  7  7
  8  8 12 12 10 10 14 14 |  9  9 13 13 11 11 15 15
 16 16 20 20 18 18 22 22 | 17 17 21 21 19 19 23 23

Full or Partial Mirror:
  1  1  1  1  2  2  2  2 |  1  1  1  1  3  3  3  3
  4  4  4  4  6  6  6  6 |  5  5  5  5  7  7  7  7
  8  8  8  8 10 10 10 10 |  9  9  9  9 11 11 11 11

Spare:
  1  1  4  4  2  2  6  6 |  1  1  5  5  3  3  7  7
  1  1  4  4  2  2  6  6 |  1  1  5  5  3  3  7  7
  1  1  4  4  2  2  6  6 |  1  1  5  5  3  3  7  7
TABLE G.13 DIMM mixed mounting condition at 1CPU/1SB when a Partition includes 8 sockets with PCI Address Mode = Segment Mode, or when DR is enabled in a Partition

CPU#0 DIMM Slot#:
0A0 0A3 0B0 0B3 0C0 0C3 0D0 0D3
0A1 0A4 0B1 0B4 0C1 0C4 0D1 0D4
0A2 0A5 0B2 0B5 0C2 0C5 0D2 0D5

Normal (Performance):
□ □ ○ ○ △ △ ☆ ☆
□ □ ○ ○ △ △ ☆ ☆
□ □ ○ ○ △ △ ☆ ☆

Full or Partial Mirror:
□ □ □ □ △ △ △ △
□ □ □ □ △ △ △ △
□ □ □ □ △ △ △ △

Spare:
□ □ ○ ○ △ △ ☆ ☆
□ □ ○ ○ △ △ ☆ ☆
□ □ ○ ○ △ △ ☆ ☆
TABLE G.14 DIMM mixed mounting condition at 2CPU/1SB when a Partition includes 8 sockets with PCI Address Mode = Segment Mode, or when DR is enabled in a Partition

DIMM Slot# (CPU#0 | CPU#1):
0A0 0A3 0B0 0B3 0C0 0C3 0D0 0D3 | 1A0 1A3 1B0 1B3 1C0 1C3 1D0 1D3
0A1 0A4 0B1 0B4 0C1 0C4 0D1 0D4 | 1A1 1A4 1B1 1B4 1C1 1C4 1D1 1D4
0A2 0A5 0B2 0B5 0C2 0C5 0D2 0D5 | 1A2 1A5 1B2 1B5 1C2 1C5 1D2 1D5

Normal (Performance):
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★

Full or Partial Mirror:
□ □ □ □ △ △ △ △ | ■ ■ ■ ■ ▲ ▲ ▲ ▲
□ □ □ □ △ △ △ △ | ■ ■ ■ ■ ▲ ▲ ▲ ▲
□ □ □ □ △ △ △ △ | ■ ■ ■ ■ ▲ ▲ ▲ ▲

Spare:
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
□ □ ○ ○ △ △ ☆ ☆ | ■ ■ ● ● ▲ ▲ ★ ★
G.3 Configuration when using 100 V PSU
The PRIMEQUEST 2000 series supports 100 V power input only with the PSU_S. Because power efficiency decreases when a 100 V PSU is used, the maximum number of components that can be mounted in the system may decrease.
G.4 Available internal I/O ports
The following table lists the number of available internal I/O ports.
TABLE G.15 Available internal I/O ports and the quantities

Unit       | Internal I/O | No. | Remarks
SB         | USB          | 4   | Home SB only
SB         | VGA          | 1   | Home SB only
SB         | HDD/SSD      | 4   | Home SB only when DR enabled.
IOU_1GbE   | GbE          | 2   |
IOU_10GbE  | 10GbE        | 2   |
DU         | HDD/SSD      | 4   |
G.5 Legacy BIOS Compatibility (CSM)
The PRIMEQUEST 2000 series uses UEFI firmware, which provides a BIOS emulation function (CSM).
Currently, the following legacy BIOS restrictions are known:
- Option ROM area restriction: the number of PXE-enabled cards that can operate as boot devices is limited to four.
- I/O space restriction: in a legacy BIOS environment, I/O space is required on a boot device.
Note
In a CSM environment, I/O space must be allocated to a boot device.
G.6 Rack Mounting
For details on installation in a 19-inch rack, see the PRIMEQUEST 2000 Series Hardware Installation Manual
(C122-H004EN).
G.7 Installation Environment
For details on the environmental conditions for PRIMEQUEST 2000 series installations, see the
PRIMEQUEST 2000 Series Hardware Installation Manual (C122-H004EN).
G.8 NIC (Network Interface Card)
Note the following precautions on mounting of a NIC (network interface card).
Notes
- We recommend configuring teaming between LANs of the same type. (For the onboard LAN, we recommend teaming between cards of the same type.)
- If teaming is configured with different types of LAN, receive-side scaling may be disabled because of differences in the scaling function between the cards. Consequently, the balance of receive traffic may not be optimal, but this is not a problem for normal operation.
- Depending on the Intel PROSet version used when configuring teaming, a warning may be output about receive-side scaling being disabled for the reasons described above. In this event, simply click the [OK] button.
For details on receive-side scaling and other precautions, see the Intel PROSet help or check the information at [Device Manager] - [Properties of the target LAN] - [Details] - [Receive-Side Scaling].
- For the WOL (Wake on LAN) support conditions of operating systems, see the respective operating system manuals and restrictions. For remote power control in an operating system that does not support
WOL, perform operations from the MMB Web-UI.