Chapter 3. Basic partition management

This chapter describes the tools used to configure and control eServer p5 systems and includes the following sections:

- 3.1, “Hardware Management Console” on page 58
- 3.2, “Advanced System Management Interface” on page 64
- 3.3, “Resetting a server” on page 69
- 3.4, “Partition Load Manager” on page 71


3.1 Hardware Management Console

In order to configure and administer a partitioning-capable pSeries server, you must attach at least one IBM Hardware Management Console (HMC) for pSeries to the system. You can find a general overview of the HMC in 1.4, “IBM Hardware Management Console” on page 11.

One HMC is capable of controlling multiple pSeries servers. At the time of the writing of this book, a maximum of 32 non-clustered pSeries servers and a maximum of 254 partitions are supported by one HMC. You can add a second redundant HMC to the configuration.

You can add, remove, or move resources between partitions. When moving resources between partitions, you can use the Resource Monitoring and Control (RMC) infrastructure to provide a secure and reliable connection channel between the HMC and the partitions. This connection channel is configured automatically by the HMC and by each AIX partition when the AIX partition is started. The HMC uses the open network LAN connection for this connection channel.

POWER5 processor-based system HMCs require Ethernet connectivity. Sufficient Ethernet adapters must be available to enable public and private networks if you need both.

The 7310 Model C04 is a desktop model with one native 10/100/1000 Ethernet port, two additional PCI slots for additional Ethernet adapters, two PCI-Express slots, and six USB ports.

The 7310 Model CR3 is a 1U, 19-inch rack-mountable drawer that has two native Ethernet ports, two additional PCI slots for additional Ethernet adapters, and three USB ports.

When an HMC is connected to eServer p5 systems, the p5-570 integrated serial ports are disabled. If you need serial connections, for example for a non-Ethernet HACMP heartbeat, you must provide an async adapter.

Note: It is not possible to connect POWER4 and POWER5 processor-based systems to the same HMC simultaneously.

To extend this functionality, you can use:

- Another HMC for remote access. This remote HMC must have a network connection to the HMC that is connected to the servers (see Figure 3-1 on page 59).


- An AIX 5L Web-based System Manager client to connect to the HMC over the network, or the Web-based System Manager PC client, which runs on Windows or Linux operating systems.

Figure 3-1 Dual HMC configuration

The resources that the HMC manipulates are I/O devices and slots, memory, and processors.

This book does not discuss how to create a basic partition or cover the full range of HMC functions.

3.1.1 Managing I/O devices and slots

Logical partitions can have desired or required I/O devices or slots. When you specify that an I/O device or slot is desired (or shared), either the I/O device or slot is meant to be shared with other logical partitions or the I/O device or slot is optional. When you specify that an I/O device or slot is required (or dedicated), then you cannot activate the logical partition if the I/O device or slot is unavailable or in use by another logical partition.


Note: If resources are moved dynamically, the configuration change is temporary and is not reflected in the partition profile. Thus, all configuration changes are lost the next time that you activate the partition profile. If you want to save your new partition configuration, you must change the partition profile.

Figure 3-2 and Figure 3-3 on page 61 show how the virtual adapters are examined.

Figure 3-2 DLPAR Virtual Adapter menu


Figure 3-3 Virtual Adapter capabilities

3.1.2 Managing memory

Memory in each logical partition operates within its assigned minimum and maximum values. The full amount of memory that you assign to a logical partition might not be available for the partition's use. Static memory overhead that is required to support the assigned maximum memory affects the reserved or hidden memory amount. This static memory overhead also influences the minimum memory size of a partition (see Figure 3-4 and Figure 3-5 on page 62).

Figure 3-4 Move memory resources - step 1


Figure 3-5 Move memory resources - step 2

3.1.3 Managing processing power

The ability to move processor capacity dynamically becomes important when you need to adjust to changing workloads. You can move processing capacity based on the desired, minimum, and maximum values that you created for the profile.

The desired processing value that you establish is the amount of processing resources that you get if you do not overcommit the processing capacity. The minimum and maximum values enable you to establish a range within which you can move the processors dynamically.

For both shared and dedicated processors, you can specify a minimum value that is equal to the minimum amount of processing capacity that you need to support the logical partition. The maximum value must be less than the amount of processing capacity that is available on the system (see Figure 3-6 on page 63 and Figure 3-7 on page 63).


Figure 3-6 Move processor resources

Figure 3-7 Move processing units


3.1.4 Scheduling movement of resources

You can schedule the dynamic movement of resources to and from logical partitions that are running on your managed system. When this procedure is completed, the managed system is set to perform the dynamic logical partitioning task at the date and time that you specify. You can set the managed system to add resources to a logical partition, remove resources from a logical partition, or move resources from one logical partition to another.

You can schedule the movement of memory and of dedicated processors. (You are not able to schedule the movement of shared processors.)

For more information about dynamic resources, see Chapter 5, “Dynamic logical partitioning” on page 139.

3.2 Advanced System Management Interface

The Advanced System Management Interface (ASMI) is the interface to the service processor that is required to perform general and administrator-level service tasks, such as reading service processor error logs, reading vital product data, setting up the service processor, and controlling the system power. You can access the ASMI through a Web browser, an ASCII console, or the HMC.

This interface is accessible using a Web browser on a client system that is connected to the service processor on an Ethernet network. You can also access it using a terminal that is attached to a serial port on the server.

This section covers two important system management topics in detail: network configuration and power/restart control.

With the system in power standby mode, or with an operating system in control of the machine or controlling the related partition, the service processor is working and checking the system for errors, ensuring the connection to the HMC for manageability purposes.

When the system status is standby, the service processor provides a System Management Interface that you can access by pressing any key on an attached serial console keyboard, or the ASMI, which you can access using a Web browser on a client system that is connected to the service processor on an Ethernet network.

The service processor and the ASMI are standard on all eServer p5 processor-based hardware. Both system management interfaces require you to enter the general or admin ID password. They also allow you to set flags that affect the operation of the system according to the provided password (such as auto power restart), to view information about the system (such as the error log and vital product data) and the network environment access setup, and to control the system power.

3.2.1 Accessing the ASMI using a Web browser

The ASMI requires password authentication and provides a Web connection to the service processor over the Ethernet using the Secure Sockets Layer (SSL). To establish an SSL connection, open your browser using https:// followed by the host name or IP address of the service processor.
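For example, if the service processor were reachable at the hypothetical address 192.168.2.147, you would point the browser at https://192.168.2.147.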

Supported browsers are Netscape (version 7.1), Internet Explorer (version 6.0), and Opera (version 7.23). Later versions of these browsers are not supported.

JavaScript and cookies must be enabled.

The browser-based ASMI is available during all phases of the system operation, including initial program load (IPL) and run time. Some menu options are blocked during the system IPL or run time to prevent usage or ownership conflicts if corresponding resources are in use during that phase.

If accessed on a terminal, ASMI is only available if the system is powered off.

All requested input must be provided in English-language characters regardless of the language selected to view the interface.

3.2.2 Accessing the ASMI using the HMC

To access the ASMI using the HMC, follow these steps:

1. In the navigation area, expand the managed system with which you want to work.

2. Expand Service Applications and click Service Focal Point.

3. In the content area, click Service Utilities.

4. From the Service Utilities window, select the managed system with which you want to work.

5. From the Selected menu on the Service Utilities window, select Launch ASM menu.

3.2.3 Network configuration

You can initialize network addresses and attributes for the managed system with the ASMI using the following techniques:

- Dynamic configuration, using DHCP
- Manual configuration, if a specific address is required


Note: You can only use the ASMI to configure these network attributes when the system is powered off (see Figure 3-8 and Figure 3-9).

Figure 3-8 ASMI network configuration, powered on


Figure 3-9 ASMI network configuration

3.2.4 Service processor

The service processor has a permanent firmware boot side, or A side, and a temporary firmware boot side, or B side. You should install new levels of firmware on the temporary side first in order to test the update’s compatibility with your applications. When the new level of firmware has been approved, you can copy it to the permanent side.

With the system running, the service processor provides the ability to view and change the power-on settings using the ASMI. Also, the surveillance function of the service processor monitors the operating system to confirm that it is still running and has not stalled.

3.2.5 Power/Restart control

You can start and shut down the system in addition to setting IPL options. In ASMI, you can view and change the following IPL options:

- System boot speed
  Fast or Slow. Fast boot results in skipped diagnostic tests and shorter memory tests during the boot.


- Firmware boot side for next boot
  Permanent or Temporary. Firmware updates should be tested by booting from the temporary side before being copied into the permanent side.
- System operating mode
  Manual or Normal. Manual mode overrides various automatic power-on functions, such as auto-power restart, and enables the power switch button.
- AIX/Linux partition mode boot (available only if the system is not managed by the HMC)
  - Service mode boot from saved list. This is the preferred way to run concurrent AIX diagnostics.
  - Service mode boot from default list. This is the preferred way to run stand-alone AIX diagnostics.
  - Boot to open firmware prompt.
  - Boot to System Management Service (SMS) to further select the boot devices or network boot options.
- Boot to server firmware
  Select the state for the server firmware: Standby or Running. When the server is in the server firmware standby state, partitions can be set up and activated.

Refer to Figure 3-10 on page 69 for an example of the power control modes.


Figure 3-10 ASMI, Power On/Off System

3.3 Resetting a server

If you need support for an adapter that does not use extended error handling (EEH), and there is no way to obtain EEH support for it or to provide an alternative hardware solution, your only choice might be to reset your server to the factory settings, provided that the adapter is supported in this mode. This is also the case if you no longer want to manage a system with an HMC.

Note: The p5-590 and p5-595 must be managed by an HMC.

This section describes how to reset a server in both of these cases.

3.3.1 EEH adapters and partitioning

Currently, you can order POWER5-based systems only with adapters that support EEH. Support of a non-EEH adapter (OEM adapter) is only possible when the system has not been configured for partitioning. This is the case, for example, when a new system is received in the full system partition configuration and you plan to use it without an HMC. EEH is disabled for that adapter upon system initialization.

When the platform is prepared for partitioning or is partitioned, the hypervisor prevents disabling EEH upon system initialization. Firmware in the partition detects any non-EEH device drivers that are installed, and those drivers are not configured. Therefore, all adapters in eServer p5 systems must be EEH capable in order to be used by a partition. This applies both to I/O installed in I/O drawers attached to an eServer p5 system and to I/O installed in planar adapter slots found in eServer p5 system units.

You do not need to actually create more than a single partition to put the platform in a state where the hypervisor considers it to be partitioned. The platform becomes partitioned (in general, but also in specific reference to EEH enabled by default) as soon as you attach an HMC and perform any function that relates to partitioning. Simple hardware service operations do not partition the platform, so it is not simply connecting an HMC that has this effect. However, modifying any platform attributes that are related to partitioning (such as booting under HMC control to only PHYP standby and suppressing autoboot to the preinstalled operating system partition) results in a partitioned platform, even if you do not actually create additional partitions.

All eServer p5 platform I/O slots (system units and drawers) are managed the same way with respect to EEH. To return a system to a non-partitioned state, you must perform a reset.

3.3.2 Restoring a server to factory settings

You can reset a server to the factory default settings. It is recommended that you perform this task only when directed to do so by your service provider.

Attention: Before resetting a server, make sure that you have manually recorded all settings that you need to preserve. You can reset a server only if the identical level of firmware exists on both the permanent firmware boot side, also known as the P side, and the temporary firmware boot side, also known as the T side.

Resetting a server results in the loss of all system settings (such as the HMC access and ASMI passwords, time of day, network configuration, and hardware deconfiguration policies) that you may have set through user interfaces. Also, you lose the system error logs and partition-related information.

To reset a server, your authority level must be one of the following:

- Administrator
- Authorized service provider

To restore server settings to factory settings, do the following:

1. On the ASMI Welcome pane, specify your user ID and password, and click Log In.

2. In the navigation area, expand System Service Aids.

3. Select Factory Configuration.

4. Select the options that you want to restore to factory settings.

5. Click Continue. The service processor reboots.

3.4 Partition Load Manager

The Partition Load Manager (PLM) is a utility that can redistribute processor and memory resources automatically between partitions that are running AIX 5L Version 5.3. The PLM server monitors the processor and memory load in the managed partitions using the AIX Resource Monitoring and Control (RMC) subsystem. Based on an administrator-defined policy, the PLM server orchestrates the movement of processor and memory resources between the partitions by communicating with the RMC client and the HMC.

The PLM software is part of the Advanced POWER Virtualization feature on eServer p5 servers and helps you maximize the dynamic utilization of processor and memory resources of partitions.

PLM provides automated processor and memory resource management across DLPAR-capable logical partitions running AIX 5L. PLM allocates resources to partitions on demand, within the constraints of a user-defined policy. Partitions with a high demand for resources are given resources from partitions with a lower demand, improving the overall resource utilization of the system.

Resources that would otherwise be unused, if left allocated to a partition that was not utilizing them, can now be used to meet resource demands of other partitions in the same system. Figure 3-11 on page 72 shows how PLM functionality can improve partition resource utilization.


Figure 3-11 Partition Load Manager functionality

PLM uses a client-server model to report and manage resource utilization. The clients, or managed partitions, notify the PLM server when resources are either not used enough or are overused. Upon notification of one of these events, the PLM server makes resource allocation decisions based on a user-defined resource management policy. This policy determines how much of the available resources are to be allocated to each partition.

PLM works much like any other system management software in that it allows you to view the resources across your partitions, group those resources into manageable chunks, allocate and reallocate those resources within or across the groups, and maintain local logs of activity on the partitions.

PLM is a resource manager that assigns and moves resources based on defined policies and utilization of the resources. PLM manages memory and both dedicated processor partitions and shared processor partitions that use Micro-Partitioning technology, readjusting the resources as needed. This adds additional flexibility on top of the micro-partitioning flexibility provided by the hypervisor.

PLM, however, has no knowledge about the importance of a workload running in the partitions and cannot readjust priority based on changes in the types of workloads. PLM does not manage Linux and i5/OS partitions. Figure 3-12 on page 73 shows a comparison of features between PLM and the hypervisor.


(Figure 3-12 is a table that compares PLM with the POWER5 hypervisor by capability: hardware support (POWER4, POWER5), operating system support (AIX 5.2, AIX 5.3, pLinux), physical processor management (dedicated, capped shared, uncapped shared), virtual processor management (virtual processor minimization for efficiency, virtual processor adjustment for physical processor growth), physical memory management, management policy (share-based with minimum and maximum entitlements, entitlement-based, goal-based), management domains (multiple management domains on a single CEC, cross-platform), and administration (simple administration, centralized LPAR monitoring, and time-of-day-driven policy adjustment through the PLM command). PLM differentiation notes in the figure include automated DLPAR adjustment for the POWER4 install base and PLM support on AIX 5.2 on POWER4 and POWER5 systems through a PRPQ.)

Figure 3-12 Comparison of features of PLM and hypervisor

PLM is set up in a partition or on another system running AIX 5L Version 5.2 ML4 or AIX 5L Version 5.3. Linux is not available for either the PLM server or the clients. You can have other applications installed on the partition or system that runs PLM as well.

A single instance of PLM can only manage a single server.

To configure PLM, you can use the command line interface or the Web-based System Manager for graphical setup.

Figure 3-13 on page 74 shows an overview of the components of PLM.


Figure 3-13 PLM overview

The policy file defines managed partitions, their entitlements, and their thresholds, and organizes the partitions into groups. Every node managed by PLM must be defined in the policy file along with several associated attribute values:

- Optional maximum, minimum, and guaranteed resource values
- The relative priority or weight of the partition
- Upper and lower load thresholds for resource event notification

For each resource (processor and memory), the administrator specifies an upper and a lower threshold for which a resource event should be generated. You can also choose to manage only one resource.

Partitions that have reached an upper threshold become resource requesters. Partitions that have reached a lower threshold become resource donors. When a request for a resource is received, it is honored by taking resources from one of three sources when the requester has not reached its maximum value:

- A pool of free, unallocated resources
- A resource donor
- A lower priority partition with excess resources over its entitled amount


As long as there are resources available in the free pool, they are given to the requester. If there are no resources in the free pool, the list of resource donors is checked. If there is a resource donor, the resource is moved from the donor to the requester. The amount of resource moved is the minimum of the delta values for the two partitions, as specified by the policy. If there are no resource donors, the list of excess users is checked.

When determining whether resources can be taken from an excess user, the weight of the partition is used to define the priority. Higher priority partitions can take resources from lower priority partitions. A partition's priority is defined as the ratio of its excess to its weight, where excess is expressed as (current amount - desired amount) and weight is the policy-defined weight. A lower value for this ratio represents a higher priority. Figure 3-14 shows an overview of the process for partitions.
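As an illustrative sketch of this calculation (the partition names and values here are hypothetical, not taken from the figure): if excess user LPAR_A holds 1.0 processing unit above its desired amount with a weight of 2, its ratio is 1.0 / 2 = 0.5; if excess user LPAR_B holds 1.0 processing unit above its desired amount with a weight of 1, its ratio is 1.0 / 1 = 1.0. LPAR_B has the larger ratio and therefore the lower priority, so PLM takes resources from LPAR_B first.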

process for partitions.

Figure 3-14 PLM resource distribution for partitions

In Figure 3-14, all partitions are capped partitions. LPAR3 is under heavy load and over its high processor average threshold value, becoming a requester. There are no free resources in the free pool and no donor partitions available. PLM then checks the excess list to find a partition that has resources allocated over its guaranteed value and with a lower priority. Calculating the priority, LPAR1 has the highest ratio number and therefore the lowest priority. PLM deallocates resources from LPAR1 and allocates them to LPAR3.


If the request for a resource cannot be honored, it is queued and re-evaluated when resources become available. A partition cannot fall below its minimum or rise above its maximum definition for each resource.

The policy file, once loaded, is static and has no knowledge of the nature of the workload on the managed partitions. A partition's priority does not change with the arrival of high priority work. The priority of partitions can only be changed by an action external to PLM by loading a new policy.

PLM handles memory and both types of processor partitions (dedicated and shared processor). All the partitions in a group must be of the same processor type.

3.4.1 Managing memory

PLM manages memory by moving logical memory blocks across partitions. To determine when there is demand for memory, PLM uses two metrics:

- Utilization percentage (the ratio of memory in use to available memory)
- The page replacement rate

For workloads that result in significant file caching, the memory utilization on AIX can never fall below the specified lower threshold. With this type of workload, a partition can never become a memory donor, even if the memory is not currently being used.

In the absence of memory donors, PLM can only take memory from excess users. Because the presence of memory donors cannot be guaranteed and is unlikely with some workloads, memory management with PLM may only be effective if there are excess users present. One way to ensure the presence of excess users is to assign each managed partition a low guaranteed value, such that it will always have more than its guaranteed amount. With this sort of policy, PLM can redistribute memory to partitions based on their demand and priority.

3.4.2 Managing processors

For dedicated processor partitions, PLM moves physical processors, one at a time, from partitions that are not using them to partitions that have demand for them. This movement enables dedicated processor partitions that are running AIX 5L Version 5.2 and AIX 5L Version 5.3 to better use their resources. If one partition needs more processor capacity, PLM automatically moves processors from a partition that has idle capacity.

For shared processor partitions, PLM manages the entitled capacity and the number of virtual processors for capped or uncapped partitions. When a partition has requested more processor capacity, PLM increases the entitled capacity for the requesting partition if additional processor capacity is available. For uncapped partitions, PLM can increase the number of virtual processors to increase the partition's potential to consume processor resources under high load conditions. Conversely, PLM also decreases entitled capacity and the number of virtual processors under low-load conditions to more efficiently use the underlying physical processors.

With the goal of maximizing a partition's and the system's ability to consume available processor resources, the administrator now can:

1. Configure partitions that have high workload peaks as uncapped partitions with a large number of virtual processors. This approach has the advantage of allowing these partitions to consume more processor resource when it is needed and available, with very low latency and no dynamic reconfiguration.

For example, consider a 16-way system that uses two highly loaded partitions that are configured with eight virtual processors each, in which case, all physical processors could have been fully used. The disadvantage of this approach is that when these partitions are consuming at or below their desired capacity, there is an overhead that is associated with the large number of virtual processors defined.

2. Use PLM to vary the capacity and number of virtual processors for the partitions. This approach has the advantages of allowing partitions to consume all of the available processor resource on demand, and it maintains a more optimal number of virtual processors. The disadvantage to this approach is that since PLM performs dynamic reconfiguration operations to shift capacity to and from partitions, there is a much higher latency for the reallocation of resources. Though this approach offers the potential to more fully use the available resource in some cases, it significantly increases the latency for redistribution of available capacity under a dynamic workload, because dynamic reconfiguration operations are required.

3.4.3 Limitations and considerations

Consider the following limitations when managing your system with PLM:

- You can use PLM in partitions that are running AIX 5L Version 5.2 ML4 or AIX 5L Version 5.3. Linux or i5/OS support is not available.
- A single instance of PLM can only manage a single server. However, you can run multiple instances of PLM on a single system, each managing a different server.
- PLM cannot move I/O resources between partitions. Only processor and memory resources can be managed by PLM.
- PLM requires HMC Release 3 Version 2.6 or higher on an HMC and an eServer p5 system.


3.4.4 Installing Partition Load Manager

To install PLM, complete the following steps:

1. Mount the PLM CD to your system.
2. Using either the installp command or the smitty install_latest fast path, install the following filesets:
   - plm.server
   - plm.sysmgt
3. When PLM is installed, install and configure OpenSSH.
4. Run plmsetup for the managed partitions.
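For example, assuming the installation media is mounted as device /dev/cd0 (an illustrative command only; substitute the device or directory that holds the filesets on your system), a command similar to the following installs both filesets:

   installp -acgXd /dev/cd0 plm.server plm.sysmgt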

3.4.5 Querying partition status

Any user can run the xlplm command to obtain status information for running instances of PLM. To query the status of all running instances of PLM, type the following command:

   xlplm -Q

A list of the instances that are running is displayed (see Figure 3-15). If there are no instances running, no output is displayed.

Figure 3-15 PLM, Show LPAR Statistics

You can allocate resources to specific partitions and even reserve resources for specific partitions regardless of when those partitions will use the resources. The xlplm -R command allows you to reserve and allocate resources from a group of managed partitions. Those resources that are reserved can be used to create a new unmanaged partition or to make room for a new partition to enter the managed group.

Reserved resources will not be allocated to any existing partition in a group unless they are first released. If a previously offline partition comes online and enters a managed group, any reserved resources within that group are removed automatically from the collection of reserved resources, called the free pool, and are assigned to the new partition. If the reserved resources are used instead to create a new, unmanaged partition, they can be released back to the group after the new partition has booted and can then be reclaimed automatically by the managed group if they later become available and are needed.

The requested reservation amount is absolute, so a reserve command can result in either a reserve or a release, depending on the current reservation amount. The minimum allowed changes in the reservation amounts are:

- 1 MB for memory
- 1 processor unit for a dedicated processor group
- 0.01 processor unit for a shared processor group

When you reserve resources, the free pool for the target group is first checked for available resources. If the free pool has enough resources to satisfy the request, the requested amount is removed from the free pool. If the free pool does not have enough resources to satisfy the request, resources are taken from one or more partitions with the lowest workload or the least need for the resources. A reservation request fails if the requested amount is more than the minimum that is allowed for the group.

3.4.6 Managing memory resource requests

The following is an example of how to use PLM to manage memory resource requests. This example shows how PLM responds to memory resource requests between two partitions:

The two partitions, LP0 and LP1, have these attributes:

LP0:
   Minimum = 1024 MB
   Guaranteed = 1024 MB
   Maximum = 4096 MB
   Weight = 2
   Current Entitlement = 1024 MB

LP1:
   Minimum = 1024 MB
   Guaranteed = 1024 MB
   Maximum = 4096 MB
   Weight = 1
   Current Entitlement = 1024 MB


The total amount of memory that the PLM manages is 5120 MB. With each partition's current memory allocation, shown as Current Entitlement = 1024 MB, that leaves 3072 MB that the PLM assumes is unallocated and available.

If both partitions become loaded in terms of memory use, then events that demand more memory resources are generated and sent to the PLM server. For each event received, PLM tags the partition as a taker. At the same time, PLM checks whether the partition is currently using more than its guaranteed amount. If so, the partition is tagged as an excess user. Because there are available resources, PLM satisfies the request immediately and allocates memory in the amount of mem_increment (defined either in the PLM policy or by the internal default value) to the partition from the available memory. After the available memory is depleted, the new entitlement allocations are:

LP0:
   Current Entitlement = 2560 MB

LP1:
   Current Entitlement = 2560 MB

Even with the current allocations, the partitions continue to generate events that demand more memory resources.

For each event, PLM continues to tag the partition as a taker and excess user because the partition has more resources allocated than are shown as its guaranteed entitlement. However, because there are no available resources, the request is queued if there are no other resource donors or any other excess users. When the request from the second partition is received, it is also marked as a taker and an excess user. Because there is an excess user already queued, PLM can satisfy the resource request.

Because both LP0 and LP1 are takers and excess users, PLM uses the weight that is associated with each as the determining factor of how the extra entitlement (the sum of the current entitlement for each partition minus the sum of each partition's guaranteed allotment) will be distributed between the two partitions.

In this example, because the weights are 2 and 1, of the extra 3072 MB the LP0 partition should be allocated 2048 MB and the LP1 partition should be allocated 1024 MB. PLM assigns mem_increment MB of memory from the LP1 partition to the LP0 partition.

With constant memory requests from each partition, PLM eventually distributes the memory so that current entitlements become the following:

LP0:
   Current Entitlement = 3072 MB

LP1:
   Current Entitlement = 2048 MB
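As a sketch of the arithmetic behind these final values (using only the numbers given above): the extra entitlement is 5120 MB - (1024 MB + 1024 MB) = 3072 MB, and distributing it in proportion to the weights of 2 and 1 gives LP0 a target of 1024 MB + (3072 MB x 2/3) = 3072 MB and LP1 a target of 1024 MB + (3072 MB x 1/3) = 2048 MB.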


3.4.7 Processor resources in a shared partition environment

The following example describes how PLM manages processor resources in a shared partition environment. The two partitions are configured as follows:

LP0:
   Minimum = 0.1
   Guaranteed = 0.5
   Maximum = 2.0
   Max entitlement per virtual processor = 0.8
   Weight = 3
   Current entitlement = 0.1
   Current number of virtual processors = 1

LP1:
   Minimum = 0.1
   Guaranteed = 0.5
   Maximum = 2.0
   Max entitlement per virtual processor = 0.8
   Weight = 1
   Current entitlement = 0.1
   Current number of virtual processors = 1

The total amount of processor entitlement managed by PLM is 2.0. The amount that is currently allocated to each partition, 0.1, leaves 1.8 of unallocated processor entitlement that PLM can distribute.

If both partitions begin running processor-intensive jobs, they request more processor entitlement by sending requests to PLM. PLM then tags the demanding partitions as takers and as excess users if the current entitlement is above its guaranteed value.

In addition to managing processor entitlement, PLM also manages the number of virtual processors. When either partition's current entitlement exceeds 0.8, a virtual processor is also added.

In this example, PLM assigns the available entitlement until the partitions reach the following state:

LP0:
   Current entitlement = 1.0
   Current number of virtual processors = 2

LP1:
   Current entitlement = 1.0
   Current number of virtual processors = 2

If the partitions continue to demand more resource, then PLM redistributes the assigned entitlement based on the weight and excess entitlement. In this example, between the LP0 partition and the LP1 partition, the total excess amount is 1.0. Because LP0 has a weight of 3 and LP1 has a weight of 1, PLM removes processor entitlement from the LP1 partition and reassigns it to the LP0 partition. If both partitions remain busy, then the resource allocation becomes the following:

LP0:
   Current entitlement = 1.25
   Current number of virtual processors = 2

LP1:
   Current entitlement = 0.75
   Current number of virtual processors = 2
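As a sketch of the arithmetic behind these final values (using only the numbers given above): the excess entitlement is 2.0 - (0.5 + 0.5) = 1.0, and distributing it in proportion to the weights of 3 and 1 gives LP0 an entitlement of 0.5 + (1.0 x 3/4) = 1.25 and LP1 an entitlement of 0.5 + (1.0 x 1/4) = 0.75.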

