Red Hat Enterprise Linux 5 Cluster Administration
Chapter 5. Configuring Red Hat Cluster With system-config-cluster
This chapter describes how to configure Red Hat Cluster software using system-config-cluster, and consists of the following sections:
Section 5.1, “Configuration Tasks”
Section 5.2, “Starting the Cluster Configuration Tool”
Section 5.3, “Configuring Cluster Properties”
Section 5.4, “Configuring Fence Devices”
Section 5.5, “Adding and Deleting Members”
Section 5.6, “Configuring a Failover Domain”
Section 5.7, “Adding Cluster Resources”
Section 5.8, “Adding a Cluster Service to the Cluster”
Section 5.9, “Propagating The Configuration File: New Cluster”
Section 5.10, “Starting the Cluster Software”
Note
While system-config-cluster provides several convenient tools for configuring and managing a Red Hat Cluster, the newer tool, Conga, is more comprehensive and provides more convenience and flexibility. You may want to consider using Conga instead (refer to Chapter 3, Configuring Red Hat Cluster With Conga and Chapter 4, Managing Red Hat Cluster With Conga).
5.1. Configuration Tasks
Configuring Red Hat Cluster software with system-config-cluster consists of the following steps:
1. Starting the Cluster Configuration Tool, system-config-cluster. Refer to Section 5.2, “Starting the Cluster Configuration Tool”.
2. Configuring cluster properties. Refer to Section 5.3, “Configuring Cluster Properties”.
3. Creating fence devices. Refer to Section 5.4, “Configuring Fence Devices”.
4. Creating cluster members. Refer to Section 5.5, “Adding and Deleting Members”.
5. Creating failover domains. Refer to Section 5.6, “Configuring a Failover Domain”.
6. Creating resources. Refer to Section 5.7, “Adding Cluster Resources”.
7. Creating cluster services. Refer to Section 5.8, “Adding a Cluster Service to the Cluster”.
8. Propagating the configuration file to the other nodes in the cluster. Refer to Section 5.9, “Propagating The Configuration File: New Cluster”.
9. Starting the cluster software. Refer to Section 5.10, “Starting the Cluster Software”.
5.2. Starting the Cluster Configuration Tool
You can start the Cluster Configuration Tool by logging in to a cluster node as root with the ssh -Y command and issuing the system-config-cluster command. For example, to start the Cluster Configuration Tool on cluster node nano-01, do the following:
1. Log in to a cluster node and run system-config-cluster. For example:
$ ssh -Y root@nano-01
.
.
.
# system-config-cluster
2. If this is the first time you have started the Cluster Configuration Tool, the program prompts you to either open an existing configuration or create a new one. Click Create New Configuration to start a new configuration file (refer to Figure 5.1, “Starting a New Configuration File”).

Figure 5.1. Starting a New Configuration File
Note
The Cluster Management tab for the Red Hat Cluster Suite management GUI is available after you save the configuration file with the Cluster Configuration Tool, exit, and restart the Red Hat Cluster Suite management GUI (system-config-cluster). (The Cluster Management tab displays the status of the cluster service manager, cluster nodes, and resources, and shows statistics concerning cluster service operation. To manage the cluster system further, choose the Cluster Configuration tab.)
3. Clicking Create New Configuration causes the New Configuration dialog box to be displayed (refer to Figure 5.2, “Creating A New Configuration”). The New Configuration dialog box provides a text box for cluster name and the following checkboxes: Custom Configure Multicast and Use a Quorum Disk. In most circumstances you only need to configure the cluster name.
Note
Choose the cluster name carefully. The only way to change the name of a Red Hat cluster is to create a new cluster configuration with the new name.
Custom Configure Multicast
Red Hat Cluster software chooses a multicast address for cluster management communication among cluster nodes. If you need to use a specific multicast address, click the Custom Configure Multicast checkbox and enter a multicast address in the Address text boxes.
Note
IPv6 is not supported for Cluster Suite in Red Hat Enterprise Linux 5.
If you do not specify a multicast address, the Red Hat Cluster software (specifically, cman, the Cluster Manager) creates one. It forms the upper 16 bits of the multicast address with 239.192 and forms the lower 16 bits based on the cluster ID.
Note
The cluster ID is a unique identifier that cman generates for each cluster. To view the cluster ID, run the cman_tool status command on a cluster node.
If you do specify a multicast address, you should use the 239.192.x.x series that cman uses. Using a multicast address outside that range may cause unpredictable results. For example, 224.0.0.x (which is “All hosts on the network”) may not be routed correctly, or may not be routed at all by some hardware.
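The derivation described above can be illustrated with shell arithmetic. This is only a sketch of the documented scheme, not a cman utility, and the cluster ID used here is a made-up example value (on a real node, the cluster ID is shown by cman_tool status):

```shell
# Illustration only: derive a 239.192.x.x multicast address from a cluster ID,
# following the scheme described above (upper 16 bits fixed at 239.192, lower
# 16 bits taken from the cluster ID). The ID below is an arbitrary example.
cluster_id=1234

octet3=$(( (cluster_id >> 8) & 0xFF ))  # high byte of the low 16 bits
octet4=$(( cluster_id & 0xFF ))         # low byte

printf '239.192.%d.%d\n' "$octet3" "$octet4"
```

For the example ID of 1234, this prints 239.192.4.210.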
Note
If you specify a multicast address, make sure that you check the configuration of routers that cluster packets pass through. Some routers may take a long time to learn addresses, seriously impacting cluster performance.
Use a Quorum Disk
If you need to use a quorum disk, click the Use a Quorum Disk checkbox and enter quorum disk parameters. The following quorum-disk parameters are available in the dialog box if you enable Use a Quorum Disk: Interval, TKO, Votes, Minimum Score, Device, Label, and Quorum Disk Heuristics. Table 5.1, “Quorum-Disk Parameters” describes the parameters.
Important
Quorum-disk parameters and heuristics depend on the site environment and any special requirements. To understand the use of quorum-disk parameters and heuristics, refer to the qdisk(5) man page. If you require assistance understanding and using quorum disk, contact an authorized Red Hat support representative.
Note
It is probable that configuring a quorum disk requires changing quorum-disk parameters after the initial configuration. The Cluster Configuration Tool (system-config-cluster) provides only the display of quorum-disk parameters after initial configuration. If you need to configure quorum disk, consider using Conga instead; Conga allows modification of quorum disk parameters.
Figure 5.2. Creating A New Configuration
4. When you have completed entering the cluster name and other parameters in the New Configuration dialog box, click OK. Clicking OK starts the Cluster Configuration Tool, displaying a graphical representation of the configuration (Figure 5.3, “The Cluster Configuration Tool”).

Figure 5.3. The Cluster Configuration Tool
Table 5.1. Quorum-Disk Parameters

Use a Quorum Disk — Enables quorum disk. Enables quorum-disk parameters in the New Configuration dialog box.
Interval — The frequency of read/write cycles, in seconds.
TKO — The number of cycles a node must miss in order to be declared dead.
Votes — The number of votes the quorum daemon advertises to CMAN when it has a high enough score.
Minimum Score — The minimum score for a node to be considered "alive". If omitted or set to 0, the default function, floor((n+1)/2), is used, where n is the sum of the heuristics scores. The Minimum Score value must never exceed the sum of the heuristic scores; otherwise, the quorum disk cannot be available.
Device — The storage device the quorum daemon uses. The device must be the same on all nodes.
Label — Specifies the quorum disk label created by the mkqdisk utility. If this field contains an entry, the label overrides the Device field. If this field is used, the quorum daemon reads /proc/partitions and checks for qdisk signatures on every block device found, comparing the label against the specified label. This is useful in configurations where the quorum device name differs among nodes.
Quorum Disk Heuristics — Each heuristic comprises the following fields:
Program — The program used to determine if this heuristic is alive. This can be anything that can be executed by /bin/sh -c. A return value of 0 indicates success; anything else indicates failure. This field is required.
Score — The weight of this heuristic. Be careful when determining scores for heuristics. The default score for each heuristic is 1.
Interval — The frequency (in seconds) at which the heuristic is polled. The default interval for every heuristic is 2 seconds.
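The default Minimum Score function noted in the table, floor((n+1)/2), maps directly to shell integer arithmetic. In this sketch the three heuristic scores are arbitrary example values, not recommendations:

```shell
# Default quorum-disk minimum score: floor((n+1)/2), where n is the sum of the
# configured heuristic scores. The scores below are arbitrary example values.
score1=1; score2=1; score3=2

n=$(( score1 + score2 + score3 ))   # n = 4
min_score=$(( (n + 1) / 2 ))        # shell integer division floors: (4+1)/2 = 2

echo "$min_score"
```

As the table notes, a Minimum Score configured higher than n (here, 4) would make the quorum disk unavailable.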
5.3. Configuring Cluster Properties
In addition to configuring cluster parameters in the preceding section (Section 5.2, “Starting the Cluster Configuration Tool”), you can configure the following cluster properties: Cluster Alias (optional), a Config Version (optional), and Fence Daemon Properties. To configure cluster properties, follow these steps:
1. At the left frame, click Cluster.
2. At the bottom of the right frame (labeled Properties), click the Edit Cluster Properties button. Clicking that button causes a Cluster Properties dialog box to be displayed. The Cluster Properties dialog box presents text boxes for Cluster Alias, Config Version, and two Fence Daemon Properties parameters: Post-Join Delay and Post-Fail Delay.
3. (Optional) At the Cluster Alias text box, specify a cluster alias for the cluster. The default cluster alias is set to the true cluster name provided when the cluster is set up (refer to Section 5.2, “Starting the Cluster Configuration Tool”). The cluster alias should be descriptive enough to distinguish it from other clusters and systems on your network (for example, nfs_cluster or httpd_cluster). The cluster alias cannot exceed 15 characters.
4. (Optional) The Config Version value is set to 1 by default and is automatically incremented each time you save your cluster configuration. However, if you need to set it to another value, you can specify it at the Config Version text box.
5. Specify the Fence Daemon Properties parameters: Post-Join Delay and Post-Fail Delay.
a. The Post-Join Delay parameter is the number of seconds the fence daemon (fenced) waits before fencing a node after the node joins the fence domain. The Post-Join Delay default value is 3. A typical setting for Post-Join Delay is between 20 and 30 seconds, but can vary according to cluster and network performance.
b. The Post-Fail Delay parameter is the number of seconds the fence daemon (fenced) waits before fencing a node (a member of the fence domain) after the node has failed. The Post-Fail Delay default value is 0. Its value may be varied to suit cluster and network performance.
Note
For more information about Post-Join Delay and Post-Fail Delay, refer to the fenced(8) man page.
6. Save cluster configuration changes by selecting File => Save.
5.4. Configuring Fence Devices
Configuring fence devices for the cluster consists of selecting one or more fence devices and specifying fence-device-dependent parameters (for example, name, IP address, login, and password).
To configure fence devices, follow these steps:
1. Click Fence Devices. At the bottom of the right frame (labeled Properties), click the Add a Fence Device button. Clicking Add a Fence Device causes the Fence Device Configuration dialog box to be displayed (refer to Figure 5.4, “Fence Device Configuration”).

Figure 5.4. Fence Device Configuration
2. At the Fence Device Configuration dialog box, click the drop-down box under Add a New Fence Device and select the type of fence device to configure.
3. Specify the information in the Fence Device Configuration dialog box according to the type of fence device. Refer to Appendix B, Fence Device Parameters for more information about fence device parameters.
4. Click OK.
5. Choose File => Save to save the changes to the cluster configuration.
5.5. Adding and Deleting Members
The procedure to add a member to a cluster varies depending on whether the cluster is a newly configured cluster or a cluster that is already configured and running. To add a member to a new cluster, refer to Section 5.5.1, “Adding a Member to a Cluster”. To add a member to an existing cluster, refer to Section 5.5.2, “Adding a Member to a Running Cluster”. To delete a member from a cluster, refer to Section 5.5.3, “Deleting a Member from a Cluster”.
5.5.1. Adding a Member to a Cluster
To add a member to a new cluster, follow these steps:
1. Click Cluster Node.
2. At the bottom of the right frame (labeled Properties), click the Add a Cluster Node button. Clicking that button causes a Node Properties dialog box to be displayed. The Node Properties dialog box presents text boxes for Cluster Node Name and Quorum Votes (refer to Figure 5.5, “Adding a Member to a New Cluster”).
Figure 5.5. Adding a Member to a New Cluster
3. At the Cluster Node Name text box, specify a node name. The entry can be a name or an IP address of the node on the cluster subnet.
Note
Each node must be on the same subnet as the node from which you are running the Cluster Configuration Tool and must be defined either in DNS or in the /etc/hosts file of each cluster node.
Note
The node on which you are running the Cluster Configuration Tool must be explicitly added as a cluster member; the node is not automatically added to the cluster configuration as a result of running the Cluster Configuration Tool.
4. Optionally, at the Quorum Votes text box, you can specify a value; however, in most configurations you can leave it blank. Leaving the Quorum Votes text box blank causes the quorum votes value for that node to be set to the default value of 1.
5. Click OK.
6. Configure fencing for the node:
a. Click the node that you added in the previous step.
b. At the bottom of the right frame (below Properties), click Manage Fencing For This Node. Clicking Manage Fencing For This Node causes the Fence Configuration dialog box to be displayed.
c. At the Fence Configuration dialog box, at the bottom of the right frame (below Properties), click Add a New Fence Level. Clicking Add a New Fence Level causes a fence-level element (for example, Fence-Level-1, Fence-Level-2, and so on) to be displayed below the node in the left frame of the Fence Configuration dialog box.
d. Click the fence-level element.
e. At the bottom of the right frame (below Properties), click Add a New Fence to this Level. Clicking Add a New Fence to this Level causes the Fence Properties dialog box to be displayed.
f. At the Fence Properties dialog box, click the Fence Device Type drop-down box and select the fence device for this node. Also, provide additional information required (for example, Port and Switch for an APC Power Device).
g. At the Fence Properties dialog box, click OK. Clicking OK causes a fence device element to be displayed below the fence-level element.
h. To create additional fence devices at this fence level, return to step 6d. Otherwise, proceed to the next step.
i. To create additional fence levels, return to step 6c. Otherwise, proceed to the next step.
j. If you have configured all the fence levels and fence devices for this node, click Close.
7. Choose File => Save to save the changes to the cluster configuration.
5.5.2. Adding a Member to a Running Cluster
The procedure for adding a member to a running cluster depends on whether the cluster contains only two nodes or more than two nodes. To add a member to a running cluster, follow the steps in one of the following sections according to the number of nodes in the cluster:
For clusters with only two nodes — Section 5.5.2.1, “Adding a Member to a Running Cluster That Contains Only Two Nodes”
For clusters with more than two nodes — Section 5.5.2.2, “Adding a Member to a Running Cluster That Contains More Than Two Nodes”
5.5.2.1. Adding a Member to a Running Cluster That Contains Only Two Nodes
To add a member to an existing cluster that is currently in operation, and contains only two nodes, follow these steps:
1. Add the node and configure fencing for it as in Section 5.5.1, “Adding a Member to a Cluster”.
2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster.
3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node.
4. At the Red Hat Cluster Suite management GUI Cluster Status Tool tab, disable each service listed under Services.
5. Stop the cluster software on the two running nodes by running the following commands at each node in this order:
a. service rgmanager stop
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service cman stop
6. Start cluster software on all cluster nodes (including the added one) by running the following commands in this order:
a. service cman start
b. service clvmd start, if CLVM has been used to create clustered volumes
c. service gfs start, if you are using Red Hat GFS
d. service rgmanager start
7. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
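The ordering constraints in steps 5 and 6 above can be captured in a small sketch. To stay safe, this version only prints the service commands rather than executing them, and it assumes (as the steps note) that the gfs and clvmd entries apply only if you use Red Hat GFS and clustered LVM volumes:

```shell
# Sketch: print the ordered stop/start sequences from the steps above instead
# of running them. Remove gfs and/or clvmd from the lists if those services
# are not used on your cluster.
STOP_ORDER="rgmanager gfs clvmd cman"
START_ORDER="cman clvmd gfs rgmanager"

echo "# on each of the two running nodes:"
for svc in $STOP_ORDER; do
    echo "service $svc stop"
done

echo "# then on all cluster nodes, including the new member:"
for svc in $START_ORDER; do
    echo "service $svc start"
done
```

Note that the start order is the exact reverse of the stop order: cman comes up first and goes down last.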
5.5.2.2. Adding a Member to a Running Cluster That Contains More Than Two Nodes
To add a member to an existing cluster that is currently in operation, and contains more than two nodes, follow these steps:
1. Add the node and configure fencing for it as in Section 5.5.1, “Adding a Member to a Cluster”.
2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster.
3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node.
4. Start cluster services on the new node by running the following commands in this order:
a. service cman start
b. service clvmd start, if CLVM has been used to create clustered volumes
c. service gfs start, if you are using Red Hat GFS
d. service rgmanager start
5. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
5.5.3. Deleting a Member from a Cluster
To delete a member from an existing cluster that is currently in operation, follow these steps:
1. At one of the running nodes (not the node to be removed), run the Red Hat Cluster Suite management GUI. At the Cluster Status Tool tab, under Services, disable or relocate each service that is running on the node to be deleted.
2. Stop the cluster software on the node to be deleted by running the following commands at that node in this order:
a. service rgmanager stop
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service cman stop
3. At the Cluster Configuration Tool (on one of the running members), delete the member as follows:
a. If necessary, click the triangle icon to expand the Cluster Nodes property.
b. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties), click the Delete Node button.
c. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion (Figure 5.6, “Confirm Deleting a Member”).

Figure 5.6. Confirm Deleting a Member

d. At that dialog box, click Yes to confirm deletion.
e. Propagate the updated configuration by clicking the Send to Cluster button.
(Propagating the updated configuration automatically saves the configuration.)
4. Stop the cluster software on the remaining running nodes by running the following commands at each node in this order:
a. service rgmanager stop
b. service gfs stop, if you are using Red Hat GFS
c. service clvmd stop, if CLVM has been used to create clustered volumes
d. service cman stop
5. Start cluster software on all remaining cluster nodes by running the following commands in this order:
a. service cman start
b. service clvmd start, if CLVM has been used to create clustered volumes
c. service gfs start, if you are using Red Hat GFS
d. service rgmanager start
6. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
5.5.3.1. Removing a Member from a Cluster at the Command Line
If desired, you can also manually relocate and remove cluster members by using the clusvcadm command at a shell prompt.
1. To prevent service downtime, any services running on the member to be removed must be relocated to another node in the cluster by running the following command:
clusvcadm -r cluster_service_name -m cluster_node_name
Where cluster_service_name is the name of the service to be relocated and cluster_node_name is the name of the member to which the service will be relocated.
2. Stop the cluster software on the node to be removed by running the following commands at that node in this order:
a. service rgmanager stop
b. service gfs stop and/or service gfs2 stop, if you are using gfs, gfs2, or both
c. umount -a -t gfs and/or umount -a -t gfs2, if you are using either (or both) in conjunction with rgmanager
d. service clvmd stop, if CLVM has been used to create clustered volumes
e. service cman stop
3. To ensure that the removed member does not rejoin the cluster after it reboots, run the following set of commands:
chkconfig cman off
chkconfig rgmanager off
chkconfig clvmd off
chkconfig gfs off
chkconfig gfs2 off
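The command-line removal procedure above can be summarized in a sketch that prints each command rather than executing it. The service name webby and destination node node2 are hypothetical placeholders; substitute names from your own cluster:

```shell
# Sketch: print the node-removal sequence from steps 1-3 above, without
# executing anything. "webby" and "node2" are made-up placeholder names.
SERVICE=webby   # a service currently running on the node being removed
DEST=node2      # an existing member to relocate that service to

echo "clusvcadm -r $SERVICE -m $DEST"         # step 1: relocate the service

for svc in rgmanager gfs gfs2 clvmd cman; do  # step 2: stop cluster software
    echo "service $svc stop"                  # (plus gfs/gfs2 unmounts, if used)
done

for svc in cman rgmanager clvmd gfs gfs2; do  # step 3: prevent rejoin on reboot
    echo "chkconfig $svc off"
done
```

Repeat the clusvcadm line for each service hosted on the departing node before stopping its cluster software.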
5.6. Configuring a Failover Domain
A failover domain is a named subset of cluster nodes that are eligible to run a cluster service in the event of a node failure. A failover domain can have the following characteristics:
Unrestricted — Allows you to specify that a subset of members are preferred, but that a cluster service assigned to this domain can run on any available member.
Restricted — Allows you to restrict the members that can run a particular cluster service. If none of the members in a restricted failover domain are available, the cluster service cannot be started (either manually or by the cluster software).
Unordered — When a cluster service is assigned to an unordered failover domain, the member on which the cluster service runs is chosen from the available failover domain members with no priority ordering.
Ordered — Allows you to specify a preference order among the members of a failover domain. The member at the top of the list is the most preferred, followed by the second member in the list, and so on.
Note
Changing a failover domain configuration has no effect on currently running services.
Note
Failover domains are not required for operation.
By default, failover domains are unrestricted and unordered.
In a cluster with several members, using a restricted failover domain can minimize the work required to set up the cluster to run a cluster service (such as httpd, which requires you to set up the configuration identically on all members that run the cluster service). Instead of setting up the entire cluster to run the cluster service, you need to set up only the members in the restricted failover domain that you associate with the cluster service.
Note
To configure a preferred member, you can create an unrestricted failover domain comprising only one cluster member. Doing that causes a cluster service to run on that cluster member primarily (the preferred member), but allows the cluster service to fail over to any of the other members.
The following sections describe adding a failover domain, removing a failover domain, and removing members from a failover domain:
Section 5.6.1, “Adding a Failover Domain”
Section 5.6.2, “Removing a Failover Domain”
Section 5.6.3, “Removing a Member from a Failover Domain”
5.6.1. Adding a Failover Domain
To add a failover domain, follow these steps:
1. At the left frame of the Cluster Configuration Tool, click Failover Domains.
2. At the bottom of the right frame (labeled Properties), click the Create a Failover Domain button. Clicking the Create a Failover Domain button causes the Add Failover Domain dialog box to be displayed.
3. At the Add Failover Domain dialog box, specify a failover domain name at the Name for new Failover Domain text box and click OK. Clicking OK causes the Failover Domain Configuration dialog box to be displayed (Figure 5.7, “Failover Domain Configuration: Configuring a Failover Domain”).
Note
The name should be descriptive enough to distinguish its purpose relative to other names used in your cluster.
Figure 5.7. Failover Domain Configuration: Configuring a Failover Domain
4. Click the Available Cluster Nodes drop-down box and select the members for this failover domain.
5. To restrict failover to members in this failover domain, click (check) the Restrict Failover To This Domains Members checkbox. (With Restrict Failover To This Domains Members checked, services assigned to this failover domain fail over only to nodes in this failover domain.)
6. To prioritize the order in which the members in the failover domain assume control of a failed cluster service, follow these steps:
a. Click (check) the Prioritized List checkbox (Figure 5.8, “Failover Domain Configuration: Adjusting Priority”). Clicking Prioritized List causes the Priority column to be displayed next to the Member Node column.

Figure 5.8. Failover Domain Configuration: Adjusting Priority

b. For each node that requires a priority adjustment, click the node listed in the Member Node/Priority columns and adjust priority by clicking one of the Adjust Priority arrows. Priority is indicated by the position in the Member Node column and the value in the Priority column. The node priorities are listed highest to lowest, with the highest priority node at the top of the Member Node column (having the lowest Priority number).
7. Click Close to create the domain.
8. At the Cluster Configuration Tool, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
New cluster — If this is a new cluster, choose File => Save to save the changes to the cluster configuration.
Running cluster — If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration.
5.6.2. Removing a Failover Domain
To remove a failover domain, follow these steps:
1. At the left frame of the Cluster Configuration Tool, click the failover domain that you want to delete (listed under Failover Domains).
2. At the bottom of the right frame (labeled Properties), click the Delete Failover Domain button. Clicking the Delete Failover Domain button causes a warning dialog box to be displayed asking if you want to remove the failover domain. Confirm that the failover domain identified in the warning dialog box is the one you want to delete and click Yes. Clicking Yes causes the failover domain to be removed from the list of failover domains under Failover Domains in the left frame of the Cluster Configuration Tool.
3. At the Cluster Configuration Tool, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
New cluster — If this is a new cluster, choose File => Save to save the changes to the cluster configuration.
Running cluster — If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration.
5.6.3. Removing a Member from a Failover Domain
To remove a member from a failover domain, follow these steps:
1. At the left frame of the Cluster Configuration Tool, click the failover domain that you want to change (listed under Failover Domains).
2. At the bottom of the right frame (labeled Properties), click the Edit Failover Domain Properties button. Clicking the Edit Failover Domain Properties button causes the Failover Domain Configuration dialog box to be displayed (Figure 5.7, “Failover Domain Configuration: Configuring a Failover Domain”).
3. At the Failover Domain Configuration dialog box, in the Member Node column, click the node name that you want to delete from the failover domain and click the Remove Member from Domain button. Clicking Remove Member from Domain removes the node from the Member Node column. Repeat this step for each node that is to be deleted from the failover domain. (Nodes must be deleted one at a time.)
4. When finished, click Close.
5. At the Cluster Configuration Tool, perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running:
New cluster — If this is a new cluster, choose File => Save to save the changes to the cluster configuration.
Running cluster — If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration.
5.7. Adding Cluster Resources
To specify a resource for a cluster service, follow these steps:
1. On the Resources property of the Cluster Configuration Tool, click the Create a Resource button. Clicking the Create a Resource button causes the Resource Configuration dialog box to be displayed.
2. At the Resource Configuration dialog box, under Select a Resource Type, click the drop-down box. At the drop-down box, select a resource to configure. Appendix C, HA Resource Parameters describes resource parameters.
3. When finished, click OK.
4. Choose File => Save to save the change to the /etc/cluster/cluster.conf
configuration file.
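To make the result of step 4 concrete, the following is a hand-written sketch (not output captured from the tool) of the kind of resources section that File => Save writes into /etc/cluster/cluster.conf. The IP address, file system name, device, and mount point are hypothetical examples:

```shell
# Sketch of a cluster.conf resources fragment; all values below are
# illustrative assumptions, not values from this manual.
resources_fragment='<rm>
  <resources>
    <ip address="10.11.4.240" monitor_link="1"/>
    <fs name="webfs" device="/dev/sdb1" mountpoint="/var/www" fstype="ext3"/>
  </resources>
</rm>'
printf '%s\n' "$resources_fragment"
```

Each resource defined here becomes available for attachment to a service in Section 5.8.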
5.8. Adding a Cluster Service to the Cluster
To add a cluster service to the cluster, follow these steps:
1. At the left frame, click Services.
2. At the bottom of the right frame (labeled Properties), click the Create a Service button.
Clicking Create a Service causes the Add a Service dialog box to be displayed.
3. At the Add a Service dialog box, type the name of the service in the Name text box and click OK. Clicking OK causes the Service Management dialog box to be displayed (refer to Figure 5.9, “Adding a Cluster Service”).
Note
Use a descriptive name that clearly distinguishes the service from other services in the cluster.
Figure 5.9. Adding a Cluster Service
4. If you want to restrict the members on which this cluster service is able to run, choose a failover domain from the Failover Domain drop-down box. (Refer to Section 5.6, “Configuring a Failover Domain” for instructions on how to configure a failover domain.)
5. Autostart This Service checkbox — This is checked by default. If Autostart This Service is checked, the service is started automatically when a cluster is started and running. If Autostart This Service is not checked, the service must be started manually any time the cluster comes up from the stopped state.
6. Run Exclusive checkbox — This sets a policy wherein the service only runs on nodes that have no other services running on them. For example, for a very busy web server that is clustered for high availability, it would be advisable to keep that service on a node alone with no other services competing for its resources — that is, with Run Exclusive checked.
On the other hand, services that consume few resources (like NFS and Samba) can run together on the same node with little concern over contention for resources. For those types of services you can leave Run Exclusive unchecked.
Note
Circumstances that require enabling Run Exclusive are rare. Enabling Run Exclusive can render a service offline if the node it is running on fails and no other nodes are empty.
7. Select a recovery policy to specify how the resource manager should recover from a service failure. At the upper right of the Service Management dialog box, there are three Recovery Policy options available:
Restart — Restart the service on the node where the service is currently located. The default setting is Restart. If the service cannot be restarted on the current node, the service is relocated.
Relocate — Relocate the service before restarting. Do not restart the node where the service is currently located.
Disable — Do not restart the service at all.
8. Click the Add a Shared Resource to this service button and choose a resource that you have configured in Section 5.7, “Adding Cluster Resources”.
Note
If you are adding a Samba-service resource, connect a Samba-service resource directly to the service, not to a resource within a service. That is, at the Service
Management dialog box, use either Create a new resource for this service or Add a Shared Resource to this service; do not use Attach a new
Private Resource to the Selection or Attach a Shared Resource to
the selection.
9. If needed, you may also create a private resource that becomes a subordinate resource by clicking the Attach a new Private Resource to the Selection button. The process is the same as creating a shared resource described in Section 5.7, “Adding Cluster Resources”. The private resource appears as a child of the shared resource with which you associated it. Click the triangle icon next to the shared resource to display any private resources associated with it.
10. When finished, click OK.
11. Choose File => Save to save the changes to the cluster configuration.
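The choices made above — failover domain, Autostart This Service, Run Exclusive, and the recovery policy — all end up as attributes of a service element in /etc/cluster/cluster.conf. The following is a hand-written sketch of such a stanza; the service, domain, and resource values are hypothetical, not output from the tool:

```shell
# Sketch of a cluster.conf service stanza; "webservice", "webdomain",
# and the referenced IP are illustrative assumptions.
service_fragment='<service name="webservice" domain="webdomain"
         autostart="1" exclusive="0" recovery="restart">
  <ip ref="10.11.4.240"/>
</service>'
printf '%s\n' "$service_fragment"
```

The ip element here uses ref rather than address because it points at a shared resource defined in the resources section (Section 5.7), rather than declaring a private resource inline.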
Note
To verify the existence of the IP service resource used in a cluster service, you must use the
/sbin/ip addr list command on a cluster node. The following output shows the
/sbin/ip addr list command executed on a node running a cluster service:
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1356 qdisc pfifo_fast qlen 1000
link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff
inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0
inet6 fe80::205:5dff:fe9a:d891/64 scope link
inet 10.11.4.240/22 scope global secondary eth0
valid_lft forever preferred_lft forever
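A quick way to script that check is to search the command output for the service address. Below is a minimal sketch (not from the manual): the function reads ip addr list output from stdin, and 10.11.4.240 is the secondary address from the sample output above:

```shell
# Sketch: report whether a given service IP appears in `ip addr list`
# output read from stdin. On a cluster node you would pipe in
# /sbin/ip addr list; here a captured sample stands in for it.
check_service_ip() {
    if grep -q "inet $1/"; then echo present; else echo absent; fi
}

sample='inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0
inet 10.11.4.240/22 scope global secondary eth0'
printf '%s\n' "$sample" | check_service_ip 10.11.4.240   # prints "present"
```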
5.8.1. Relocating a Service in a Cluster
Service relocation functionality allows you to perform maintenance on a cluster member while maintaining application and data availability.
To relocate a service, drag the service icon from the Services tab onto the member icon in the Members tab. The cluster manager stops the service on the member on which it was running and restarts it on the new member.
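For scripted maintenance, the same relocation can be performed from the command line with clusvcadm, the rgmanager administration utility. The service and member names below are hypothetical, and the command is only composed and echoed here, since running it requires a live cluster:

```shell
# Sketch only: compose the relocation command for a hypothetical service.
SERVICE="webservice"          # assumed service name
TARGET="node2.example.com"    # assumed destination member
RELOCATE_CMD="clusvcadm -r $SERVICE -m $TARGET"
echo "$RELOCATE_CMD"          # on a real node, run this, then check clustat
```

After the relocation, clustat shows which member now owns the service.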
5.9. Propagating The Configuration File: New Cluster
For newly defined clusters, you must propagate the configuration file to the cluster nodes as follows:
1. Log in to the node where you created the configuration file.
2. Using the scp command, copy the /etc/cluster/cluster.conf file to all nodes in the cluster.
Note
Propagating the cluster configuration file this way is necessary for the first time a cluster is created. Once a cluster is installed and running, the cluster configuration file is propagated using the Red Hat cluster management GUI Send to Cluster button.
For more information about propagating the cluster configuration using the GUI Send
to Cluster button, refer to Section 6.3, “ Modifying the Cluster Configuration”
.
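Step 2 above can be sketched as a loop over the other cluster nodes. The node names are placeholder examples; with DRY_RUN=1 the scp commands are only printed, so set DRY_RUN=0 to copy the file for real:

```shell
# Hypothetical node list; replace with your own cluster members.
NODES="node2.example.com node3.example.com"
DRY_RUN=1
for node in $NODES; do
    cmd="scp /etc/cluster/cluster.conf root@${node}:/etc/cluster/cluster.conf"
    if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"    # show what would be copied
    else
        $cmd
    fi
done
```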
5.10. Starting the Cluster Software
After you have propagated the cluster configuration to the cluster nodes, you can either reboot each node or start the cluster software on each cluster node by running the following commands at each node in this order:
1. service cman start
2. service clvmd start, if CLVM has been used to create clustered volumes
Note
Shared storage for use in Red Hat Cluster Suite requires that you be running the cluster logical volume manager daemon (clvmd) or the High Availability Logical Volume Management agents (HA-LVM). If you are not able to use either the clvmd daemon or HA-LVM for operational reasons or because you do not have the correct entitlements, you must not use single-instance LVM on the shared disk as this may result in data corruption. If you have any concerns please contact your Red Hat service representative.
3. service gfs start, if you are using Red Hat GFS
4. service rgmanager start
5. Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab, verify that the nodes and services are running as expected.
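The start order in steps 1 through 4 can be sketched as a small script. With DRY_RUN=1 the service commands are only echoed; on a real node set DRY_RUN=0, and set USE_CLVMD and USE_GFS to match whether you use CLVM volumes and Red Hat GFS:

```shell
DRY_RUN=1
USE_CLVMD=1    # set to 0 if CLVM is not used for clustered volumes
USE_GFS=1      # set to 0 if Red Hat GFS is not used

run() {
    STARTED="${STARTED:+$STARTED }$2"              # record the daemon name
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run service cman start
if [ "$USE_CLVMD" = 1 ]; then run service clvmd start; fi
if [ "$USE_GFS" = 1 ]; then run service gfs start; fi
run service rgmanager start
```

The ordering matters: cman must be up before clvmd and rgmanager, which depend on cluster membership being established.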