Red Hat Enterprise Linux 5
Global Network Block Device
Using GNBD with Red Hat Global File System
Edition 3
Copyright © 2009 Red Hat, Inc. This material may only be distributed subject to the terms and
conditions set forth in the Open Publication License, V1.0 or later (the latest version of the OPL is
presently available at http://www.opencontent.org/openpub/).
Red Hat and the Red Hat "Shadow Man" logo are registered trademarks of Red Hat, Inc. in the United
States and other countries.
All other trademarks referenced herein are the property of their respective owners.
1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588 Research Triangle Park, NC 27709 USA
This book provides an overview of using Global Network Block Device (GNBD) with Red Hat GFS for
Red Hat Enterprise Linux 5.
Introduction
    1. About This Guide
    2. Audience
    3. Software Versions
    4. Related Documentation
    5. Feedback
    6. Document Conventions
        6.1. Typographic Conventions
        6.2. Pull-quote Conventions
        6.3. Notes and Warnings
1. Using GNBD with Red Hat GFS
2. Considerations for Using GNBD with Device-Mapper Multipath
    2.1. Linux Page Caching
    2.2. Fencing GNBD Server Nodes
3. GNBD Driver and Command Usage
    3.1. Exporting a GNBD from a Server
    3.2. Importing a GNBD on a Client
4. Running GFS on a GNBD Server Node
A. Revision History
Index
Introduction
1. About This Guide
This book describes how to use Global Network Block Device (GNBD) with Global File System (GFS),
including information about device-mapper multipath, GNBD driver and command usage, and running
GFS on a GNBD server node.
2. Audience
This book is intended to be used by system administrators managing systems running the Linux
operating system. It requires familiarity with Red Hat Enterprise Linux 5 and GFS file system
administration.
3. Software Versions
Software   Description
RHEL5      refers to RHEL5 and higher
GFS        refers to GFS for RHEL5 and higher
Table 1. Software Versions
4. Related Documentation
For more information about using Red Hat Enterprise Linux, refer to the following resources:
• Red Hat Enterprise Linux Installation Guide — Provides information regarding installation of Red
Hat Enterprise Linux 5.
• Red Hat Enterprise Linux Deployment Guide — Provides information regarding the deployment,
configuration and administration of Red Hat Enterprise Linux 5.
For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux 5, refer to the following
resources:
• Red Hat Cluster Suite Overview — Provides a high level overview of the Red Hat Cluster Suite.
• Configuring and Managing a Red Hat Cluster — Provides information about installing, configuring
and managing Red Hat Cluster components.
• LVM Administrator's Guide: Configuration and Administration — Provides a description of the
Logical Volume Manager (LVM), including information on running LVM in a clustered environment.
• Global File System: Configuration and Administration — Provides information about installing,
configuring, and maintaining Red Hat GFS (Red Hat Global File System).
• Global File System 2: Configuration and Administration — Provides information about installing,
configuring, and maintaining Red Hat GFS2 (Red Hat Global File System 2).
• Using Device-Mapper Multipath — Provides information about using the Device-Mapper Multipath
feature of Red Hat Enterprise Linux 5.
• Linux Virtual Server Administration — Provides information on configuring high-performance
systems and services with the Linux Virtual Server (LVS).
• Red Hat Cluster Suite Release Notes — Provides information about the current release of Red Hat
Cluster Suite.
Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML,
PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at http://
www.redhat.com/docs/.
5. Feedback
If you spot a typo, or if you have thought of a way to make this manual better, we would love to
hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla/) against the
component rh-cs.
Be sure to mention the manual's identifier:
Bugzilla component: Documentation-cluster
Book identifier: Global_Network_Block_Device(EN)-5 (2009-01-05T15:25)
By mentioning this manual's identifier, we know exactly which version of the guide you have.
If you have a suggestion for improving the documentation, try to be as specific as possible. If you have
found an error, please include the section number and some of the surrounding text so we can find it
easily.
6. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to
specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set
(https://fedorahosted.org/liberation-fonts/). The Liberation Fonts set is also used in HTML editions
if the set is installed on your system. If not, alternative but equivalent typefaces are displayed.
Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.
6.1. Typographic Conventions
Four typographic conventions are used to call attention to specific words and phrases. These
conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight
key caps and key-combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current
working directory, enter the cat my_next_bestselling_novel command at the
shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a key cap, all presented in Mono-spaced Bold
and all distinguishable thanks to context.
Key-combinations can be distinguished from key caps by the hyphen connecting each part of a key-combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F1 to switch to the first virtual terminal. Press Ctrl+Alt+F7 to
return to your X-Windows session.
The first sentence highlights the particular key cap to press. The second highlights two sets of three
key caps, each set pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values
mentioned within a paragraph will be presented as above, in Mono-spaced Bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for
directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialogue
box text; labelled buttons; check-box and radio button labels; menu titles and sub-menu titles. For
example:
Choose System > Preferences > Mouse from the main menu bar to launch Mouse
Preferences. In the Buttons tab, click the Left-handed mouse check box and click
Close to switch the primary mouse button from the left to the right (making the mouse
suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications > Accessories
> Character Map from the main menu bar. Next, choose Search > Find… from the
Character Map menu bar, type the name of the character in the Search field and
click Next. The character you sought will be highlighted in the Character Table.
Double-click this highlighted character to place it in the Text to copy field and then
click the Copy button. Now switch back to your document and choose Edit > Paste
from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific
menu names; and buttons and text found within a GUI interface, all presented in Proportional Bold and
all distinguishable by context.
Note the > shorthand used to indicate traversal through a menu and its sub-menus. This is to avoid
the difficult-to-follow 'Select Mouse from the Preferences sub-menu in the System menu of the main
menu bar' approach.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether Mono-spaced Bold or Proportional Bold, the addition of Italics indicates replaceable or
variable text. Italics denotes text you do not input literally or displayed text that changes depending on
circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at
a shell prompt. If the remote machine is example.com and your username on that
machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file
system. For example, to remount the /home file system, the command is mount -o
remount /home.
To see the version of a currently installed package, use the rpm -q package
command. It will return a result as follows: package-version-release.
Note the words in bold italics above — username, domain.name, file-system, package, version and
release. Each word is a placeholder, either for text you enter when issuing a command or for text
displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and
important term. For example:
When the Apache HTTP Server accepts requests, it dispatches child processes
or threads to handle them. This group of child processes or threads is known as
a server-pool. Under Apache HTTP Server 2.0, the responsibility for creating and
maintaining these server-pools has been abstracted to a group of modules called
Multi-Processing Modules (MPMs). Unlike other modules, only one module from the
MPM group can be loaded by the Apache HTTP Server.
6.2. Pull-quote Conventions
Two, commonly multi-line, data types are set off visually from the surrounding text.
Output sent to a terminal is set in Mono-spaced Roman and presented thus:
books
books_tests
Desktop
Desktop1
documentation
downloads
drafts
images
mss
notes
photos
scripts
stuff
svgs
svn
Source-code listings are also set in Mono-spaced Roman but are presented and highlighted as
follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[])
       throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object         ref    = iniCtx.lookup("EchoBean");
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.create();
      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
6.3. Notes and Warnings
Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.
Note
A note is a tip or shortcut or alternative approach to the task at hand. Ignoring a note
should have no negative consequences, but you might miss out on a trick that makes your
life easier.
Important
Important boxes detail things that are easily missed: configuration changes that only
apply to the current session, or services that need restarting before an update will apply.
Ignoring Important boxes won't cause data loss but may cause irritation and frustration.
Warning
A Warning should not be ignored. Ignoring warnings will most likely cause data loss.
Chapter 1.
Using GNBD with Red Hat GFS
GNBD (Global Network Block Device) provides block-level storage access over an Ethernet LAN.
GNBD components run as a client in a GFS node and as a server in a GNBD server node. A GNBD
server node exports block-level storage from its local storage (either directly attached storage or SAN
storage) to a GFS node.
Table 1.1, “GNBD Software Subsystem Components” summarizes the GNBD software subsystem
components.
Software Subsystem   Components    Description
GNBD                 gnbd.ko       Kernel module that implements the GNBD device driver on clients.
                     gnbd_export   Command to create, export and manage GNBDs on a GNBD server.
                     gnbd_import   Command to import and manage GNBDs on a GNBD client.
                     gnbd_serv     A server daemon that allows a node to export local storage over the network.
Table 1.1. GNBD Software Subsystem Components
You can configure GNBD servers to work with device-mapper multipath. GNBD with device-mapper
multipath allows you to configure multiple GNBD server nodes to provide redundant paths to the
storage devices. The GNBD servers, in turn, present multiple storage paths to GFS nodes via
redundant GNBDs. When using GNBD with device-mapper multipath, if a GNBD server node
becomes unavailable, another GNBD server node can provide GFS nodes with access to storage
devices.
This document describes how to use GNBD with Red Hat GFS and consists of the following chapters:
• Chapter 2, Considerations for Using GNBD with Device-Mapper Multipath, which describes some of
the issues you should take into account when configuring multipathed GNBD server nodes
• Chapter 3, GNBD Driver and Command Usage, which describes the user commands that configure
GNBD
• Chapter 4, Running GFS on a GNBD Server Node, which describes the restrictions that apply when
you are running GFS on a GNBD server node
Chapter 2.
Considerations for Using GNBD with Device-Mapper Multipath
GNBD with device-mapper multipath allows you to configure multiple GNBD server nodes (nodes
that export GNBDs to GFS nodes) to provide redundant paths to the storage devices. The GNBD
server nodes, in turn, present multiple storage paths to GFS nodes via redundant GNBDs. When using
GNBD with device-mapper multipath, if a GNBD server node becomes unavailable, another GNBD
server node can provide GFS nodes with access to storage devices.
If you are using GNBD with device-mapper multipath, you need to take the following into
consideration:
• Linux page caching, as described in Section 2.1, “Linux Page Caching”.
• Fencing GNBD server nodes, as described in Section 2.2, “Fencing GNBD Server Nodes”.
• GNBD device names; export names for GNBD devices must be unique. Additionally, you must
specify the -u or -U option when using the gnbd_export command. Exporting GNBD devices is
described in Chapter 3, GNBD Driver and Command Usage.
2.1. Linux Page Caching
For GNBD with device-mapper multipath, do not specify Linux page caching (the -c option of
the gnbd_export command). All GNBDs that are part of a logical volume must run with caching
disabled. Data corruption occurs if the GNBDs are run with caching enabled. Refer to Section 3.1,
“Exporting a GNBD from a Server” for more information about using the gnbd_export command for
GNBD with device-mapper multipath.
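For example, two GNBD server nodes that both have access to the same shared storage device might each export it uncached (the device path and export names below are illustrative and do not come from this manual):
On the first GNBD server node:
gnbd_export -d /dev/sdc1 -e alpha_srv1 -U
On the second GNBD server node:
gnbd_export -d /dev/sdc1 -e alpha_srv2 -U
Because neither command specifies -c, both exports run uncached. The -U option supplies the UID that device-mapper multipath on the GFS nodes uses to group the two imported GNBDs into a single multipath map, and the export names remain unique as required.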
2.2. Fencing GNBD Server Nodes
GNBD server nodes must be fenced using a fencing method that physically removes the nodes from
the network. To physically remove a GNBD server node, you can use any fencing device except the
following: the fence_brocade fence agent, fence_vixel fence agent, fence_mcdata fence agent,
fence_sanbox2 fence agent, and fence_scsi fence agent. In addition, you cannot use the GNBD
fencing device (fence_gnbd fence agent) to fence a GNBD server node. For information about
configuring fencing for GNBD server nodes, refer to the Global File System manual.
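A network power switch satisfies this requirement because it physically powers the node off. As a rough sketch only (the node name, fence device name, address, and credentials are illustrative; refer to the Global File System manual and Configuring and Managing a Red Hat Cluster for the actual procedure), the relevant cluster.conf fragments for a GNBD server node fenced by an APC power switch might look like this:
<clusternode name="gnbdserver1" nodeid="3" votes="1">
    <fence>
        <method name="1">
            <device name="apc-pdu" port="3"/>
        </method>
    </fence>
</clusternode>

<fencedevices>
    <fencedevice agent="fence_apc" name="apc-pdu" ipaddr="10.0.0.5" login="apc" passwd="apc"/>
</fencedevices>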
Chapter 3.
GNBD Driver and Command Usage
The Global Network Block Device (GNBD) driver allows a node to export its local storage as a GNBD
over a network so that other nodes on the network can share the storage. Client nodes importing the
GNBD use it like any other block device. Importing a GNBD on multiple clients forms a shared storage
configuration through which GFS can be used.
The GNBD driver is implemented through the following components.
• gnbd_serv — Implements the GNBD server. It is a user-space daemon that allows a node to
export local storage over a network.
• gnbd.ko — Implements the GNBD device driver on GNBD clients (nodes using GNBD devices).
Two user commands are available to configure GNBD:
• gnbd_export (for servers) — User program for creating, exporting, and managing GNBDs on a
GNBD server.
• gnbd_import (for clients) — User program for importing and managing GNBDs on a GNBD client.
3.1. Exporting a GNBD from a Server
The gnbd_serv daemon must be running on a node before it can export storage as a GNBD. You
can start the gnbd_serv daemon by running gnbd_serv as follows:
# gnbd_serv
gnbd_serv: startup succeeded
Once local storage has been identified to be exported, the gnbd_export command is used to export
it.
Note
When you configure GNBD servers with device-mapper multipath, you must not use page
caching. All GNBDs that are part of a logical volume must run with caching disabled. By
default, the gnbd_export command exports with caching turned off.
Note
A server should not import the GNBDs to use them as a client would. If a server exports
the devices uncached, the underlying devices may also be used by gfs.
Usage
gnbd_export -d pathname -e gnbdname [-o] [-c] [-u uid] [-U command]
pathname
Specifies a storage device to export.
gnbdname
Specifies an arbitrary name selected for the GNBD. It is used as the device name on GNBD
clients. This name must be unique among all GNBDs exported in a network.
-o
Export the device as read-only.
-c
Enable caching. Reads from the exported GNBD take advantage of the Linux page cache.
By default, the gnbd_export command does not enable caching.
Note
When you configure GNBD servers with device-mapper multipath, do not specify the -c option. All GNBDs that are part of a logical volume must run with caching disabled.
Note
If you have been using GFS 5.2 or earlier and do not want to change your GNBD
setup, you should specify the -c option. Before GFS Release 5.2.1, Linux caching was
enabled by default for gnbd_export. If the -c option is not specified, GNBD runs
with a noticeable performance decrease. Also, if the -c option is not specified, the
exported GNBD runs in timeout mode, using the default timeout value (the -t option).
For more information about the gnbd_export command and its options, refer to the
gnbd_export man page.
-u uid
Manually sets the Universal Identifier for an exported device. This option is used with -e. The
UID is used by device-mapper multipath to determine which devices belong in a multipath map. A
device must have a UID to be multipathed. However, for most SCSI devices the default Get UID
command, /usr/sbin/gnbd_get_uid, will return an appropriate value.
Note
The UID refers to the device being exported, not the GNBD itself. The UIDs of two
GNBD devices should be equal only if they are exporting the same underlying device.
This means that both GNBD servers are connected to the same physical device. (The
final example under Examples, below, illustrates this.)
Warning
This option should only be used for exporting shared storage devices, when the -U
command option does not work. This should almost never happen for SCSI devices.
If two GNBD devices are not exporting the same underlying device, but are given the
same UID, data corruption will occur.
-U Command
Gets the UID command. The UID command is a command the gnbd_export command will run
to get a Universal Identifier for the exported device. The UID is necessary to use device-mapper
multipath with GNBD. The command must use the full path of any executable that you wish to
run. A command can contain the %M, %m or %n escape sequences. %M will be expanded to the
major number of the exported device, %m will be expanded to the minor number of the exported
device, and %n will be expanded to the sysfs name for the device. If no command is given,
GNBD will use the default command /usr/sbin/gnbd_get_uid. This command will work for
most SCSI devices.
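For instance, a site that needed its own identification logic could pass a command to -U (the script path below is hypothetical; when -U is given without a command, the default /usr/sbin/gnbd_get_uid is used):
gnbd_export -d /dev/sdc1 -e gamma_srv1 -U "/usr/local/sbin/my_get_uid %n"
Here %n is expanded to the sysfs name of /dev/sdc1 before the command runs, and the command's output is used as the UID of the exported device.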
Examples
This example is for a GNBD server configured with GNBD multipath. It exports device /dev/sdc2 as
GNBD gamma. Cache is disabled by default.
gnbd_export -d /dev/sdc2 -e gamma -U
This example is for a GNBD server not configured with GNBD multipath. It exports device /dev/sdb2
as GNBD delta with cache enabled.
gnbd_export -d /dev/sdb2 -e delta -c
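A final, purely illustrative example shows the manual UID assignment provided by the -u option described above. The device path, export names, and UID string are hypothetical; as the warning for -u notes, manual assignment should be reserved for shared devices for which the -U command does not work.
On the first GNBD server node:
gnbd_export -d /dev/sde1 -e zeta_srv1 -u shared-disk-0001
On the second GNBD server node:
gnbd_export -d /dev/sde1 -e zeta_srv2 -u shared-disk-0001
The export names differ, but the identical UID tells device-mapper multipath on the GFS nodes that both GNBDs reach the same underlying device.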
3.2. Importing a GNBD on a Client
The gnbd.ko kernel module must be loaded on a node before it can import GNBDs. When GNBDs
are imported, device nodes are created for them in /dev/gnbd/ with the name assigned when they
were exported.
Usage
gnbd_import -i Server
Server
Specifies a GNBD server by hostname or IP address from which to import GNBDs. All GNBDs
exported from the server are imported on the client running this command.
Example
This example imports all GNBDs from the server named nodeA.
gnbd_import -i nodeA
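Assuming the import above makes available a GNBD named delta, the client now has a device node /dev/gnbd/delta. As a rough sketch only (the volume, cluster, and mount point names are illustrative, and the full CLVM and GFS procedure is covered in the LVM Administrator's Guide and the Global File System manual), such a device could serve as a physical volume for a clustered logical volume carrying a GFS file system:
pvcreate /dev/gnbd/delta
vgcreate vg_gnbd /dev/gnbd/delta
lvcreate -L 10G -n lv_gfs vg_gnbd
gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 3 /dev/vg_gnbd/lv_gfs
mount -t gfs /dev/vg_gnbd/lv_gfs /mnt/gfs1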
Chapter 4.
Running GFS on a GNBD Server Node
You can run GFS on a GNBD server node, with some restrictions. In addition, running GFS on a
GNBD server node reduces performance. The following restrictions apply when running GFS on a
GNBD server node.
Important
When running GFS on a GNBD server node, you must follow the restrictions listed;
otherwise, the GNBD server node will fail.
1. A GNBD server node must have local access to all storage devices needed to mount a GFS file
system. The GNBD server node must not import (gnbd_import command) other GNBD devices
to run the file system.
2. The GNBD server must export all the GNBDs in uncached mode, and it must export the raw
devices, not logical volume devices.
3. GFS must be run on top of a logical volume device, not raw devices (see the sketch at the end of this chapter).
Note
You may need to increase the timeout period on the exported GNBDs to accommodate
reduced performance. The need to increase the timeout period depends on the quality of
the hardware.
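As a sketch of a layout that satisfies these restrictions (the device, volume group, and mount point names are illustrative), a GNBD server node exports its raw local partitions uncached and mounts GFS from a logical volume assembled from those same local partitions rather than from imported GNBDs:
gnbd_export -d /dev/sdb1 -e store1 -U
gnbd_export -d /dev/sdc1 -e store2 -U
mount -t gfs /dev/vg_store/lv_gfs /mnt/gfs1
Here the logical volume vg_store/lv_gfs is built on the local devices /dev/sdb1 and /dev/sdc1; GFS nodes elsewhere in the cluster build the same logical volume from the GNBDs they import.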
Appendix A. Revision History
Revision 1.0
Index

D
device-mapper multipath
    fencing GNBD server nodes
    Linux page caching
driver and command usage
    exporting from a server
    importing on a client

E
exporting from a server daemon

F
feedback
fencing GNBD server nodes

G
GFS, using on a GNBD server node
GNBD, using with Red Hat GFS
gnbd.ko module
gnbd_export command
gnbd_import command
gnbd_serv daemon

I
importing on a client module

L
Linux page caching

S
software subsystem components