GNBD multipath allows you to configure multiple GNBD server nodes (nodes that export GNBDs to GFS nodes) with redundant paths between the GNBD server nodes and storage devices. The GNBD server nodes, in turn, present multiple storage paths to GFS nodes via redundant GNBDs. With GNBD multipath, if a GNBD server node becomes unavailable, another GNBD server node can provide GFS nodes with access to storage devices.
If you are using GNBD multipath, you need to take the following into consideration:
Linux page caching
Lock server startup
CCS file location
Fencing GNBD server nodes
For GNBD multipath, do not specify Linux page caching (the -c option of the gnbd_export command). All GNBDs that are part of the pool must run with caching disabled. Data corruption occurs if the GNBDs are run with caching enabled. Refer to Section 11.1.1 Exporting a GNBD from a Server for more information about using the gnbd_export command for GNBD multipath.
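As a sketch, a multipath-safe export might look like the following. The device path and export name here are illustrative, not taken from the manual; the key point is that the `-c` option is absent:

```shell
# Export a block device as a GNBD for multipath use.
# -d  device to export (illustrative path)
# -e  name clients will see for this GNBD (illustrative name)
# Note: no -c option -- Linux page caching must remain disabled
# for every GNBD in the pool, or data corruption can occur.
gnbd_export -d /dev/sdb1 -e gamma
```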
Lock servers can reside on the following types of nodes: dedicated lock server nodes, GFS nodes, or GNBD server nodes. In any case, a lock server must be running before the GNBD servers can be started.
In a GFS cluster configured for GNBD multipath, the location of CCS files for each node depends on how a node is deployed. If a node is deployed as a dedicated GFS node, its CCS files can reside on a GNBD, local storage, or FC-attached storage (if available). If a node is deployed as a dedicated GNBD server, its CCS files must reside on local storage or FC-attached storage. If a node is deployed as a dedicated lock server, its CCS files must reside on local storage or FC-attached storage. Because lock servers need to start before GNBD servers can start, a lock server cannot access CCS files through a GNBD. If a lock server is running on a GFS node, the CCS files for that node must be located on local storage or FC-attached storage.
If a GNBD server that is exporting CCS files is also exporting GNBDs in multipath mode, it must export the CCS files as read-only. (Refer to Section 11.1.1 Exporting a GNBD from a Server for more information about exporting a GNBD as read-only.) Under those circumstances, a GNBD client cannot use ccs_tool to update its copy of the CCS files. Instead, the CCS files must be updated on a node where the CCS files are stored locally or on FC-attached storage.
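A read-only CCS export from such a server might be sketched as follows. The `-r` flag is an assumption standing in for the read-only option documented in Section 11.1.1 (check that section for the exact syntax); the device path and export name are illustrative:

```shell
# Export the CCS pool device read-only alongside multipath GNBDs.
# -r is assumed here to be the read-only export option; verify
# against Section 11.1.1 before use. Device and name are illustrative.
gnbd_export -d /dev/pool/CCS -e ccs -r
```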
If FC-attached storage can be shared among nodes, the CCS files can be stored on that shared storage.
A node with CCS files stored on local storage or FC-attached storage can serve the CCS files to other nodes in a GFS cluster via ccs_servd. However, doing so would introduce a single point of failure. For information about ccs_servd, refer to Section 7.5.1 CCA File and Server.
| Node Deployment | CCS File Location |
|---|---|
| GFS dedicated | GNBD, local, or FC-attached storage |
| GFS with lock server | Local or FC-attached storage only |
| GNBD server dedicated | Local or FC-attached storage only |
| GNBD server with lock server | Local or FC-attached storage only |
| Lock server dedicated | Local or FC-attached storage only |

Table 11-1. CCS File Location for GNBD Multipath Cluster
Before a GNBD client node can activate (using the pool_assemble command) a GNBD-multipath pool, it must activate the GNBD-exported CCS pool and start ccsd and lock_gulmd. The following example shows activating a GNBD-exported CCS pool labeled as CCS:
# pool_assemble CCS
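Putting the pieces in order, the client-side startup might be sketched as follows. The multipath pool name and the ccsd device argument are illustrative assumptions, not values from the manual:

```shell
# 1. Activate the GNBD-exported pool that holds the CCS data.
pool_assemble CCS
# 2. Start the cluster configuration daemon, pointing it at the
#    CCS pool device (device path is illustrative).
ccsd -d /dev/pool/CCS
# 3. Start the lock server daemon.
lock_gulmd
# 4. Only now can the GNBD-multipath pool itself be activated
#    (pool name "alpha" is illustrative).
pool_assemble alpha
```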
GNBD server nodes must be fenced using a fencing method that physically removes the nodes from the network. To physically remove a GNBD server node, you can use any of the following fencing devices:

APC MasterSwitch (fence_apc fence agent)
WTI NPS (fence_wti fence agent)
Brocade FC switch (fence_brocade fence agent)
McData FC switch (fence_mcdata fence agent)
Vixel FC switch (fence_vixel fence agent)
HP RILOE (fence_rib fence agent)
xCAT (fence_xcat fence agent)

You cannot use the GNBD fencing device (fence_gnbd fence agent) to fence a GNBD server node. For information about configuring fencing for GNBD server nodes, refer to Chapter 6 Creating the Cluster Configuration System Files.