This appendix contains examples of setting up and using GFS in the following basic scenarios:
The examples follow the process structure for procedures and associated tasks defined in Chapter 4, Initial Configuration.
This example sets up a cluster with three nodes and two GFS file systems. All nodes in the cluster mount both GFS file systems and run the LOCK_GULM server.
This section provides the following information about the example:
This example configuration has the following key characteristics:
Fencing device — An APC MasterSwitch (single-switch configuration). Refer to Table C-1 for switch information.
Number of GFS nodes — 3. Refer to Table C-2 for node information.
Number of lock server nodes — 3. The lock servers are run on the GFS nodes (embedded). Refer to Table C-2 for node information.
Locking protocol — LOCK_GULM. The LOCK_GULM server is run on every node that mounts GFS.
Number of shared storage devices — 2. Refer to Table C-3 for storage device information.
Number of file systems — 2.
File system names — gfs01 and gfs02.
File system mounting — Each GFS node mounts the two file systems.
Cluster name — alpha.
Host Name | IP Address | APC Port Number |
---|---|---|
n01 | 10.0.1.1 | 1 |
n02 | 10.0.1.2 | 2 |
n03 | 10.0.1.3 | 3 |
Table C-2. GFS and Lock Server Node Information
Major | Minor | #Blocks | Name |
---|---|---|---|
8 | 16 | 8388608 | sda |
8 | 17 | 8001 | sda1 |
8 | 18 | 8377897 | sda2 |
8 | 32 | 8388608 | sdb |
8 | 33 | 8388608 | sdb1 |
Table C-3. Storage Device Information
Notes:
- For shared storage devices to be visible to the nodes, it may be necessary to load an appropriate device driver. If the shared storage devices are not visible on each node, confirm that the device driver is loaded and that it loaded without errors.
- The small partition (/dev/sda1) is used to store the cluster configuration information. The two remaining partitions (/dev/sda2 and /dev/sdb1) are used for the GFS file systems.
- You can display the storage device information at each node in your GFS cluster by running cat /proc/partitions. Depending on the hardware configuration of the GFS nodes, the device names may differ from node to node.
- If the output of cat /proc/partitions shows only entire disk devices (for example, /dev/sda instead of /dev/sda1), the storage devices have not been partitioned. To partition a device, use the fdisk command. (A short example follows these notes.)
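For example, the partition check and, if needed, the partitioning described above might look like the following on one node. This is only a sketch; the device names are the ones used in this example and may differ on your hardware:

    n01# cat /proc/partitions        <-- Confirm that sda, sda1, sda2, sdb, and sdb1 are visible
    n01# fdisk /dev/sdb              <-- Interactively partition a device that appears unpartitioned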
Each node must have the following kernel modules loaded (a sketch of loading and verifying them follows this list):
gfs.o
lock_harness.o
lock_gulm.o
pool.o
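A minimal way to load and verify the modules on a node is sketched below; it assumes the GFS packages are installed and that modprobe can resolve module dependencies, so lock_harness.o is pulled in automatically:

    n01# modprobe pool                        <-- Load pool.o
    n01# modprobe lock_gulm                   <-- Load lock_gulm.o (lock_harness.o loads as a dependency)
    n01# modprobe gfs                         <-- Load gfs.o
    n01# lsmod | grep -E 'gfs|lock_|pool'     <-- Confirm that all four modules are listed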
The setup process for this example consists of the following steps:
Create pool configurations for the two file systems.
Create a pool configuration file for each file system's pool: pool_gfs01 for the first file system and pool_gfs02 for the second. The two files should look like the following:
    poolname pool_gfs01
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/sda2

    poolname pool_gfs02
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/sdb1
Create a pool configuration for the CCS data.
Create a pool configuration file for the pool that will hold the CCS data. The pool does not need to be very large. The name of the pool is alpha_cca (the cluster name, alpha, followed by _cca). The file should look like the following:
    poolname alpha_cca
    subpools 1
    subpool 0 0 1
    pooldevice 0 0 /dev/sda1
Use the pool_tool command to create all the pools as follows:
    n01# pool_tool -c pool_gfs01.cf pool_gfs02.cf alpha_cca.cf
    Pool label written successfully from pool_gfs01.cf
    Pool label written successfully from pool_gfs02.cf
    Pool label written successfully from alpha_cca.cf
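To double-check that the labels were written, you can rescan the devices from any node; the scan option (-s) shown below is an assumption and may not be present in every version of pool_tool:

    n01# pool_tool -s                <-- Scan devices for pool labels (option assumed available)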
Activate the pools on all nodes.
Note: This step must be performed every time a node is rebooted. If it is not, the pool devices will not be accessible.
Activate the pools using the pool_assemble -a command for each node as follows:
    n01# pool_assemble -a            <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled
    pool_gfs02 assembled

    n02# pool_assemble -a            <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled
    pool_gfs02 assembled

    n03# pool_assemble -a            <-- Activate pools
    alpha_cca assembled
    pool_gfs01 assembled
    pool_gfs02 assembled
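After activation, the pool devices should appear under /dev/pool on every node. The following sketch shows a quick check and one possible way to activate the pools automatically at boot; it assumes your system runs /etc/rc.d/rc.local at the end of the boot sequence (if your GFS packages provide init scripts, enabling those with chkconfig is the cleaner approach):

    n01# ls /dev/pool                                     <-- Expect alpha_cca, pool_gfs01, and pool_gfs02
    n01# echo "pool_assemble -a" >> /etc/rc.d/rc.local    <-- One way to activate the pools at boot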
Create CCS files.
Create a directory called /root/alpha on node n01 as follows:
    n01# mkdir /root/alpha
    n01# cd /root/alpha
Create the cluster.ccs file. This file contains the name of the cluster and the names of the nodes where the LOCK_GULM server is run. The file should look like the following:
    cluster {
        name = "alpha"
        lock_gulm {
            servers = ["n01", "n02", "n03"]
        }
    }
Create the nodes.ccs file. This file contains the name of each node, its IP address, and node-specific I/O fencing parameters. The file should look like the following:
    nodes {
        n01 {
            ip_interfaces {
                eth0 = "10.0.1.1"
            }
            fence {
                power {
                    apc {
                        port = 1
                    }
                }
            }
        }
        n02 {
            ip_interfaces {
                eth0 = "10.0.1.2"
            }
            fence {
                power {
                    apc {
                        port = 2
                    }
                }
            }
        }
        n03 {
            ip_interfaces {
                eth0 = "10.0.1.3"
            }
            fence {
                power {
                    apc {
                        port = 3
                    }
                }
            }
        }
    }
Note: If your cluster is running Red Hat GFS 6.0 for Red Hat Enterprise Linux 3 Update 5 or later, you can use the optional usedev parameter to explicitly specify an IP address rather than relying on an address from libresolv. For more information, refer to the file format in Figure 6-23, the example in Example 6-26, and the syntax description in Table 6-3.
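As an illustration only (the authoritative format is in Figure 6-23 and Example 6-26), a node entry that uses usedev might look roughly like the following; the placement of the usedev line and the interface name eth0 are assumptions here, with usedev naming the ip_interfaces entry whose address lock_gulmd should use:

    n01 {
        ip_interfaces {
            eth0 = "10.0.1.1"
        }
        usedev = "eth0"
        fence {
            power {
                apc {
                    port = 1
                }
            }
        }
    }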
Create the fence.ccs file. This file contains information required for the fencing method(s) used by the GFS cluster. The file should look like the following:
    fence_devices {
        apc {
            agent = "fence_apc"
            ipaddr = "10.0.1.10"
            login = "apc"
            passwd = "apc"
        }
    }
Create the CCS Archive on the CCA Device.
Note: This step needs to be performed only once, from a single node. It should not be repeated every time the cluster is restarted.
Use the ccs_tool command to create the archive from the CCS configuration files:
    n01# ccs_tool create /root/alpha /dev/pool/alpha_cca
    Initializing device for first time use... done.
Start the CCS daemon (ccsd) on all the nodes.
Note: This step must be performed each time the cluster is rebooted.
The CCA device must be specified when starting ccsd.
    n01# ccsd -d /dev/pool/alpha_cca
    n02# ccsd -d /dev/pool/alpha_cca
    n03# ccsd -d /dev/pool/alpha_cca
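A simple process check on each node confirms that the daemon started, for example:

    n01# ps -C ccsd                  <-- Confirm that ccsd is running on this node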
At each node, start the LOCK_GULM server:
    n01# lock_gulmd
    n02# lock_gulmd
    n03# lock_gulmd
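To verify the lock servers, check that the daemon is running on each node; if the gulm_tool utility is installed, its nodelist subcommand (assumed here) can also list the nodes known to a lock server:

    n01# ps -C lock_gulmd            <-- Confirm that the LOCK_GULM server is running
    n01# gulm_tool nodelist n01      <-- List nodes known to the lock server (gulm_tool assumed available)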
Create the GFS file systems.
Create the first file system on pool_gfs01 and the second on pool_gfs02. The names of the two file systems are gfs01 and gfs02, respectively, as shown in the example:
    n01# gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01
    Device:            /dev/pool/pool_gfs01
    Blocksize:         4096
    Filesystem Size:   1963216
    Journals:          3
    Resource Groups:   30
    Locking Protocol:  lock_gulm
    Lock Table:        alpha:gfs01

    Syncing...
    All Done

    n01# gfs_mkfs -p lock_gulm -t alpha:gfs02 -j 3 /dev/pool/pool_gfs02
    Device:            /dev/pool/pool_gfs02
    Blocksize:         4096
    Filesystem Size:   1963416
    Journals:          3
    Resource Groups:   30
    Locking Protocol:  lock_gulm
    Lock Table:        alpha:gfs02

    Syncing...
    All Done
Mount the GFS file systems on all the nodes.
Mount points /gfs01 and /gfs02 are used on each node (create them with mkdir if they do not already exist):
    n01# mount -t gfs /dev/pool/pool_gfs01 /gfs01
    n01# mount -t gfs /dev/pool/pool_gfs02 /gfs02
    n02# mount -t gfs /dev/pool/pool_gfs01 /gfs01
    n02# mount -t gfs /dev/pool/pool_gfs02 /gfs02
    n03# mount -t gfs /dev/pool/pool_gfs01 /gfs01
    n03# mount -t gfs /dev/pool/pool_gfs02 /gfs02
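To have the file systems mounted automatically at boot, entries like the following could be added to /etc/fstab on each node. This is a sketch: it assumes the pools are assembled, ccsd is started, and lock_gulmd is running before the mounts are attempted (for example, via the GFS init scripts, if your packages provide them):

    /dev/pool/pool_gfs01    /gfs01    gfs    defaults    0 0
    /dev/pool/pool_gfs02    /gfs02    gfs    defaults    0 0

After mounting, running df on any node should show both /gfs01 and /gfs02.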