PowerHA provides highly available NFS services, allowing a backup node to take over the NFS activity of the primary NFS server if it fails. This capability is limited to two-node clusters with NFSv2/NFSv3; with NFSv4 it is also available for clusters with more than two nodes. If NFS exports are defined through PowerHA, all NFS exports must be controlled by PowerHA; AIX and PowerHA NFS exports cannot be mixed. PowerHA keeps its NFS export information in /usr/es/sbin/cluster/etc/exports, which has the same format as the AIX exports file (/etc/exports).
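For example, to check what PowerHA is exporting, the standard AIX commands can be used on the node that currently hosts the resource group (the service label below is a placeholder):

  cat /usr/es/sbin/cluster/etc/exports    # exports controlled by PowerHA
  exportfs                                # what this node currently exports
  showmount -e <service_label>            # what clients see from the service address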
When configuring NFS through PowerHA, you can control these items:
- The network that PowerHA will use for NFS mounting.
- NFS exports and mounts at the directory level.
- The resource group field “Filesystems mounted before IP configured” must be set to true (this prevents clients from reconnecting before the file systems are available).
- By default, file systems are exported read/write to the world; you can restrict this in /usr/es/sbin/cluster/etc/exports (see the sample entry below).
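For example, a sample entry in /usr/es/sbin/cluster/etc/exports (node and client names are made up) could restrict mounting to two clients and grant root access to the cluster nodes, using the standard AIX exports syntax:

  /fsa -vers=3,root=node1:node2,access=client1:client2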
-------------------------------------------
NFS cross-mounts
By default, NFS-exported file systems are automatically cross-mounted, so each node of the resource group is also an NFS client. The node that hosts the resource group mounts the file systems locally, NFS-exports them, and then NFS-mounts them itself (this node is NFS server and NFS client at the same time). All other nodes of the resource group simply NFS-mount the file systems, becoming NFS clients. If the resource group is acquired by another node, that node mounts the file systems locally and NFS-exports them, becoming the new NFS server.
Syntax for configuration: /a;/fsa (/a: the NFS mount point used on every node; /fsa: the locally mounted, exported file system).
For example:
Node1, with service IP label svc1, locally mounts /fsa and NFS-exports it.
Node1 also NFS-mounts svc1:/fsa on /a.
Node2 NFS-mounts svc1:/fsa on /a.
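In the resource group this corresponds roughly to the following fields (exact SMIT labels can vary by PowerHA release; the logical volume name below is made up):

  Filesystems/Directories to Export (NFSv2/3)   /fsa
  Filesystems/Directories to NFS Mount          /a;/fsa
  Service IP Labels/Addresses                   svc1

After acquisition, mount output on the nodes would look roughly like this:

  node1# mount
    node    mounted       mounted over   vfs
            /dev/fsalv    /fsa           jfs2
    svc1    /fsa          /a             nfs3
  node2# mount
    svc1    /fsa          /a             nfs3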
-------------------------------------------
NFS tiebreaker
When you use a linked cluster (where the cluster nodes are located at different geographical sites), there is an option to use a tiebreaker disk or an NFS tiebreaker.
A cluster split event splits the cluster into two (or more) partitions, each containing one or more cluster nodes. The resulting situation is commonly referred to as a split-brain situation. In a split-brain situation, the partitions have no knowledge of each other’s status, each considering the other to be offline. As a consequence, each partition tries to bring the other partition’s resource groups (RGs) online, creating a high risk of data corruption.
When a split-brain situation occurs, each partition attempts to acquire the tiebreaker by placing a lock on the tiebreaker disk or on the NFS file. The partition that first locks the SCSI disk or reserves the NFS file wins; the other loses. All nodes in the winning partition continue to process cluster events, and all nodes in the losing partition attempt to recover according to the defined split and merge policies (most commonly by restarting the cluster services).
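The “first reservation wins” idea can be pictured with a small shell sketch. This is a conceptual illustration only, not PowerHA’s actual locking code; /tiebreaker and the file names are made up, and it relies on hard-link creation being atomic over NFS:

  # Conceptual sketch -- not the PowerHA implementation.
  # /tiebreaker is assumed to be NFS-mounted and reachable from both sites.
  LOCKDIR=/tiebreaker
  CLAIM=$LOCKDIR/claim.$(hostname)
  echo "$(hostname) $(date)" > "$CLAIM"
  if ln "$CLAIM" "$LOCKDIR/lock" 2>/dev/null; then
      # link created first: this partition holds the tiebreaker
      echo "tiebreaker acquired - continue processing cluster events"
  else
      # another partition already reserved the tiebreaker
      echo "tiebreaker lost - recover per the split/merge policy"
  fi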
-------------------------------------------