Strategy for using NFS with Novell Cluster Services on OES 2

  • 7009757
  • 16-Nov-2011
  • 09-May-2012


Novell Open Enterprise Server 2 (OES 2) Linux Support Pack 3


Clustering NFS services with Novell Cluster Services (NCS) on OES 2 has not been tested thoroughly by Novell. The NFS Server on SuSE Linux is the standard Linux NFS Server; it is not "cluster aware" in the Novell Clustering sense, nor does it fully understand NSS volumes. However, within certain limitations, it can still be made a clustered service.
For general concerns between NFS and NSS (regardless of clustering), see KB 7005949.
The remainder of this document discusses only cluster-specific concerns, not the general NFS/NSS concerns addressed by that document.


A strategy for clustering NFS Server with Novell Cluster Services (NCS) on OES 2 is outlined below. This document assumes NFS Server is active on each node in the cluster. Only the strategy specific to exporting NSS file systems via cluster resource scripts is described here -- not preliminary install / setup steps. It is recommended to read the full document before implementing alterations to cluster scripts.
1.  When clustered NSS volumes are desired to be shared through NFS, exporting (and unexporting) those volumes / paths should be accomplished through "exportfs" commands inside the cluster start and stop scripts.  The /etc/exports file should not be used for clustered exports.  Therefore, Yast would normally not be used to configure the exports.  However, Yast can still be used to enable NFS Server itself, and to configure local (non-clustered) exports.
2.  To export an NSS file system "on the fly" when a cluster resource comes up, add exportfs commands to the resource start script, in the format:
exportfs -o rw,sync,no_root_squash,fsid=n client-spec:/server-path
Important notes:
For fsid=n, n should be a non-zero value between 1 and 255, unique for each path being exported. Even two different exported paths on the same volume must receive different fsid values.
For client-spec, this identifies the NFS client(s) allowed to mount the share. Some administrators mistakenly list the server host name here, since it is followed by the server path; that is not correct. Client-spec can be an individual client host name or address, a subnet identifier, or a netgroup. If multiple clients need to be identified and a subnet identifier or netgroup is not desired, multiple exportfs lines can be used, one for each client. If client-spec is omitted or * is used, ALL clients will be allowed to mount. This is highly discouraged: NFS export of an NSS volume requires "no_root_squash", meaning client root users are treated as true root users on the NFS-mounted file system as well. Granting true root status to the root user of every conceivable client that may try to mount the share is not good practice, so the realm of trusted clients should be restricted.
For server-path, this will be in the format:  /media/nss/VOLNAME or /media/nss/VOLNAME/path
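To illustrate the notes above, a start-script fragment might look like the following sketch. The volume name (VOL1), the exported subdirectory, and the client host names are hypothetical placeholders; substitute your own. Note that each distinct path carries its own unique fsid, that the same path exported to two clients keeps the same fsid (the fsid identifies the exported path, not the client), and that one exportfs line per client is used rather than *.

```shell
# Hypothetical example: export an NSS volume VOL1 and one subdirectory
# on it to two trusted clients. VOL1, /projects, and the client names
# are placeholders.
exportfs -o rw,sync,no_root_squash,fsid=10 client1.example.com:/media/nss/VOL1
exportfs -o rw,sync,no_root_squash,fsid=11 client1.example.com:/media/nss/VOL1/projects
exportfs -o rw,sync,no_root_squash,fsid=10 client2.example.com:/media/nss/VOL1
exportfs -o rw,sync,no_root_squash,fsid=11 client2.example.com:/media/nss/VOL1/projects
```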
3.  To unexport an NSS file system "on the fly" when a cluster resource is taken down, add exportfs commands to the resource stop script, in the format:
exportfs -u client-spec:/server-path
These exportfs -u lines should correspond to the exportfs lines in the start script; i.e., typically there will be an exportfs -u line in the stop script for every exportfs line in the start script, with matching client-spec:/server-path.
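Continuing the same hypothetical names (VOL1, /projects, and the client host names are placeholders), the corresponding stop-script fragment would unexport each client-spec:/server-path pair that the start script exported:

```shell
# Hypothetical stop-script fragment: one exportfs -u line for every
# exportfs line in the start script, with matching client-spec and path.
exportfs -u client1.example.com:/media/nss/VOL1
exportfs -u client1.example.com:/media/nss/VOL1/projects
exportfs -u client2.example.com:/media/nss/VOL1
exportfs -u client2.example.com:/media/nss/VOL1/projects
```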
4.  Load order in cluster scripts.

In some cases, stale file handle errors may occur on NFS clients after an NFS Server resource has failed over during active I/O. Careful ordering of the items in the cluster scripts has been seen to resolve this.

The order in the load script should be: NSS, NFS, IP, and NCP. The order is reversed in the unload script.

Example of a load script:

. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=AUTO_POOL_16
exit_on_error ncpcon mount AUTO_VOL_162=223
exit_on_error ncpcon mount AUTO_VOL_161=224
exit_on_error exportfs -o rw,sync,no_root_squash,fsid=216 client1:/media/nss/AUTO_VOL_161
exit_on_error add_secondary_ipaddress
exit_on_error ncpcon bind --ncpservername=CGAO_OES11SP1_BT1_7_CLUSTER_AUTO_POOL_16_SERVER --ipaddress=
exit 0
Example of an unload script:
. /opt/novell/ncs/lib/ncsfuncs
ignore_error ncpcon unbind --ncpservername=CGAO_OES11SP1_BT1_7_CLUSTER_AUTO_POOL_16_SERVER --ipaddress=
ignore_error del_secondary_ipaddress
ignore_error exportfs -u client1:/media/nss/AUTO_VOL_161
ignore_error nss /pooldeact=AUTO_POOL_16
exit 0
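Once the resource is online, the exports can be verified on the node currently holding the resource. This is a quick manual check, not part of the cluster scripts; the IP address shown is a placeholder for the resource's secondary IP address.

```shell
# List currently exported paths with their options (including fsid)
exportfs -v

# Show the export list as clients will see it; 10.0.0.50 is a
# placeholder for the cluster resource's secondary IP address
showmount -e 10.0.0.50
```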