How to set up OES FTP in an NCS Cluster

  • 7023190
  • 20-Jul-2018
  • 25-Jul-2018


Open Enterprise Server 2018 (OES 2018) Linux
Open Enterprise Server 2015 (OES 2015) Linux
Open Enterprise Server 11 (OES 11) Linux


This document focuses on cluster-enabling OES FTP, not on setting up OES FTP itself. It is recommended to first become familiar with setting up OES FTP on one node (as a standalone service) before attempting to cluster-enable the service. For more information on OES FTP itself, see TID


First, let us discuss the implementation of one OES FTP service within a cluster. The online doc discusses "multiple instances of OES FTP," meaning multiple different FTP processes which all run at the same time, providing different FTP services on different virtual addresses for different cluster resources. But it is best to ignore that possibility initially, and simply discuss one cluster-enabled OES FTP service that may move around within the cluster, from node to node.
1. Each node of the cluster needs certain packages installed. These will already be present on any node where the "OES FTP" pattern was selected during the OES install, or selected later during additional OES configuration. However, if those steps have not already been taken, and especially if the YaST GUI is not available at the time, the simplest way to accomplish this is as follows. NOTE: This assumes that the SLES package "pure-ftpd" is not already installed.
zypper in novell-oes-pure-ftpd
zypper in novell-pure-ftpd-config
NOTE: That second package (novell-pure-ftpd-config) installs and runs a script to reconfigure /etc/pam.d/pure-ftpd and also add new options to /etc/pure-ftpd/pure-ftpd.conf. A good sanity check of whether those files have been properly adjusted is (a) whether /etc/pam.d/pure-ftpd contains the expected references, and (b) whether /etc/pure-ftpd/pure-ftpd.conf contains mention of the "remote_server" setting.
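The "remote_server" half of that sanity check can be scripted. This is only a sketch: the conf path below is the standard location, and the helper name check_remote_server is illustrative, not part of OES.

```shell
#!/bin/sh
# Sanity-check sketch: did novell-pure-ftpd-config adjust pure-ftpd.conf?

check_remote_server() {
    # Return 0 if the given conf file mentions the OES "remote_server" setting.
    grep -qi 'remote_server' "$1"
}

if check_remote_server /etc/pure-ftpd/pure-ftpd.conf 2>/dev/null; then
    echo "pure-ftpd.conf mentions remote_server - OES reconfiguration ran"
else
    echo "remote_server not found - re-run the OES FTP configuration" >&2
fi
```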
2. When OES FTP is clustered, administrators generally set it up to move around the cluster with a specific clustered NSS volume, and to start and stop from that resource's load and unload scripts. This is often referred to as an "active/passive" approach, as the individual FTP service for any given volume will be active on only one node of the cluster. (For some reason, the online doc refers to this as an active/active approach, but the author of this TID does not know why.) Since the FTP service is going to be tied to a specific clustered NSS volume, the configuration file needs to move from /etc/pure-ftpd/pure-ftpd.conf to a location on the NSS volume. Typically, it is recommended to copy it to:
Where VOLNAME is replaced by the NSS volume name.
Any location within the clustered volume should, in theory, be suitable. However, this one is often referenced in instructions. It is also acceptable to name the conf file something other than pure-ftpd.conf.
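As a minimal sketch of the copy step: the copy_ftp_conf helper is illustrative only, and "DATAVOL" is a placeholder volume name to substitute with your own.

```shell
#!/bin/sh
# Sketch: copy the stock conf onto the clustered NSS volume, creating the
# directory layout this TID recommends underneath the volume's mount point.

copy_ftp_conf() {
    src="$1"       # stock conf, e.g. /etc/pure-ftpd/pure-ftpd.conf
    volroot="$2"   # mount point of the clustered volume
    dest="$volroot/etc/opt/novell/pure-ftpd"
    mkdir -p "$dest" && cp "$src" "$dest/pure-ftpd.conf"
}

# Typical invocation on the node currently holding the resource
# ("DATAVOL" is a placeholder):
# copy_ftp_conf /etc/pure-ftpd/pure-ftpd.conf /media/nss/DATAVOL
```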
3. The pure-ftpd.conf (now in its new location) should be altered in two ways:
a. Find, uncomment (if needed) and set:
Bind x.y.z.a,21
Where "x.y.z.a" is replaced by the virtual IP address which belongs to the cluster volume / virtual server in question.
b. Find, uncomment (if needed) and set:
PIDFile /media/nss/VOLNAME/etc/opt/novell/pure-ftpd/
Where "VOLNAME" is replaced by the NSS volume name. This doesn't have to be set exactly as shown. The primary goal is to have the PID file reside on the clustered volume, and it is most intuitive to have this file reside in the same folder as the conf file. Similarly, if the conf file is given a customized name such as "datavol-ftp.conf" then it is suggested that the PID file be named "".
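Taken together, a sketch of the two edited settings. The volume name DATAVOL, the address 10.1.1.50, and the pure-ftpd.pid filename are all placeholder assumptions; substitute your own values.

```
# /media/nss/DATAVOL/etc/opt/novell/pure-ftpd/pure-ftpd.conf (excerpt)
Bind     10.1.1.50,21
PIDFile  /media/nss/DATAVOL/etc/opt/novell/pure-ftpd/pure-ftpd.pid
```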
4. Since the conf file is no longer in its standard location, the normal methods of starting / stopping pure-ftpd will no longer be suitable. Specifically:
On SLES 11, normal init commands such as "rcpure-ftpd start", "/etc/init.d/pure-ftpd start", "service pure-ftpd start" should no longer be used.
On SLES 12, any of the above SLES 11 commands which might still work for backward compatibility should no longer be used. Also, the new systemd method, such as "systemctl start pure-ftpd", should no longer be used.
Instead, the cluster resource load and unload scripts (which control the clustered volume in question) should be modified.
a. In the LOAD script, AFTER the lines which mount the volume and bind the virtual IP address, the following line should be inserted:
exit_on_error /usr/sbin/ /media/nss/VOLNAME/etc/opt/novell/pure-ftpd/pure-ftpd.conf
The path for the conf file should be modified according to where it was actually placed and how it was named. Apply (save) the change.
b. In the UNLOAD script, BEFORE the lines which unbind the virtual IP and dismount the volume, the following line should be inserted:
ignore_error /usr/sbin/ /media/nss/VOLNAME/etc/opt/novell/pure-ftpd/pure-ftpd.conf
Again, the path for the conf file should be modified according to where it was actually placed. Apply (save) the change.
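As a sketch of the resulting script lines, assuming the daemon binary is /usr/sbin/pure-ftpd (its usual location on SLES), a placeholder volume named DATAVOL, and a PID file named pure-ftpd.pid. The kill-based stop shown here is one generic way to end the instance via its PID file, not necessarily the exact command from the online doc.

```
# LOAD script, after the mount and IP-bind lines:
exit_on_error /usr/sbin/pure-ftpd /media/nss/DATAVOL/etc/opt/novell/pure-ftpd/pure-ftpd.conf

# UNLOAD script, before the IP-unbind and dismount lines:
ignore_error /bin/kill -TERM "$(cat /media/nss/DATAVOL/etc/opt/novell/pure-ftpd/pure-ftpd.pid)"
```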
5. Since the OES FTP service will now be controlled by the cluster, it should not be started as a standalone process during boot.
a. On SLES 11, this can be disabled with:
chkconfig pure-ftpd off
b. On SLES 12, this can be disabled with:
systemctl disable pure-ftpd
That concludes the necessary steps to set up a single, cluster-enabled OES FTP instance which can move from node to node.
Next, let us discuss having multiple instances of OES FTP services within a cluster.
First, it should be pointed out that through its enhancements, one OES FTP service has the potential to give users access to all NCP volumes in the cluster, and even anywhere in the eDirectory tree. (See the "remote server" feature of OES FTP.) So "multiple instances" may not be needed to reach multiple volumes. However, in some cases, it may be desirable for different instances of FTP services to exist, which are configured in different ways. This would allow an administrator to restrict, enhance, or otherwise set up different behavior in one instance, as compared to other instances. For example, if you want users who FTP into one resource to be put in one chrooted location, but users who log into another resource to be put in a different location (chrooted or not), then two separate instances can be used.
The procedure to set up a second instance (or third, etc.) of OES FTP is essentially identical to the procedure for the first. The setup of the first (if done as described above) does not need to change to accommodate additional instances, nor do additional instances need to know about the first instance, or any others. If one instance of clustered OES FTP already exists in a certain cluster, just use steps 2 - 4 above to create additional instances in that same cluster. In other words, as long as each instance has its own unique conf file, with unique Bind and PIDFile settings, and that unique conf file (and the unique PIDFile it points to) will be stored on the volume resource through which the OES FTP instance will be controlled, each instance of OES FTP will be able to start and stop independently of other instances. Two or more instances can even be tied to ONE volume resource, as long as the conf file names, Bind settings, and PIDFile names are unique for each instance, and separate commands are given to start/stop each OES FTP instance, within the LOAD and UNLOAD scripts.
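For illustration, a sketch of two instances tied to one volume resource. The volume name DATAVOL, the address 10.1.1.50, and the conf/PID file names are placeholder assumptions; note that when both instances share the resource's virtual IP address, the port in the Bind setting must differ to keep the settings unique.

```
# Instance 1: /media/nss/DATAVOL/etc/opt/novell/pure-ftpd/ftp1.conf
Bind     10.1.1.50,21
PIDFile  /media/nss/DATAVOL/etc/opt/novell/pure-ftpd/ftp1.pid

# Instance 2: /media/nss/DATAVOL/etc/opt/novell/pure-ftpd/ftp2.conf
Bind     10.1.1.50,2121
PIDFile  /media/nss/DATAVOL/etc/opt/novell/pure-ftpd/ftp2.pid
```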