NFS Installation and Configuration
Assumptions
The following assumptions are used in this document.
- the user has reasonable RHEL/CentOS system administration skills
- installations are on CentOS 7.3 minimal systems (fully patched)
- the user has deployed, or is deploying, the example two-node Stroom cluster storage hierarchy described here
- this NFS configuration is NOT secure. It is highly recommended to harden it in a production environment. This could include a tighter firewall configuration to limit NFS access, NFSv4 with Kerberos, etc.
Installation of NFS software
We install NFS on each node, via
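The install command itself appears to have been lost from this section. On a CentOS 7 minimal system the NFS client and server tooling is provided by the nfs-utils package, so the install would typically be:

```shell
# Install NFS client/server utilities on each node
sudo yum -y install nfs-utils
```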
and enable the relevant services, via
sudo systemctl enable rpcbind
sudo systemctl enable nfs-server
sudo systemctl enable nfs-lock
sudo systemctl enable nfs-idmap
sudo systemctl start rpcbind
sudo systemctl start nfs-server
sudo systemctl start nfs-lock
sudo systemctl start nfs-idmap
Configuration of NFS exports
We now export the node’s /stroomdata directory (in case you want to share the working directories) by configuring /etc/exports. For simplicity's sake, we will allow all nodes whose hostnames match stroomp*.strmdev00.org to mount the /stroomdata directory.
This means the same configuration applies to all nodes.
# Share Stroom data directory
/stroomdata stroomp*.strmdev00.org(rw,sync,no_root_squash)
This can be achieved with the following on both nodes
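The command itself is missing here. One way to append the export entry shown above (a sketch — it assumes /etc/exports does not already contain this entry) is:

```shell
# Append the Stroom export entry to /etc/exports on each node
sudo sh -c 'printf "# Share Stroom data directory\n/stroomdata stroomp*.strmdev00.org(rw,sync,no_root_squash)\n" >> /etc/exports'
```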
On both nodes restart the NFS service to ensure the above export takes effect via
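The restart command was elided; for the nfs-server service enabled earlier this would be:

```shell
# Re-read /etc/exports by restarting the NFS server
sudo systemctl restart nfs-server
```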
So that our nodes can offer their filesystems, we need to enable NFS access on the firewall. This is done via
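The firewall commands were elided. On CentOS 7, firewalld ships service definitions for NFS and its helpers, so a typical (permissive) configuration would be:

```shell
# Allow NFS traffic through the firewall (mountd/rpc-bind are needed for NFSv3 helpers)
sudo firewall-cmd --zone=public --permanent --add-service=nfs
sudo firewall-cmd --zone=public --permanent --add-service=mountd
sudo firewall-cmd --zone=public --permanent --add-service=rpc-bind
sudo firewall-cmd --reload
```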
Test Mounting
You should do test mounts on each node.
- Node:
stroomp00.strmdev00.org
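The test-mount commands were elided. On stroomp00 a temporary mount of the other node’s share might look like the following (paths follow the storage hierarchy above):

```shell
# Temporarily mount stroomp01's share, check it, then unmount
sudo mount -t nfs4 stroomp01.strmdev00.org:/stroomdata/stroom-data-p01 /stroomdata/stroom-data-p01
df --type=nfs4
sudo umount /stroomdata/stroom-data-p01
```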
- Node:
stroomp01.strmdev00.org
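Likewise on stroomp01, a sketch of the test mount of stroomp00's share:

```shell
# Temporarily mount stroomp00's share, check it, then unmount
sudo mount -t nfs4 stroomp00.strmdev00.org:/stroomdata/stroom-data-p00 /stroomdata/stroom-data-p00
df --type=nfs4
sudo umount /stroomdata/stroom-data-p00
```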
If you are concerned that you can’t see the mount with df, try df --type=nfs4 -a or sudo df. Irrespective, once the mounting works, make the mounts permanent by adding the following to each node’s /etc/fstab file.
- Node:
stroomp00.strmdev00.org
stroomp01.strmdev00.org:/stroomdata/stroom-data-p01 /stroomdata/stroom-data-p01 nfs4 soft,bg
achieved with
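The command was elided; appending the entry shown above could be done with, for example:

```shell
# Make the NFS mount permanent on stroomp00
sudo sh -c 'printf "%s\n" "stroomp01.strmdev00.org:/stroomdata/stroom-data-p01 /stroomdata/stroom-data-p01 nfs4 soft,bg" >> /etc/fstab'
```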
- Node:
stroomp01.strmdev00.org
stroomp00.strmdev00.org:/stroomdata/stroom-data-p00 /stroomdata/stroom-data-p00 nfs4 soft,bg
achieved with
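The command was elided; appending the entry shown above could be done with, for example:

```shell
# Make the NFS mount permanent on stroomp01
sudo sh -c 'printf "%s\n" "stroomp00.strmdev00.org:/stroomdata/stroom-data-p00 /stroomdata/stroom-data-p00 nfs4 soft,bg" >> /etc/fstab'
```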
At this point reboot all processing nodes to ensure the directories mount automatically. You may need to give the nodes a minute to do this.
Addition of another Node
If one needs to add another node to the cluster, let's say stroomp02.strmdev00.org, on which /stroomdata follows the same storage hierarchy
as the existing nodes, and all nodes have added mount points (directories) for this new node, you would take the following steps in order.
- Node:
stroomp02.strmdev00.org
- Install NFS software as above
- Configure the exports file as described above
- Restart the NFS service and enable NFS access on the firewall as described above
- Test mount the existing node file systems
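The test-mount commands were elided. On stroomp02 a sketch of the test mounts of the existing nodes' shares would be:

```shell
# Temporarily mount the existing nodes' shares, check them, then unmount
sudo mount -t nfs4 stroomp00.strmdev00.org:/stroomdata/stroom-data-p00 /stroomdata/stroom-data-p00
sudo mount -t nfs4 stroomp01.strmdev00.org:/stroomdata/stroom-data-p01 /stroomdata/stroom-data-p01
df --type=nfs4
sudo umount /stroomdata/stroom-data-p00
sudo umount /stroomdata/stroom-data-p01
```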
- Once the test mounts work, we make them permanent by adding the following to the /etc/fstab file.
stroomp00.strmdev00.org:/stroomdata/stroom-data-p00 /stroomdata/stroom-data-p00 nfs4 soft,bg
stroomp01.strmdev00.org:/stroomdata/stroom-data-p01 /stroomdata/stroom-data-p01 nfs4 soft,bg
achieved with
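The command was elided. Appending the two entries could be done with, for example, the following (this sketch assumes the /stroomdata hierarchy used throughout this document):

```shell
# Make the mounts of the existing nodes' shares permanent on stroomp02
sudo sh -c 'printf "%s\n" "stroomp00.strmdev00.org:/stroomdata/stroom-data-p00 /stroomdata/stroom-data-p00 nfs4 soft,bg" "stroomp01.strmdev00.org:/stroomdata/stroom-data-p01 /stroomdata/stroom-data-p01 nfs4 soft,bg" >> /etc/fstab'
```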
- Node:
stroomp00.strmdev00.org and stroomp01.strmdev00.org
- Test mount the new node’s filesystem as described above
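The test-mount command was elided. On each existing node, a sketch of the test mount of the new node's share would be:

```shell
# Temporarily mount stroomp02's share, check it, then unmount
sudo mount -t nfs4 stroomp02.strmdev00.org:/stroomdata/stroom-data-p02 /stroomdata/stroom-data-p02
df --type=nfs4
sudo umount /stroomdata/stroom-data-p02
```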
- Once the test mount works, make the mount permanent by adding the following to the /etc/fstab file
stroomp02.strmdev00.org:/stroomdata/stroom-data-p02 /stroomdata/stroom-data-p02 nfs4 soft,bg
achieved with
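The command was elided; appending the entry shown above could be done with, for example:

```shell
# Make the mount of stroomp02's share permanent on each existing node
sudo sh -c 'printf "%s\n" "stroomp02.strmdev00.org:/stroomdata/stroom-data-p02 /stroomdata/stroom-data-p02 nfs4 soft,bg" >> /etc/fstab'
```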