In these examples, I'm using two VPSes on the same private network, both running CentOS 7.3.

First assign a private IP on each one:

On the VPS acting as the NFS server:

ip addr add 10.42.226.1/24 dev eth1

On the VPS acting as the NFS client:

ip addr add 10.42.226.2/24 dev eth1

Optional - make the private IPs permanent:

If you want to make those settings permanent, you'll need to edit the file /etc/sysconfig/network-scripts/ifcfg-eth1 on each server and make sure you have something like this in place:

# Virtio Network Device Private Interface
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.42.226.1
NETMASK=255.255.255.0
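
On the client, the file would look the same except for the address - using the client IP we assigned above:

# Virtio Network Device Private Interface
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.42.226.2
NETMASK=255.255.255.0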

You can always follow up with `service network restart` to have CentOS pick up those changes without rebooting.

On the VPS acting as a server:

1. Install NFS:

yum -y install nfs-utils

2. Also install the net-tools package so that we can get netstat - this is always useful for diagnosing issues:

yum -y install net-tools

3. Set NFS to start on boot:

systemctl enable nfs
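
If you want to confirm that took effect, systemctl can tell you - it should print "enabled":

systemctl is-enabled nfs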

4. Move your old NFS configuration file out of the way:

mv /etc/sysconfig/nfs /etc/sysconfig/nfs-old

5. Create /etc/sysconfig/nfs and use the config below -- this will force NFS to use fixed ports instead of assigning random ones.

# Port rpc.mountd should listen on.
MOUNTD_PORT=892
# Port rpc.statd should listen on.
STATD_PORT=662
# Outgoing port statd should use. The default port is random.
STATD_OUTGOING_PORT=2020
# Enable usage of gssproxy. See gssproxy-mech(8).
#GSS_USE_PROXY="yes"

6. Restart the necessary daemons so that those changes are picked up:

systemctl restart nfslock rpcbind nfs

7. There are three mandatory ports that NFS needs to be listening on - those are: 111, 892 and 2049 - check if those are currently listening:

netstat -ntlp | egrep '111|892|2049'
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd 
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN - 
tcp6 0 0 :::111 :::* LISTEN 1/systemd 
tcp6 0 0 :::2049 :::* LISTEN -
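
If a port you expect is missing from that list, rpcinfo is a handy way to see which ports rpc.mountd and rpc.statd actually registered with rpcbind:

rpcinfo -p | egrep 'mountd|status'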

8. Now let's create a directory that we are going to use as an NFS share:

mkdir /nfsshare

9. For NFS to make this directory available to others, we need to add it to our exports - edit the file /etc/exports and add the line:

/nfsshare 10.42.226.0/24(rw,no_root_squash)
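
A quick note on those options: rw allows read-write access, and no_root_squash lets root on the client act as root on the share instead of being mapped to nobody. If you'd rather not open the share to the whole /24, you can also export it to the client's IP only:

/nfsshare 10.42.226.2(rw,no_root_squash)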

10. Tell NFS to re-read the exports file to make this change live:

exportfs -va

You should get output like this:

exporting 10.42.226.0/24:/nfsshare
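
If you want to see the export together with the default options NFS filled in behind the scenes, exportfs -v prints the full list:

exportfs -v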

11. Before we even try to mount this share from our client, let's see if the export actually shows up when queried. To do this, we use the showmount command against localhost:

showmount -e 127.0.0.1
Export list for 127.0.0.1:
/nfsshare 10.42.226.0/24

We're good! Let's move on to the client.

On the VPS acting as a client:

1. Install the necessary utilities:

yum -y install nfs-utils

2. See if we can query the shares over the network:

showmount -e 10.42.226.1

You should see the same exports as before:

Export list for 10.42.226.1:
/nfsshare 10.42.226.0/24
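
If showmount hangs or errors out here instead, a good first thing to check is whether the client can actually reach rpcbind on the server (port 111, one of the mandatory ports from earlier):

rpcinfo -p 10.42.226.1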

3. Attempt to mount the /nfsshare on /media:

mount.nfs 10.42.226.1:/nfsshare /media -v

Note the -v - it gives verbose output. This is what I got:

mount.nfs: timeout set for Wed Jun 7 05:13:17 2017
mount.nfs: trying text-based options 'vers=4,addr=10.42.226.1,clientaddr=10.42.226.2'

This worked, but is NFS using UDP or TCP? Let's find out:

mount | grep nfs | grep proto=tcp
10.42.226.1:/nfsshare on /media type nfs4 (rw,relatime,vers=4.0,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.42.226.2,local_lock=none,addr=10.42.226.1)
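
By the way, if you don't feel like grepping the output of mount, nfsstat -m (it comes with nfs-utils) prints the same per-mount options:

nfsstat -m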

The above gives us an abundance of good info - let's break it down:

type nfs4 = this is using NFS4 instead of the older NFS3

hard = this is a hard mount - a very important option - it means that if the NFS server fails, the client will keep trying to reconnect indefinitely. This sounds like a good idea, but in reality it can be disastrous: if your NFS server goes down, the client will lock up and even a simple ls /media will not be interruptible.

timeo=600 = this can be a very misleading and dangerous option. First of all, this number is not in seconds - it's in deciseconds (tenths of a second). In other words, a value of 600 means the timeout is 60 seconds. But there's more - allow me to quote the manual:

"For NFS over TCP the default timeo value is 600 (60 seconds). The NFS client performs linear backoff: After each retransmission the timeout is increased by timeo up to the maximum of 600 seconds."

Waaaaaaaait a second... remember how I said "the system will infinitely continue to try to reconnect"? Here's what's going on:

If the NFS mount is a "hard" mount, then the "NFS requests are retried indefinitely".

retrans=2 = I'll copy-paste what the manual says on this one:

"The number of times the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times. The NFS client generates a "server not responding" message after retrans retries, then attempts further recovery (depending on whether the hard mount option is in effect)"

OK - seriously - reread that last part:

"generates a .. message, then attempts further recovery.. hard mount option is in effect". Allow me to save you some time and rephrase: If it's a hard mount, it will simply print a message saying that the server is not responding and go it's merry way continuing indefinitely to reconnect, which means that the "retrans" option doesn't really do anything!

In short --- "hard" mounts can be a very bad thing in a cloud environment where servers should not be taken for granted.

Let's unmount this mount and re-mount it as a soft mount:

umount /media
mount -o proto=tcp,soft,timeo=10,retrans=3 10.42.226.1:/nfsshare /media

Now let's review what's going on here:

soft = this is a soft mount, so it will not indefinitely try to reconnect

timeo=10 = remember - this is not in seconds, it's in deciseconds, so this equals 1 second. Why so low? Because we're also using 'retrans=3'.

retrans=3 = the request will be automatically retried 3 times, and "The NFS client performs linear backoff: After each retransmission the timeout is increased by timeo up to the maximum of 600 seconds." So we don't really want timeo to have a high value - NFS will increase it on its own.
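
To put rough numbers on that (going by the linear backoff behaviour quoted above): with timeo=10 and retrans=3, the first attempt times out after about a second and each retry waits a little longer, so a failed request errors out within a handful of seconds instead of hanging forever. And if you want this soft mount to come back after a reboot, an /etc/fstab entry along these lines (same options as the mount command above) should do it:

10.42.226.1:/nfsshare  /media  nfs  proto=tcp,soft,timeo=10,retrans=3  0 0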

So are "hard" mounts really such a bad thing?

If your servers are likely to "disappear", then yes, a hard mount is a very bad thing. However, soft mounts can lead to data corruption: if a file transfer is interrupted half-way and is not retried, you'll end up with an incomplete (corrupt) file.

Sources: