Tunneling NFS4 over SSH

Submitted by dag on Thu, 2008/05/22 - 20:26

Today we needed to mount a filesystem from a system that was almost completely isolated. Instead of transferring a huge amount of data over a tunneled SSH connection, I thought: why not try mounting NFS over an SSH tunnel?

Since NFS4 defaults to TCP when both client and server support it, this was the perfect opportunity to test that new capability. In fact, it should not be hard at all.

Consider the following situation:

some-server (EL4) <-> mgmt-server (Solaris) <-> nfs-server (EL4)

So from the mgmt-server we connect to our server using SSH:

ssh -R 3049:nfs-server:2049 some-server

If "AllowTcpForwarding yes" is set in the sshd_config on some-server, this creates a tunnel from our server back to the nfs-server over the mgmt-server's SSH connection, so that connecting to port 3049 on some-server takes us to port 2049 on nfs-server.
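A quick sanity check, once logged in on some-server, is to verify that sshd is actually listening on the forwarded port. This is a sketch; it assumes netstat and telnet are available (which they typically are on EL4):

```shell
# On some-server: check that the remote forward is listening on 3049
netstat -ltn | grep 3049

# Optionally poke the tunnel end-to-end; a successful connect means
# the forward reaches port 2049 on nfs-server (Ctrl-] then "quit" to exit)
telnet localhost 3049
```

If nothing listens on 3049, the forward was refused, which usually shows up as a warning in the SSH session output.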

If you have a dedicated management server, you may want to hardcode this in your ~/.ssh/config as:

Host *
RemoteForward 3049 nfs-server:2049

On the nfs-server side, things become a bit more complicated. Configuring NFS4 is a bit different from what I was used to. Look at the following example config:

/srv/nfs *(ro,sync,insecure,hide,no_root_squash,fsid=0,no_subtree_check)
/srv/nfs/share *(ro,sync,insecure,nohide,no_root_squash,fsid=1)

The difference is that the export with "fsid=0" is considered the root of the exported directories. NFS no longer expects directories to be exported under the same location as on the NFS server.
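After editing /etc/exports on the nfs-server, the new export table still has to be loaded. A minimal sketch, run as root on the nfs-server:

```shell
# Re-read /etc/exports and synchronize the kernel export table
# without restarting the NFS service
exportfs -ra

# List the current exports with their effective options, to verify
# that fsid=0 and the other options were picked up
exportfs -v
```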

The downside is that you may have to bind-mount your real path to the tree that you export. In my case I would have to do:

mount -o bind /path/share /srv/nfs/share

And as a result, /srv/nfs/share will be exported as /share.
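If the bind mount should survive a reboot, it can also go into /etc/fstab. A sketch of the entry, using the same /path/share location as above:

```shell
# /etc/fstab entry equivalent to "mount -o bind /path/share /srv/nfs/share"
/path/share   /srv/nfs/share   none   bind   0 0
```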

All nice and dandy.

Yes, now let's do the mount:

mkdir /path
mount -t nfs4 -o port=3049,hard,intr localhost:/share /path

And this should work, at least if the permissions are set correctly. If you do have problems, the kernel messages and the mountd messages in /var/log/messages usually give a good indication of the cause. If you are unlucky, nothing is logged and it becomes guesswork.
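When the mount does fail, these are the first places I would look. A sketch; exact log locations vary per distribution:

```shell
# Client side: recent kernel messages often name the NFS error
dmesg | tail -20

# Server side: mountd and nfsd complaints (e.g. about insecure ports
# or missing exports) end up in the syslog on EL4
grep -E 'mountd|nfsd' /var/log/messages | tail -20

# Clean up a half-failed attempt before retrying
umount /path
```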

Update: My original article indicated that this was not completely possible, but the problem turned out to be related to the new NFS4 configuration.

solaris 10?

Solaris 10 has (I believe) IPFilter installed by default. Check for the existence of /usr/sbin/ipnat (and /usr/sbin/ipf).

coombs.anu.edu.au/~avalon/ is the home page. I have used it on Solaris, but not for some time - the Solaris 2.6/7 timeframe - well, at least on Solaris :->

The insecure option

Try adding "insecure" to your exports; from exports(5):
"[The secure] option requires that requests originate on an Internet port less than IPPORT_RESERVED (1024). This option is on by default. To turn it off, specify insecure."


I am ashamed to say that I overlooked it!

I expected it to be part of the nfsd configuration and not something that is set on a per-export basis.

Thanks a lot and if we ever meet, I will buy you a Belgian beer :-)


I answered too soon. My analysis of the problem (the unprivileged port) was incorrect; in fact, when I arrived at work I noticed I already had "insecure" in the NFS options.

Also, when I tested it with "secure" and an unprivileged port, the kernel messages would return

kernel: nfsd: request from insecure port (!

So the problem lay somewhere else. Good news though: I eventually made it work! And I will update the article for future reference.

PS I still owe you the beer though :)

Right tool for the job?

Why not use openvpn or some other Virtual Private Network?


That is a possibility, but I don't see how that would make the situation simpler.

NFSv4 via SSH

NFSv4 works quite well with Kerberos. This gives you the benefit of SSO, as well as encryption if so desired, over a single TCP port, making it ideal for mounting through firewalls and over insecure networks. I am not sure why you would want to tunnel NFSv4 via SSH, unless you don't have Kerberos/LDAP or AD. I've never tried it in Linux, but it works well with Solaris 10 and AD. I've also got it to work with Solaris 10 and Kerberos/OpenLDAP.

I'm always amazed by the number of protocols that people will tunnel via SSH and the hoops they jump through to get it to work. I usually use SSH for terminal or X forwarding, but a general-purpose VPN is better handled by IPsec or SSL types of VPN like OpenVPN.

Think isolated servers in a bank datacenter

For isolated servers where only SSH is allowed to the system for management purposes and no data connections (except business related) can go into the management or corporate network, you have few options.

What I usually do is provide by default a remote forward for HTTP to a distribution server, so that whenever you access these servers you can download updates and additional software. But in certain cases filesystem access is more useful so being able to tunnel NFS over SSH is very useful as well.
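Such a default HTTP forward can live in the ~/.ssh/config on the management side, so it comes up with every login. A sketch; the host pattern, port and distribution-server name here are hypothetical:

```shell
# ~/.ssh/config on the mgmt-server (hypothetical names)
# Every login to an isolated host carries a forward back to the
# distribution server's HTTP port
Host isolated-*
    RemoteForward 3128 distribution-server:80
```

On the isolated server, package tools are then pointed at http://localhost:3128/ for the duration of the session.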

The benefit of doing it like this is that there is only a short period of time (during a maintenance window) that this can be used, but by default there is no possibility to exploit it.

Compared to a permanent firewall rule this is more secure and less prone to attacks.


The bank data center is the point. NFSv4 was designed to overcome the limitations of previous versions. Using NFSv4 with a mature authentication/encryption system, like Kerberos/LDAP/certificates, allows one to design better bank data centers. Think NetApp filers with terabytes of data as the backend in the corporate network and the CentOS/Red Hat systems in the DMZ as the front ends. There is no reason why the systems in the front end can't use the backend filers through the firewall, especially a mount point that only has patches, packages, etc. - you can even export it read-only. I'll bet money that native NFSv4/Kerberos will outperform NFS tunneled through SSH any day.

In the case of CentOS/Red Hat, coupled with SELinux enabled on the front-end DMZ servers and virtualized via Xen/KVM/OpenVZ to only perform the tasks assigned to them, the risk is very minimal. You get the benefits of performance, simplicity, separation and security.

SSH has been used as "glueware" for far too long, because most network file systems have historically had a bad reputation for insecurity. Modern network protocols, like NFSv4, do not require SSH when implemented properly.

The problem is one of transitioning from the older NFSv2/3 to the newer version and implementing a solid directory/Kerberos AAA system. The bigger problem for most enterprises, though, is that they go with what they know, so they continue to use old-school methods. Lack of investment in training the sysadmins kills the spirit of innovation. Not sure what your workplace is like.

SSH use, IMHO, should be limited to network terminal access, SFTP and maybe X. I think I'll write a doc on implementing NFSv4 and Kerberos/LDAP and toss it on the CentOS wiki as a project.

Not even sure how I got onto this blog. Oh yeah, I was trying to get a release date for CentOS 5.2.


I think you're looking too

I think you're reading too much into it. This is not offered as a permanent solution, nor is it used as one.

The fact is that for isolated systems, the company does not allow any network connectivity going out of that tier. The network team will not allow that communication on the firewalls, because that would be permanent.

Doing this with SSH only exposes the tunneled traffic for the time you're using it. The tunnel is gone when you log out, and so is the NFS mount.

SSH is not being used for security here, although it doesn't hurt either. Everything else you say is unrelated to what I described.

use rsync over SSH ?

if you don't have to modify files on both sides at the same time.

rsync [-n] -e ssh -avDH --delete server1:/file/repo /local/copy/file/repo/.

But that means transferring all the data the first time (to have a copy).
On the other hand, it also provides a nice backup (at least a copy somewhere else).


Normally I would push using SFTP or rsync. But if your dataset is a few GB, only part of it is going to be used, and there are several systems to perform the action on, then tunneling NFS over SSH is a much more viable alternative than hoping there is enough room available on each server and transferring the whole dataset when only part of it is effectively used.

Eventually I did it using SFTP, but in the future NFS is more useful for these cases.


The simplest option in such cases is to use sshfs. Any situation that permits SFTP will also permit sshfs, and sshfs is dead easy compared to NFS anything.

sshfs would work

Except that sshfs needs FUSE, which does not come shipped with RHEL. So in this case NFS over SSH is simply easier.

Besides, to be able to use sshfs from that isolated server, you need to tunnel again through your existing SSH connection. So that would be sshfs over SSH, which has somewhat more overhead than NFS over SSH.

But in general sshfs is a viable alternative for remote mounting and for that reason I have it packaged in RPMforge as fuse-sshfs.
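For comparison, a minimal sshfs session looks something like this. A sketch; it assumes the fuse-sshfs package is installed, the fuse module is loaded, and the remote host and paths are hypothetical:

```shell
# Mount a remote directory over SFTP (no server-side NFS setup needed)
sshfs user@remote-server:/srv/data /mnt/data

# ... use /mnt/data like a normal filesystem ...

# Unmount again when done
fusermount -u /mnt/data
```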

nfsclient: 2.6.32-71.el6.x86_


I have yet to get NFS4 to work through a hardware firewall. The tcpdump output shows the client originating the mount on random ports.
tcpdump command used:
tcpdump -vv -x -X -i eth0 port 2049

Sample output:
[root@nfs_client ~]# tcpdump -vv -x -X -i eth0 port 2049
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
07:48:00.005079 IP (tos 0x0, ttl 64, id 31254, offset 0, flags [DF], proto TCP (6), length 60)
nfs_client.893 > nfs_server.nfs: Flags [S], cksum 0x504e (correct), seq 491207845, win 5840, options [mss 1460,sackOK,TS val 7358132 ecr 0,nop,wscale 7], length 0
0x0000: 4500 003c 7a16 4000 4006 8c52 0a0a 1e02 E..