
How to Set Up an NFS Server on AWS EC2

This tutorial explains how to set up a highly available NFS server that can be used as a storage solution (NAS – network-attached storage) for a cluster of web server instances behind a load balancer, for example ELB. If you have a web server cluster with two or more instances that serve the same web site content, then these instances must access the same data so that each one serves identical content, no matter whether the load balancer directs the user to instance 1 or instance n. This can be achieved with an NFS share on an NFS server that all web server instances (the NFS clients) can access.

Install the nfs-utils package on the NFS server instance:

[shell]# yum install nfs-utils[/shell]

The NFS setup also needs the portmap daemon:

[shell]# yum install portmap[/shell]

Every file system being exported to remote EC2 instances via NFS, as well as the access level for those file systems, is listed in the /etc/exports file. When the nfs service starts, the exportfs command launches and reads this file, passes control to rpc.mountd (for NFSv2 or NFSv3) for the actual mounting process, and then hands off to rpc.nfsd, where the file systems are made available to remote instances.

Let's say you want to export the directory /opt/nfstest on the NFS server to two remote instances with the private IPs 192.168.1.9 and 192.168.1.10:

[shell]# vi /etc/exports[/shell]

Add

[shell]
/opt/nfstest 192.168.1.9/255.255.255.0(rw,no_root_squash,async)
/opt/nfstest 192.168.1.10/255.255.255.0(rw,no_root_squash,async)
[/shell]

and save the file. Note that there must be no spaces between the options inside the parentheses.
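A quick note on the options used above: rw allows read and write access, no_root_squash lets root on the client keep root privileges on the share, and async lets the server acknowledge writes before they are committed to disk. If you would rather open the export to a whole subnet than list each client, a single line with CIDR notation also works (the 192.168.1.0/24 network below is just an illustration):

[shell]
/opt/nfstest 192.168.1.0/24(rw,no_root_squash,async)
[/shell]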

Now we use the exportfs command. It maintains the current table of exported file systems for NFS. This list is kept in a separate file, /var/lib/nfs/xtab, which is read by mountd when a remote instance requests access to mount a directory.
Normally this xtab file is initialized with the list of all file systems named in /etc/exports by invoking exportfs -a.

[shell]# exportfs -a[/shell]
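To double-check what is currently exported, and with which effective options, exportfs can also print the active export table:

[shell]# exportfs -v[/shell]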

Getting the services started

Starting the Portmapper
NFS depends on the portmapper daemon, called portmap or rpc.portmap (on newer distributions it has been replaced by rpcbind). It needs to be started first.

[shell]# /etc/init.d/portmap start[/shell]

It is worth making sure that it is running before you begin working with NFS:

[shell]# ps aux | grep portmap[/shell]
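If your distribution's init script supports it, a status check gives the same information (this assumes the RHEL/CentOS-style init scripts used throughout this tutorial):

[shell]# /etc/init.d/portmap status[/shell]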

Now start the NFS daemon

[shell]# /etc/init.d/nfs start[/shell]
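If you want both services to start automatically when the instance boots, and your AMI uses SysV init as the /etc/init.d scripts above suggest, you can enable them with chkconfig:

[shell]
# chkconfig portmap on
# chkconfig nfs on
[/shell]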

Verifying that NFS is running

[shell]# rpcinfo -p [/shell]

You will get something like this:

[shell]
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 749 rquotad
100005 2 tcp 766 mountd
100005 3 udp 769 mountd
100005 3 tcp 771 mountd
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
300019 1 tcp 830 amd
300019 1 udp 831 amd
100024 1 udp 944 status
100024 1 tcp 946 status
100021 1 udp 1042 nlockmgr
100021 3 udp 1042 nlockmgr
100021 4 udp 1042 nlockmgr
[/shell]

Please note that portmap listens on port 111 and NFS on port 2049.
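On EC2 the security group must also allow traffic on these ports from your client instances, otherwise the mount will simply hang (see the comments below for the additional mountd port caveat). As a rough sketch using the AWS CLI, with a hypothetical security group ID and client address range:

[shell]
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 111 --cidr 192.168.1.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 111 --cidr 192.168.1.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2049 --cidr 192.168.1.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 2049 --cidr 192.168.1.0/24
[/shell]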

If you later decide to add more NFS exports to the /etc/exports file, you will need to either restart the NFS daemon or re-run the exportfs command:

[shell]# exportfs -ar[/shell]

Mount the remote file system on the client

First we need to create a mount point:

[shell]# mkdir /opt/localmount [/shell]

Once the NFS client and the mount point are ready, you can run the mount command to mount the exported remote file system:

[shell]# mount -t nfs 192.168.1.3:/opt/nfstest /opt/localmount[/shell]

Here 192.168.1.3 is the IP of the NFS server, and -t specifies the file system type.
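If you want the client to remount the share automatically at boot, you can also add a line to /etc/fstab (a sketch reusing the same illustrative server IP and paths):

[shell]
192.168.1.3:/opt/nfstest /opt/localmount nfs defaults 0 0
[/shell]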

Restart the portmap daemon on the client:

[shell]# /etc/init.d/portmap restart[/shell]

Done!

Now you can verify your NFS mounts with the showmount command which shows the exports from the NFS server.

[shell]# showmount -e 192.168.1.3[/shell]

which will show output like

[shell]
/opt/nfstest 192.168.1.9/255.255.255.0,192.168.1.10/255.255.255.0
[/shell]
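You can also confirm the mount from the client side with standard tools, for example:

[shell]
# mount | grep nfs
# df -h /opt/localmount
[/shell]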

15 Comments

  1. Ed Boas says:

    This doesn’t work for me. When I try to mount, I get the following error:

    mount.nfs: mount to NFS server ‘10.162.183.36’ failed: timed out, retrying

    I am using Basic Fedora Core 8. That doesn’t recognize portmap, so I changed it to rpcbind. I also tried opening ports 111, 2049, and 35563. Any other ideas?

    Thanks!

  2. Ed Boas says:

    I finally got it to work. When starting the instance, you have to use a security group that opens all ports to the default group. For example, you could use the default security group, with SSH enabled. Also, you have to start rpc.statd. Finally, there are no spaces in “(rw,no_root_squash,async)”

  3. Srinivas T says:

    I had to do a few more steps.

    If instances belong to different security groups, additional settings are required. First assign a fixed port number to mountd. Otherwise, each time the OS boots up, it picks up a different port number and enabling ports under Amazon security group settings becomes difficult. In Ubuntu, do the following on server
    > sudo vi /etc/default/nfs-kernel-server
    RPCMOUNTDOPTS="--manage-gids -p 34642" # assign fixed mountd port

    Add following entries to Amazon security group settings in AWS Management Console
    # For portmapper
    – tcp 111 111
    – udp 111 111
    # For NFS
    – tcp 2049 2049
    – udp 2049 2049
    # For mountd
    – tcp 34642 34642
    – udp 34642 34642

    If client has an elastic IP, then hostname in the /etc/exports file should preferably be the name assigned by EC2 and not the elastic IP address itself.
    E.g.,
    /share/srv ec2-255-254-253-211.compute-1.amazonaws.com(rw,sync,no_subtree_check)
    and not
    /share/srv 255.254.253.211(rw,sync,no_subtree_check)

    The EC2 name resolves to the internal IP, which avoids getting charged for network traffic within the same availability zone. (See http://alestic.com/2009/06/ec2-elastic-ip-internal)

    Lastly client and server user and group names should have same id value. (See http://www.troubleshooters.com/linux/nfs.htm#_If_it_mounts_but_cant_access) Do ‘chown’ and ‘chgrp’ on shared directories to these common ids.

  4. admin says:

    Thanks Srinivas 🙂

  5. Kirk True says:

    I ran into the portmap vs. rpcbind issue as well as the spaces-in-the-options issue (didn’t look at the comments quickly enough 🙂). I also didn’t need to specify the subnet masks in /etc/exports and it _seems_ to work just fine.

    My NFS setup is to share files within a set of EC2 instances internally. As such I didn’t need to mess with opening ports or anything.

    Thanks!

  6. Michael Grant says:

    I needed to nfs mount between 2 (or more) amazon instances.

    I ended up adding the group to the group. For example, if my security group was named “my_aws_group”, in the “Source (IP or group)” field, I put “my_aws_group” and this seemed to open all ports between all instances that were in that group. This avoids the problem of having to pin the port number for mountd.

  7. stuck says:

    I’m stuck:

    mount.nfs: mount to NFS server ‘rpcbind’ failed: RPC Error: Program not registered
    mount.nfs: internal error

    The only difference from this tutorial is that, when I do a rpcinfo, I don’t see everything you see:

    # rpcinfo -p
    program vers proto port service
    100000 4 tcp 111 portmapper
    100000 3 tcp 111 portmapper
    100000 2 tcp 111 portmapper
    100000 4 udp 111 portmapper
    100000 3 udp 111 portmapper
    100000 2 udp 111 portmapper
    100000 4 0 111 portmapper
    100000 3 0 111 portmapper
    100000 2 0 111 portmapper
    100005 1 udp 48621 mountd
    100005 1 tcp 48621 mountd
    100005 2 udp 48621 mountd
    100005 2 tcp 48621 mountd
    100005 3 udp 48621 mountd
    100005 3 tcp 48621 mountd

    any idea ?

  8. Julius says:

    This didn’t work for my setup — it just hangs when trying to mount. However, I found these instructions which did work for me. http://www.cupcakewithsprinkles.com/amazon-ec2-and-nfs/ In any case, thanks for the post!

  9. Sam Vic says:

    Excellent article!

    The public DNS name changes when an EC2 instance is stopped and then started. It makes the NFS setup complicated, since it keeps looking for the old DNS entry. Any suggestions how to avoid that (without an elastic IP)?

  10. Brooks says:

    If you’re load balancing for fault tolerance, wouldn’t the NFS server be a single point of failure?

  11. @Sam: You’ll need to use an Elastic IP to persist the ability to connect to your instance after it is rebooted.

  12. Xavier Boix says:

    I still don’t get why do you call this “a highly available NFS server”.

    To me it looks like an ordinary single-instance NFS server, that can provide shared storage to other machines, but the NFS server itself it’s a single point of failure of the entire ecosystem.

    Maybe I’m missing something?

  13. Tim says:

    Brilliant. So now you are running inherently insecure NFS in an unsecured Amazon cloud. Good luck!

  14. stre10k says:

    And where is the highly available NFS server?

  15. galaxy says:

    Well, although the first words of this article say “highly available NFS server” they are misleading. There is nothing in this configuration that supports these words. It’s just a single point of failure, if something happens with this NFS server instance the whole thing is screwed. 🙁
