
NFS

NFS - Network File System

server             computer that makes its file systems, dirs and other resources available for remote access
clients            computers that use a server's resources
export             the act of making file systems available to remote clients
mount              the act of a client accessing the file systems that a server exports

-----------------------------------------------------

Main daemons, what are needed for NFS:

During system boot, portmap (from /etc/rc.tcpip) and the NFS related daemons (from /etc/rc.nfs) are started. Once portmap is running, the other daemons can register with it, so it knows the correct port of each daemon. When NFS (/etc/rc.nfs) starts, it checks for the existence of the /etc/exports file. If the file exists, then the system will act as a server and the appropriate daemons will be started.

# lssrc -g nfs
Subsystem         Group            PID          Status
 biod             nfs              11469140     active
 nfsd             nfs              5374622      active
 rpc.mountd       nfs              8258290      active
 rpc.lockd        nfs              9044514      active
 rpc.statd        nfs              6947500      active
 nfsrgyd          nfs                           inoperative
 gssd             nfs                           inoperative


1. portmap: (111 TCP/UDP) (on nfs server)
It tells RPC clients which port should be used to communicate with a given service.
/etc/rpc             <--this file contains all the RPC program numbers (program identifiers)
rpcinfo              <--this command can be used to check that RPC services are registered and reachable
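
A quick workability check (a sketch; output mimics the format shown later in this page, ports other than 111 and 2049 will vary per system):

# rpcinfo -p <nfs server> | egrep "portmap|nfs|mountd"   <--lists the RPC programs registered with portmap and their ports
100000 4 tcp 111 portmapper
100003 3 tcp 2049 nfs
100005 3 tcp 34095 mountd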


2. mountd: (random TCP) (on nfs server)
After files, directories and/or filesystems have been exported, an NFS client must explicitly mount them before it can use them. This is handled by the mountd daemon: it answers RPC mount requests and checks the /etc/xtab file to find out what is exported. The showmount command queries mountd to display the currently mounted filesystems. A fixed port for mountd can be specified in /etc/services.
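
Because mountd normally sits on a random port, a sketch of finding its current port through portmap:

# rpcinfo -p <nfs server> | grep mountd    <--shows the TCP/UDP ports mountd registered at startup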


3. nfsd (2049 TCP) (on nfs server)
Services client requests for filesystem operations. Once a client's mount request has been validated by mountd, the client is allowed to request various filesystem operations. These requests are handled on the server side by nfsd. If the /etc/exports file does not exist, the nfsd and rpc.mountd daemons will not start. You can get around this by creating an empty /etc/exports file.


4. lockd and statd (both on server and client)
lockd (file locking) and statd (file lock recovery after a crash) run on both server and client, and they work as a team. The lockd daemon on the client sends lock requests to the server's lockd daemon through RPC. The server's lockd daemon then asks the statd (status monitor) daemon for monitoring service. The statd daemon interacts with the lockd daemon to provide crash and recovery functions for the locking services.

The status monitor maintains information about the location of connections and their status in the /var/statmon/sm and /var/statmon/sm.bak directories and the /var/statmon/state file. When statd is restarted, it queries these files and tries to reestablish the connections it had prior to termination. To restart the statd daemon, and subsequently the lockd daemon, without prior knowledge of existing locks or status, delete these files before restarting the statd daemon.

The statd daemon should always be started before the lockd daemon.
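
A sketch of such a clean restart in the correct order (assumes existing locks can be discarded, as described above):

# stopsrc -s rpc.lockd; stopsrc -s rpc.statd
# rm -rf /var/statmon/sm /var/statmon/sm.bak /var/statmon/state   <--discard old lock/status info
# startsrc -s rpc.statd                                           <--statd must come up first
# startsrc -s rpc.lockd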


5. biod (on nfs client)
It is not needed anymore: it no longer plays an active role in managing the NFS client subsystem, because the NFS client internally manages its I/O operations to NFS servers. (The biod daemon is retained for compatibility reasons, because earlier versions might have scripts that invoke biod.) The biod daemon might be removed in future AIX releases. (man biod)

(chnfs and biod have a parameter, NumberofBiod, which specifies the number of biod threads on the client. This option has no effect and should not be used. If needed, the number of biod threads should be set as a mount option: mount -o biods=16 ...)

For historical reference only, this is what biod was used for in the past (the info below is no longer valid):
In order to improve overall NFS performance, most systems include the biod daemon which does basic read-ahead and write-behind filesystem block caching. For example, when an NFS client requests three bytes from a file, a much larger chunk (usually 4K) is actually read. When the client reads the next three bytes, no network transaction needs to occur. It is strongly recommended to run this daemon on all NFS clients, but it is not strictly required.


In NFS V4:
only nfsd, portmap, biod and nfsrgyd (name translation service) are used, and only TCP is supported (NFS V3 supports both TCP and UDP)

-----------------------------------------------------

/etc/exports

The /etc/exports file contains an entry for each directory that can be exported to NFS clients. This file is read automatically by the exportfs command. If you change this file, you must run the exportfs command to activate the changes. Only if this file is present during system startup does the rc.nfs script execute the exportfs command and start the nfsd and mountd daemons.

(You cannot export either a parent directory or a subdirectory of an exported directory within the same file system.)
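
For example, if /apps and /apps/sub live in the same filesystem, only one of them may be listed in /etc/exports (paths and hosts below are made up for illustration):

/apps      -access=lpar1
/apps/sub  -access=lpar2       <--invalid: its parent /apps is already exported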

Syntax for exported directories: /directory -option1,option2,option3...

Options:
access=client1:client2...      <--gives mount access to each client listed (If not specified, any client is allowed to mount)

ro                             <--exports dir with read-only permission (if ro is not specified, it is exported read-write)
ro=client1:client2             <--exports dir read-only to the listed clients; other clients that can access it get read-write permission
rw=client1:client2             <--exports dir read-write to the listed clients; clients not in the list get read-only permission

root=client1:client2...        <--allows root access from the specified clients ('access' option is still needed to restrict clients)


Some examples:
/apps                          <--export to the world
/apps    -access=lpar1:lpar2   <--export only to these systems
/apps -root=lpar1:lpar2        <--export to the world, but root access only possible from these systems
/apps -access=lpar1,root=lpar1 <--export to that server and root access is also possible from there
/apps -access=lpar1:lpar2,ro=lpar1,root=lpar2  <--lpar1 has read-only access (lpar2 is rw), and root of lpar2 can write in the dir as root
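
After editing /etc/exports, a minimal activate-and-verify sequence (details in the Commands section below):

# exportfs -a                  <--activate the changes from /etc/exports
# exportfs                     <--verify: lists what is currently exported (content of /etc/xtab)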

-----------------------------------------------------

showmount:

The showmount command displays a list of all clients that have remotely mounted filesystems. showmount talks to the rpc.mountd daemon, which stores this information in the /etc/rmtab file.

root@aix40: / # showmount -a    <--shows active mounts from host aix40 (defaults to the current host if no host is specified)
aix31.domain.com:/db2           <--the dir /db2 is mounted by the host aix31
                                (showmount -a <server> shows which dirs of the <server> are in use (mounted) by other hosts)

root@aix40: / # showmount -e    <--shows which directories are exported and which hosts may access them
export list for aix40:          (it does not show the existing mounts, just the possibilities (what is exported))
/sapcd (everyone)
/db2   (everyone)

checking on the client if the directory is exported on the nfs server:
showmount -e <nfs server>

-----------------------------------------------------

Commands:

/etc/xtab                       <--shows what is currently exported; after 'exportfs' this file is updated (remove an entry with exportfs -u)
/etc/rmtab                      <--contains a list of clients that are mounting resources from the server (the showmount command uses this file)
                                (a client entry is only removed from this file when the 'umount' command is issued on the client)

exportfs                        <--lists the content of /etc/xtab (this file should never be edited manually)
exportfs -a                     <--exports all directories in the /etc/exports file
exportfs /directory             <--exports only the given directory
exportfs -u /dir                <--unexports the given directory
exportfs -u -a                  <--unexports all directories in /etc/exports file

showmount -e <nfs server>       <--you can check from a client what dirs are exported on the given nfs server
showmount -a <nfs server>       <--shows which clients are currently mounting resources from the given nfs server

mknfsexp -d dir                 <--exports the given directory (it inserts a line in /etc/exports)
mknfsmnt -f <mount point> -d <remote dir> -h <nfs server> -A -E <--add entry to /etc/filesystems, so nfs mounts can be automatically mounted
                                                                (-A: auto mount, -E: allows keyboard interrupts on hard mounts)

for scripting:
on NFS server: for i in `cat list`; do echo mknfsexp -d $i; done     <--echo only prints the commands for review (dry run); remove echo to execute
on NFS client: for i in `cat list`; do /usr/sbin/mknfsmnt -f $i -d $i -h qlhdlhfc -A -E; done

-----------------------------------------------------

CONFIGURE NFS SERVER AND CLIENT:

1. Pre-checks on the client

  - check if the needed ports are open to NFS server:
    111                     TCP and UDP    portmap daemon
    2049                    TCP            nfs server daemon (nfsd)
    42812 (any chosen port) TCP            mountd

    telnet <nfs server ip> 111
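
    The other ports can be checked the same way (the mountd port is whichever one was fixed in /etc/services):

    telnet <nfs server ip> 2049
    telnet <nfs server ip> 42812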


  - check if RPC is possible between the client and server:
    rpcinfo -p <server name>     it queries the portmap daemon for info regarding services on the specified server
                                 (it should show nfs, mountd)
    rpcinfo -u aix31 nfs 3       makes a call to the specified program and version number using UDP
    showmount -e <nfs server>    it also shows whether communication to the nfs server is OK


  - check if all necessary daemons are running:
    lssrc -g nfs

    Daemons on client: rpc.statd, rpc.lockd, (on NFS version 4: nfsrgyd, gssd)

    To configure these on client:
    1. startsrc -g nfs; stopsrc -s nfsd; stopsrc -s rpc.mountd
    2. chitab "rcnfs:23456789:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons"

    (strangely, the mount command works even if none of these daemons are running)

-----------------------------------------------------

2. Configure NFS server

  - check if all the necessary daemons are running (don't forget inetd and portmap)
    lssrc -g nfs

    Daemons on server: rpc.mountd, nfsd, rpc.statd, rpc.lockd, portmap, (on NFS version 4: nfsrgyd, gssd)

    To configure these on nfs server:
    1. startsrc -g nfs (it should start all daemons)
    2. chitab "rcnfs:23456789:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons"


  - exporting a directory:
    There are 3 alternatives: smitty, editing /etc/exports manually, mknfsexp

    SMITTY:
    smitty nfs -> Netw. File Sys. -> Add a Directory to Exports List
    (this will do all the necessary actions automatically)
   

    MANUALLY:
    1. vi /etc/exports
       /ora_backup -sec=sys:none,rw,root=aixacadb1:aixacadb2

    2. export the directories which are in the /etc/exports (/etc/xtab: used by the system, and it shows what is currently exported)
        exportfs -a           exports all items listed in /etc/exports, and copies these entries to /etc/xtab   
        exportfs /dirname     exports named directory
        exportfs -i /dirname  temporarily exports (-i: it specifies that the /etc/exports file is not to be checked)
        exportfs              shows what is currently in /etc/xtab
        exportfs -u /dir      unexports the directories you specify


    MKNFSEXP:
    mknfsexp -d dir            it exports the given directory (it inserts a line in /etc/exports)

-----------------------------------------------------

3. Mount exported directory on client
There are 3 alternatives: smitty, mounting manually, mknfsmnt

    SMITTY:
    smitty nfs -> Netw. File Sys. -> Add a File System for Mounting   
    (this will do all necessary actions automatically)

    MANUALLY:
    mount <server name>:/<exported dir> /<mount point>
    (this will not put fs in the /etc/filesystems)
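
    A concrete sketch with made-up names:
    mount aix31:/db2 /mnt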

    MKNFSMNT:
    mknfsmnt -f <mount point> -d <remote dir> -h <nfs server> -A -E
    (it adds an entry to /etc/filesystems)

-----------------------------------------------------
It would seem that mounting filesystems soft would get around the hanging problem. This is fine for filesystems mounted read-only. However, for a read-write filesystem a pending request could be a write request, so simply giving up could result in corrupted files on the remote filesystem. Therefore, read-write remote filesystems should always be mounted hard, and the intr option should be specified to allow users to make their own decisions about hung processes.
-----------------------------------------------------
A soft mount will try to re-transmit a number of times. This re-transmit value is defined by the retrans option. After the set number of retransmissions has been used, the soft mount gives up and returns an error.

A hard mount retries a request until a server responds. The hard option is the default value. On hard mounts, the intr option should be used to allow a user to interrupt a system call that is waiting on a crashed server.

Define bg in the /etc/filesystems file when establishing a predefined mount that will be mounted during system startup. Mounts that are non-interruptible and running in the foreground can hang the client if the network or server is down when the client system starts up. If a client cannot access the network or server, the user must start the machine again in maintenance mode and edit the appropriate mount requests.
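
A sketch of the mount styles described above (server and paths are hypothetical):

# mount -o soft,retrans=5 aix31:/apps /mnt     <--soft: gives up after the set retransmissions, acceptable for read-only
# mount -o hard,intr aix31:/db2 /db2           <--hard + intr: recommended for read-write
(bg belongs in the options field of an /etc/filesystems stanza for boot-time mounts, like the example below)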
-----------------------------------------------------

Full NFS reset and recycle:

# stopsrc -g nfs             <--stopping nfs daemons   
# rm /etc/exports            <--removing exports file (you can just rename it if you wish to keep it)
# touch /etc/exports         <--creating again
# rm /etc/rmtab /etc/xtab    <--removing cache files
# rm -rf /var/statmon/*      <--removing 2 directories (sm, sm.bak) and a file (state)
# startsrc -g nfs            <--starting nfs daemons (content of /var/statmon will be created again automatically)

-----------------------------------------------------

/homesXXX:
        dev             = "/vol/vfs01_data01/homesXXX"
        vfs             = nfs
        nodename        = xx-server01.domain.com
        mount           = true
        type            = nas
        options         = rw,bg,hard,intr,rsize=32768,wsize=32768,timeo=600
        account         = false
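
With a stanza like the one above in /etc/filesystems, the filesystem can be mounted by its mount point alone:

# mount /homesXXX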
-----------------------------------------------------

NFSv4

NFSv4 uses the "domain" concept. NFSv3 allows access to a file based on the user ID, but NFSv4 first checks whether the NFS domains of the client and server are the same. If the configured domains differ between client and server, NFS will deny access.

On NFS Server:
1. chnfsdom mydomain.com                 <--set nfs domain (value is stored in /etc/nfs/local_domain, smitty chnfsdom works as well)
2. chnfsdom                              <--without a parameter it shows the current domain
3. startsrc -s nfsd                      <--if not running start it (makes file system operations available to clients)
4. startsrc -s nfsrgyd                   <--if not running start it (translates user/group names and IDs between servers and clients; NFSv4 is string based, not ID based)

Other daemons are not needed for NFSv4:
# lssrc -g nfs
Subsystem         Group            PID          Status
 nfsrgyd          nfs              3866830      active
 nfsd             nfs              3735726      active
 rpc.mountd       nfs                           inoperative
 biod             nfs                           inoperative
 gssd             nfs                           inoperative
 rpc.lockd        nfs                           inoperative
 rpc.statd        nfs                           inoperative

5. update exports (mknfsexp, smitty nfs, vi /etc/exports and exportfs)
cat /etc/exports: /test1 -vers=4,sec=sys:krb5p:krb5i:krb5:dh,rw,access=10.10.10.101,root=10.10.10.101


on NFS Client:
1. chnfsdom mydomain.com                       <--set nfs domain (chnfsdom without a parameter will show the domain)
2. startsrc -s nfsrgyd                         <--if not running start it (this is the only daemon needed on the client)
3. mount -o vers=4 10.10.10.100:/test1 /mnt    <--mount using NFSv4
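
To verify the NFSv4 mount afterwards, a sketch using the AIX nfs4cl utility (assuming it is installed):

# nfs4cl showfs                                <--displays data of the NFSv4 mounted filesystems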

-----------------------------------------------------

How to give write permission to root:

If we want the root user to have write rights on an exported directory, it has to be defined in /etc/exports:

aix21:root: /etc # cat exports
/home -
/notes_temp -sec=sys:none,rw,root=aix41

(exportfs -a is needed; on the client side umount is not needed, the setting becomes active automatically)
-----------------------------------------------------


Tips:
-when you remove an exported fs from a server, first remove the NFS export (and unmount it on the clients); afterwards recreate the export if needed (smitty nfs)


smitty nfs         same menus as smitty jfs2 (add, change, remove nfs fs, or export a dir, remove an export...)
nfsstat            displays statistical information about NFS and Remote Procedure Call (RPC) calls
netpmon            shows the number of reads and writes that each client is sending to the nfs server
netstat -an | grep ESTABLISHED | grep 2049  shows the NFS clients which have NFS mounts from this server
echo clio | kdb    displays nfs statistics (io count, io waiting, max wait)

------------------------

NFS through firewall:

The mountd ports are selected dynamically each time the mountd server is initialized. Therefore, the port numbers will vary from one boot to another, or when mountd is stopped and restarted.

Unfortunately, this causes a problem when used through a firewall.
The solution:

The mountd TCP and UDP ports must be different. [I used the same values and it worked.] Any free port number is valid.

1. rpcinfo -p <nfs server> | grep mount

      Produces output similar to:
      100005 1 udp 37395 mountd
      100005 2 udp 37395 mountd
      100005 3 udp 37395 mountd
      100005 1 tcp 34095 mountd
      100005 2 tcp 34095 mountd
      100005 3 tcp 34095 mountd

2. stopsrc -s rpc.mountd
3. Update /etc/services with new mountd entries.

    mountd 33333/tcp
    mountd 33334/udp

    this worked for me as well:
    mountd 33333/tcp
    mountd 33333/udp

4. startsrc -s rpc.mountd

5. rpcinfo -p <nfs server> | grep mount

      Produces output similar to:

      100005 1 udp 33334 mountd
      100005 2 udp 33334 mountd
      100005 3 udp 33334 mountd
      100005 1 tcp 33333 mountd
      100005 2 tcp 33333 mountd
      100005 3 tcp 33333 mountd

------------------------

mount: giving up on:
        10.126.0.13:/bb

vmount: Permission denied   
or
vmount: Not owner         


check/set the nfs_use_reserved_ports parameter:
1. nfso -a                           <--check nfs_use_reserved_ports if it is on 0 or 1
2. nfso -o nfs_use_reserved_ports=1  <--set that parameter to 1, to survive reboot this one as well: nfso -po nfs_use_reserved_ports=1

------------------------

# mount 10.126.0.13:/bb /bb
nfsmnthelp: 10.126.0.13: Error -1 occurred.
mount: giving up on:
        10.126.0.13:/bb
Error -1 occurred.


This is usually caused by a reverse lookup problem.
Check with the "host <ip>" command and, if necessary, make an entry (on both server and client) so that both get the same results.
The NFS server must know about the client (put it in /etc/hosts).
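
A sketch of the check (the client address and hostname below are hypothetical):

on client: host 10.126.0.13              <--resolve the server's IP
on server: host <client ip>              <--resolve the client's IP; if the results differ or the lookup fails, fix /etc/hosts, e.g.:
           echo "10.126.0.14 nfsclient01" >> /etc/hosts    <--hypothetical client entry on the NFS server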

------------------------

The /etc/rmtab file
When mountd accepts a mount request from a client, it notes the directory name passed in the mount request and the client host name in /etc/rmtab. Entries in /etc/rmtab are long-lived; they remain in the file until the client performs an explicit unmount of the file system.

It is this file that is read to generate the showmount -a output. The information in /etc/rmtab can become stale if the server goes down abruptly, or if clients are physically removed without unmounting the file system.

In this case, you would remove all locks and the rmtab file. For example:
# stopsrc -g nfs
# stopsrc -s portmap
# cd /etc
# rm -fr sm sm.bak state xtab rmtab
# startsrc -s portmap
# startsrc -g nfs
# exportfs -a

-------------------------

Received this error when I tried mounting:
# mount aix01:/nim/mksysb /mnt
mount: 1831-008 giving up on:
aix01:/nim/mksysb
vmount: The file access permissions do not allow the specified action.
NFS fsinfo failed for server aix01: error 7 (RPC: 1832-010 Authentication error)


In syslog files I saw this:
on NFS server: kern:err|error unix: nfs_server: weak authentication
on NFS client: NFS getattr failed for server aixnltest: error 7 (RPC: 1832-010 Authentication error)

SOLUTION:
solution is to set nfs_use_reserved_ports to 1, on the client:
nfso -p -o nfs_use_reserved_ports=1
-------------------------

Warning: umount:: RPC: 1832-018 Port mapper failure - RPC: 1832-008 Timed out

If umount is not possible because the NFS server is not reachable, you can force umount on the client:
umount -f /nfs_mounted_dir

(It will look like it is still hanging, and after a minute you will get the same error message, BUT the umount will be successful on the client!)

-------------------------

ORA-01580: error creating control backup file /ora_backup/PWAB/ctrl.dbf2
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options



The nfs mount was available, but it had been mounted manually, so it did not exist in /etc/filesystems.

Solution was to add it into /etc/filesystems with these options:
smitty nfs --> Network File System --> Add a File System...

bg,hard,intr,rsize=32768,wsize=32768,vers=3,proto=tcp,sec=sys,rw


-------------------------

0042-124 c_ch_nfsexp: NFS option vers=3 is NOT supported

This came up when creating mksysb backups in a NIM environment.
(I found out it is not an error, because the mksysb was successful, so it can be ignored.)

I kept looking in /etc/exports
/nim/mksysb -vers=3,sec=sys:krb5p:krb5i:krb5:dh,rw,root=aixdb1:aixdb2

After I removed -vers=3 in /etc/exports:
/nim/mksysb sec=sys:krb5p:krb5i:krb5:dh,rw,root=aixdb1:aixdb2

I received this:
0042-124 c_ch_nfsexp: NFS option sec=sys:none is NOT supported

Then I removed everything and set this, and no more errors come:
/nim/mksysb -anon=0        <--anon=0 means unknown users will get uid 0

-------------------------

Comments:

Anonymous said...

what is the difference b/w soft mount and hard mount?

Anonymous said...

how to do the Hard mount?

aix said...

A soft mount will try to re-transmit a number of times. This re-transmit value is defined by the retrans option. After the set number of retransmissions has been used, the soft mount gives up and returns an error.

A hard mount retries a request until a server responds. The hard option is the default value. On hard mounts, the intr option should be used to allow a user to interrupt a system call that is waiting on a crashed server.

As hard mount is the default option, a soft mount looks like this:
mount -o soft <server>:/<exported dir> /<mount point>

(All this info is on this page a little above...)

Anonymous said...

What is meant by a physical file system?

aix said...

I never heard about this relating to AIX. (Regarding z/OS there is some info on the IBM site.)
If you have more details, you can share them with me.

Anonymous said...

This has been in place for AIX for almost the 13 years that I have worked.

Unknown said...

Hi,

I am facing an issue with NFS CLIENT Mount point(/backup).
I need to change the NFS mount point (/backup) attributes and permissions, but when I tried the command # smit chnfsmnt this mount point is not showing.
But while executing # df -g /backup this mount point is showing the server name and everything; the # mount command is also showing it.
But when I type # lsnfsmnt it is not showing the client mount point /backup.
Please suggest how to change the attributes and permissions in this situation?

Please help me this mount point is important to the application team.

aix said...

Hi, smit chnfsmnt or lsnfsmnt will show the filesystem only if you mounted it in a way that created an entry in /etc/filesystems. If you go to smit nfs -> add a fs for mounting, and there choose "Mount now, add entry to /etc/filesystems or both?": both, after that lsnfsmnt will show it and smit chnfsmnt as well.

Unknown said...

Hi,

This filesystem is mounted but the entry is not in /etc/filesystems. This filesystem is already in use by the application team.
Is it possible to edit the info in /etc/filesystems without unmounting the filesystem?
If it is possible, please suggest how to proceed.
If it is not possible, please guide me how to resolve this issue; I need to change the permissions of the filesystem.

Thanks

aix said...

Hi, I would not edit /etc/filesystems manually... but it can probably work... you can test on a test system what happens when you nfs mount and later add it to /etc/filesystems... it may be that you can modify it... I don't know.
If it is a production system, I would stay on the safe side... stop the application, umount the filesystem, add it with smit nfs (with the /etc/filesystems option, and automount if needed)...

Unknown said...

Hi,

On one NFS client, while I am trying to mount the shared mount point, I am getting a Permission Denied error.
I have checked the NFS services and portmapper. All services are running fine.
I also checked using # showmount -e; the mount point is shared to this particular client as well.
But, still I am getting Permission Denied.
What might be the reason??
What do I need to do to mount this shared mount point on client side successfully?
please suggest.

Thanks.

aix said...

Hi,
I would check /etc/exports again to see if it looks OK, then check the permissions of the shared directory. You can check syslog on the NFS server and client for more info. (A little above there is a case where this helped: nfso -p -o nfs_use_reserved_ports=1, please check that one as well.)

Unknown said...

Hi,
I was asked this question in an interview..
We have a directory shared on an NFS server. That directory is being accessed from an NFS client. For some reason the NFS server crashes or goes offline. Now how do we unmount the file system or unlock the hung session on the NFS client?? They don't want the current putty session to be closed...

Please advise. Thanks

aix said...

Hi, probably this helps: http://aixblogs.blogspot.hu/2009/03/use-ip-alias-trick-to-solve-hung-nfs.html

Unknown said...

i was facing the issue "ORA-27054: NFS file system where the file is created or resides is not mounted with correct options"

this page helped me in fixing the issue... Thanks a lot for the useful information... This blog rocks always!!!!!

Anonymous said...

I am having this NFS issue and I don't know what else to check. The nfs daemons are all running on both servers except for "gssd", and both servers are able to ping. /etc/hosts has been checked on both client and server. I checked the /etc/exports of the server and showmount -e; everything is showing the right result, but when I go to mount on the client it comes up with "giving up server:/mnt; vmount: no such file or directory". I don't know what else to check or do, can someone help. PLEASE

Anonymous said...

I have also checked nfso -p -o nfs_use_reserved_ports=1, and the portmap one = 1. I am getting really frustrated; I think there must be a very simple thing I am missing, I just don't know. Can you help, PLEASE

fkhan said...

I am having a problem with a different owner and group on an NFS mount point on a client node. Please help

Anonymous said...

With showmount -a on the (nfs) server, you can show the list of clients of the exports. If you use DNS or /etc/hosts, check the DNS client name record and the name in the /etc/hosts file. I solved it that way.

Anonymous said...

Also ping by IP and by name from the NFS server to the client and backwards; you must validate the name resolution.

Anonymous said...

To mount an nfs share from an AIX 5.1 server on an AIX 7.1 client, I had to add an entry for the client into the server's /etc/hosts.
Beats me why, as the client is known by DNS on the server, but it worked!
Server /etc/netsvc.conf: hosts = local,bind
so it should have used DNS if the name was not found in /etc/hosts. I WAS able to ping the client from the host before adding the client to /etc/hosts.

juan said...

hi, I need to export an NFS read-only for one host and read-write to another host... is it possible? I have aix 5.3 tl11 nfs V3... tks

aix said...

Hi, this is written in "man exportfs" under -o Options:
"ro=Client[:Client] Exports the directory with read-only permission to the specified Clients. Exports the directory with read-write permissions to Clients not specified in the list. A read-only list cannot be specified if a read-write list has been specified."

juan said...

tks !!!!!!

Anonymous said...

Thanks for your technical drive.
Today I faced an issue where, after a server reboot, one of the client servers was not able to do an nfs mount of an nfs cluster cross-mounted file system.
Followed the basic steps below; issue resolved.

RPC: 1832-008 Timed out
nfsmnthelp: nbaxa056: Connection timed out
mount: retrying

ACTION TAKEN:
On the nfs client:

# showmount -e nbaxa056
export list for nbaxa056:
/wasmast
nbaxa242,nbaxa248,nbaxa249,nbaxa546,10.15.146.70,10.15.146.186,10.15.146.1

# traceroute nbaxa056
trying to get source for nbaxa056
source should be 10.15.150.102
traceroute to nbaxa056.hlmk.boulder.mebs.ihost.com (10.15.146.105) from
10.15.150.102 (10.15.150.102), 30 hops max
outgoing MTU = 1500
1 * * *
2 * * *

On the nfs server:
# host 10.15.150.102
hangs....

# cat /etc/netsvc.conf
hosts=local,bind4

# vi /etc/hosts
Added the following
10.15.150.102 nbaxa546

On the nfs client:
# mount nbaxa056:/wasmast /wasmast

Successful

Anonymous said...

And updated the /etc/netsvc.conf file
hosts=local,bind4

aix said...

Thanks for your step-by-step solution, I appreciate it :)

Tusar said...

grt

Marcel said...

congratulations


Samiindin said...

Hello,

I'm getting the below error when I try to mount the nfs filesystem; please help with the resolution.

mount <server>:/u01/ora_disk1 /devciw
mount: 1831-010 server not responding: RPC: 1832-019 Program not registered

Unknown said...

Hello,

How can I restrict some NFS exports to specific users?

Any help will be very appreciated

Unknown said...

Hello,

I have a doubt: if we give rw permission, does that give the user access to change the permissions of the file?

please help me..

Unknown said...

A low-level view of the physical characteristics of a file, such as its location on a disk or its physical structure, for example, whether indexed or sequential.

Anonymous said...

Hello everyone,

I am having a problem with an nfs mount that just doesn't seem to be covered so far. I have the mount point shared out from the NFS server, and I can mount it just fine on the client. The directory that is shared is owned by an application account, and the account exists on both the client and the server. When mounted by the client, everything looks as it should in terms of the directory ownership, but the files and directories inside the share show to be owned by root:system on the client, while they show to be owned by appacct:appgrp on the server.

I'm sure I'm missing something simple, but I'm just not seeing it. Pointers would be helpful.

Thank You

Anonymous said...

Never mind. the nfs share I created was crossing LV mounts. Like I said, it was something simple, I was just too far into the weeds.

aix said...

Thanks for your follow up, and posting it as well :)

Unknown said...

Thank you for a great article.
A question on restricting exports; How would one restrict NFS exports to specific IP ranges (eg 10.0.0.0/21 or 192.168.0.0/24 etc) instead of using hostnames or unrestricted/anonymous exports please?

Unknown said...

Hi, I am unable to mount the directory through automount;
could someone help me with the steps to configure a FS in automount
