nfs-doctor is a small command line tool written in C to help debug NFS servers from the client side.
The idea is simple: you point it at one IP or hostname, and the tool checks the things that usually break in NFS: network, rpcbind, NFS versions, mountd, exports, permissions, root squash, locking, stale handles, and some basic performance.
It is not magic, and it will not replace a good server-side analysis. But it helps a lot in understanding whether the problem is the network, NFS config, permissions, UID/GID mapping, or something stranger.
Today nfs-doctor can do these checks:
- test if `rpcbind` TCP port `111` is reachable
- test if NFS TCP port `2049` is reachable
- query the RPC service map from rpcbind (supports IPv6 fallback)
- detect registered NFS, mountd, lockd/NLM and statd/NSM services
- test NFS v2, v3 and v4 with RPC `NULLPROC` (including v4.1 and v4.2 hints)
- test mountd v1, v2 and v3
- optionally test RPC over UDP with `--udp`
- enumerate exports using mountd
- check client prerequisite daemons (`nfs-client.target`, `rpc.gssd`, `nfs-idmapd`)
- detect Kerberos tickets and configuration with `--krb5`
- mount exports automatically under `/tmp/nfsdoctor-*`
- try NFS v4.2 first, then fall back to v4.1, v4, and v3
- parse and verify effective mount options from `/proc/self/mountinfo`
- capture RPC stats (retransmissions, auth refreshes) before and after tests
- extract deep latency metrics from `/proc/self/mountstats`
- run filesystem checks after mount (close-to-open consistency, special files, quotas)
- test read/traverse permission
- test directory listing
- test POSIX ACLs, NFSv4 ACLs, generic xattrs, and SELinux contexts
- test create/write/read/fsync when allowed (with configurable timeouts)
- test advanced I/O operations: `copy_file_range`, `fallocate`, and `O_DIRECT`
- test advisory locks with `fcntl`
- detect practical `root_squash` behavior
- simulate UID/GID access with `prctl` termination safety
- simulate supplemental groups with `--groups`
- run metadata latency test with create/rename/unlink
- run stale file handle loop looking for `ESTALE`
- check for pNFS layouts and NFSoRDMA connectivity
- run external `fio` benchmarks alongside internal smoke tests
- safely handle temporary files, mounts and folders using `O_NOFOLLOW` (see the sketch after this list)
- perform safe audits with `--dry-run` and rate limiting (`--delay-ms`)
- generate hierarchical JSON reports for automation
- generate standalone HTML reports with inline CSS and Base64-embedded assets (`--html`)
- output colored text and progress bars on interactive terminals
- run Docker fixture tests for regression checks
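The `O_NOFOLLOW` handling mentioned above follows a common pattern: create probe files so that a symlink planted on the export by another client cannot redirect the write. A minimal sketch of that pattern, with a hypothetical helper (this is not the tool's actual code):

```c
#include <fcntl.h>
#include <stdio.h>

/* Create a fresh test file inside the mounted export. O_EXCL refuses
 * to reuse an existing name (including a dangling symlink), and
 * O_NOFOLLOW refuses to follow a symlink, so a hostile symlink at
 * this path cannot redirect the write elsewhere. */
static int create_probe_file(int dirfd, const char *name)
{
    int fd = openat(dirfd, name,
                    O_CREAT | O_EXCL | O_NOFOLLOW | O_WRONLY, 0600);
    if (fd < 0)
        perror("openat");
    return fd;
}
```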
By default the output is compact. If you want all details, use --verbose.
NFS problems are very environment dependent. The result can change because of:
- firewall rules
- server export options
- NFS version
- kernel client state
- UID/GID mapping
- root squash
- ACLs
- SELinux/AppArmor on the server
- server load
- stale file handles that only happen during real use
So if the tool reports that no ESTALE occurred, it only means the tool did not reproduce it during the test window. It does not mean the problem can never happen.
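For context, the stale-handle loop boils down to something like the sketch below: reopen and reread a path until an operation fails with `ESTALE`. This is a simplified illustration with a hypothetical helper, not the tool's actual code:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Repeatedly reopen and read a file on the NFS mount. If the server
 * invalidates the handle between iterations, open() or read() fails
 * with ESTALE. Absence of ESTALE proves nothing beyond this window. */
static int probe_estale(const char *path, int iterations)
{
    for (int i = 0; i < iterations; i++) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            if (errno == ESTALE)
                return 1;           /* reproduced */
            perror("open");
            return -1;              /* some other error */
        }
        char buf[64];
        if (read(fd, buf, sizeof buf) < 0 && errno == ESTALE) {
            close(fd);
            return 1;
        }
        close(fd);
    }
    return 0;                       /* not reproduced in this window */
}
```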
On Debian or Ubuntu:

```
sudo apt-get update
sudo apt-get install -y build-essential pkg-config libtirpc-dev
```

On Fedora/RHEL style distros:

```
sudo dnf install -y gcc make pkgconf-pkg-config libtirpc-devel
```

For live mount tests you also need the NFS client tools.

Debian or Ubuntu:

```
sudo apt-get install -y nfs-common
```

Fedora/RHEL style distros:
```
sudo dnf install -y nfs-utils
```

Normal build:

```
make
```

Clean and rebuild:

```
make clean && make
```

Small self-check:

```
make check
```

Install:

```
sudo make install
```

Install in another prefix:

```
make PREFIX=/opt/nfs-doctor install
```

Uninstall:

```
sudo make uninstall
```

Manual compile, if you want:

```
gcc -O2 -Wall -Wextra -I/usr/include/tirpc nfsdiag.c -ltirpc -o nfsdiag
```

Full diagnostic:
```
sudo ./nfsdiag 192.168.1.10
```

Verbose mode:

```
sudo ./nfsdiag --verbose 192.168.1.10
```

Only network and RPC checks, without mounting anything:

```
./nfsdiag --no-mount 192.168.1.10
```

Test only one export:

```
sudo ./nfsdiag --export /data 192.168.1.10
```

Pass mount options:

```
sudo ./nfsdiag --mount-options soft,timeo=30,retrans=2 192.168.1.10
```

Do not create/write test files:

```
sudo ./nfsdiag --read-only 192.168.1.10
```

Keep the temp folder for manual inspection:

```
sudo ./nfsdiag --keep-temp 192.168.1.10
```

The default output is clean and short. For example, on a healthy server it can look something like:
```
nfsdiag: 192.168.0.21
[OK] 1 export(s) discovered
summary: ok=13 warn=0 fail=0
```
If you want to see all probe steps, use:
```
./nfsdiag --verbose 192.168.0.21
```

Warnings and failures always appear in normal mode. Informational and low-level OK messages only appear in verbose mode.
For automation, use JSON.
JSON to stdout, without human text mixed together:
```
./nfsdiag --json 192.168.1.10
```

JSON to file, keeping stdout empty:

```
./nfsdiag --json=report.json 192.168.1.10
```

The JSON includes:
- tool name
- host
- timestamp
- system information (kernel, hostname, arch)
- summary (ok, warn, fail)
- options used
- exports (hierarchical list of mount tests with performance metrics, ACLs, etc)
- global events
- recommendations
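As an illustration only (the exact key names and nesting here are assumptions, not a documented schema), a report has roughly this shape:

```
{
  "tool": "nfs-doctor",
  "host": "192.168.1.10",
  "timestamp": "...",
  "system": { "kernel": "...", "hostname": "...", "arch": "..." },
  "summary": { "ok": 13, "warn": 0, "fail": 0 },
  "options": { ... },
  "exports": [ { "path": "/data", ... } ],
  "events": [ ... ],
  "recommendations": [ ... ]
}
```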
For human-readable reports that can be easily shared or attached to tickets, use HTML:
```
./nfsdiag --html=report.html 192.168.1.10
```

The generated HTML file is fully standalone.
Simulate one UID/GID:
```
sudo ./nfsdiag --uid 1000 --gid 1000 192.168.1.10
```

Simulate more than one identity:

```
sudo ./nfsdiag --uid 1000 --gid 1000 --uid 65534 --gid 65534 192.168.1.10
```

Simulate supplemental groups:

```
sudo ./nfsdiag --uid 1000 --gid 1000 --groups 10,20,30 192.168.1.10
```

This is useful because many NFS problems are not really NFS protocol problems; often the cause is a UID, GID, supplemental group, ACL, or root squash issue.
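Internally, identity simulation like this is usually done in a short-lived child process that drops privileges before touching the mount. The sketch below shows the general pattern under that assumption (hypothetical helper, not the tool's actual code); note the order: `setgroups()` and `setgid()` must run while still root, before `setuid()` drops privileges:

```c
#define _GNU_SOURCE
#include <grp.h>
#include <signal.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that becomes uid/gid (plus supplemental groups) and
 * tests access to a path. Returns 0 if the identity can read/traverse
 * it, 1 if not, 2 if dropping privileges failed, -1 on local error. */
static int test_access_as(uid_t uid, gid_t gid,
                          const gid_t *groups, size_t ngroups,
                          const char *path)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Die if the parent dies, so no stray child keeps running. */
        prctl(PR_SET_PDEATHSIG, SIGKILL);
        if (setgroups(ngroups, groups) != 0 ||
            setgid(gid) != 0 ||
            setuid(uid) != 0)
            _exit(2);
        _exit(access(path, R_OK | X_OK) == 0 ? 0 : 1);
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

If any of the three privilege-dropping calls fails, the child exits immediately, so a partially dropped identity is never used for the test.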
Change the write/read test size (the example below is 160 MiB):

```
sudo ./nfsdiag --bench-bytes 167772160 192.168.1.10
```

Change metadata latency iterations:

```
sudo ./nfsdiag --bench-iterations 500 192.168.1.10
```

Change the stale handle loop length:

```
sudo ./nfsdiag --stale-iterations 1000 192.168.1.10
```

Run benchmarks using fio instead of the internal C loop (requires fio to be installed):

```
sudo ./nfsdiag --bench-type=fio 192.168.1.10
```

The performance test is only a smoke test. It is not a replacement for full benchmarking, though enabling fio provides more accurate storage baseline metrics.
Timeout for external commands like mount and umount:
```
sudo ./nfsdiag --command-timeout 15 192.168.1.10
```

Delay between testing each export (rate limiting):

```
sudo ./nfsdiag --delay-ms 500 192.168.1.10
```

Simulate the tool execution without actually mounting or modifying anything:

```
./nfsdiag --dry-run 192.168.1.10
```

Try to isolate live mounts in a private mount namespace:

```
sudo ./nfsdiag --mount-namespace 192.168.1.10
```

This needs root or CAP_SYS_ADMIN.
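For reference, the standard way to get this isolation on Linux is `unshare(2)` followed by making the namespace's mounts private, so test mounts never propagate back to the host mount table. A minimal sketch (an assumption about the mechanism, not the tool's actual code):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>

/* Enter a private mount namespace: mounts made after this call are
 * invisible outside the process. Requires root or CAP_SYS_ADMIN. */
static int enter_private_mount_ns(void)
{
    if (unshare(CLONE_NEWNS) != 0) {
        perror("unshare(CLONE_NEWNS)");
        return -1;
    }
    /* Under systemd, / is usually a shared mount, so new mounts would
     * still propagate back; recursively mark everything private. */
    if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
        perror("mount(MS_PRIVATE)");
        return -1;
    }
    return 0;
}
```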
Probe UDP RPC too:
```
./nfsdiag --no-mount --udp 192.168.1.10
```

Force IPv4 direct TCP checks:

```
./nfsdiag --ipv4-only --no-mount 192.168.1.10
```

Force IPv6 direct TCP checks:

```
./nfsdiag --ipv6-only --no-mount nfs-server.example.com
```

Disable the NFSv4 pseudo-root fallback:

```
sudo ./nfsdiag --no-nfs4-discovery 192.168.1.10
```

The NFSv4 fallback is useful when the server is NFSv4-only and mountd is not available.
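The pseudo-root idea is that an NFSv4 server exposes its exports under a single root, so mounting `server:/` is enough to discover them without mountd. A cascading version-fallback mount, like the one the tool describes, can be sketched with `mount(2)`; this is a simplified illustration with placeholder arguments, not the tool's actual code:

```c
#include <stdio.h>
#include <sys/mount.h>

/* Try NFS versions from newest to oldest until one mounts.
 * src is in "host:/path" form; the kernel's text-based NFS mount
 * API requires an addr= option with the server's IP address. */
static int mount_with_fallback(const char *src, const char *dst,
                               const char *server_ip)
{
    static const char *versions[] = { "4.2", "4.1", "4", "3" };
    char opts[128];

    for (size_t i = 0; i < sizeof versions / sizeof *versions; i++) {
        snprintf(opts, sizeof opts, "vers=%s,addr=%s",
                 versions[i], server_ip);
        if (mount(src, dst, "nfs", 0, opts) == 0) {
            printf("mounted %s with NFS v%s\n", src, versions[i]);
            return 0;
        }
    }
    return -1;
}
```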
```
Usage: ./nfsdiag [OPTIONS] <server-ip-or-hostname>

Options:
  -e, --export PATH         Test only this export
  -o, --mount-options OPTS  Extra mount options
      --no-mount            Run network/RPC checks only
      --keep-temp           Do not remove /tmp/nfsdoctor-* after tests
      --read-only           Do not create/write test files
      --uid UID             Simulate access as UID
      --gid GID             GID paired with the last --uid
      --groups G1,G2        Supplemental groups for simulation
      --timeout SEC         Network/RPC timeout
      --command-timeout SEC Timeout for mount/umount commands
      --fs-timeout SEC      Timeout for filesystem operations in benchmarks
      --mount-namespace     Use a private mount namespace when possible
      --json[=PATH]         Write JSON report
      --udp                 Probe RPC over UDP too
      --ipv4-only           Force IPv4 direct TCP checks
      --ipv6-only           Force IPv6 direct TCP checks
      --no-nfs4-discovery   Disable NFSv4 pseudo-root fallback
      --krb5                Check Kerberos prerequisites (ticket, gssd)
      --bench-iterations N  Metadata latency iterations
      --stale-iterations N  ESTALE loop iterations
      --bench-bytes BYTES   Bytes used in read/write smoke test
  -v, --verbose             Show detailed output
  -h, --help                Show help
```
- `0`: no warnings or failures
- `1`: warning or failure found
- `2`: usage error or local runtime error
Warnings return 1 because in automation they usually need attention.
The project has Docker fixtures to reproduce bad NFS situations.
The Makefile has targets to list the fixtures, build them all, or build a single one, for example:

```
make docker-build-read-only-export
```

It can also run the whole automated fixture test suite, or only one test:

```
make test-fixture-rpcbind-unreachable
```

Some tests need root because they do real NFS mounts from the host. If the host cannot run kernel NFS inside Docker, the test runner skips those cases instead of failing everything.
The current fixture set includes:
- `rpcbind-unreachable`
- `nfs-port-unreachable`
- `rpc-map-missing-nfs`
- `mountd-unavailable`
- `empty-exports`
- `mount-denied`
- `permission-denied`
- `acl-unsupported`
- `identity-denied`
- `read-only-export`
- `root-squash`
- `locking-missing`
- `stale-handle`
- `slow-performance`
This version has these improvements:
- fully modular C architecture
- hierarchical per-export JSON output
- timeouts for filesystem operations (`--fs-timeout`)
- robust FD leak prevention and `poll()` migration
- deep metrics via `/proc/self/mountstats` and `mountinfo`
- RPC stats monitoring (`/proc/net/rpc/nfs`) for retransmissions
- client daemon prerequisite checks (`rpcbind`, `nfs-client.target`, `idmapd`)
- Kerberos detection and support (`--krb5`, `gssd` checks)
- NFSv4 ACL detection (`system.nfs4_acl`)
- NFSv4.1 and NFSv4.2 cascading mount support
- real IPv6 RPC support
Be careful when running against production exports.
By default the tool may create hidden `.nfsdoctor-*` files to test write/read behavior. If you do not want this, use:

```
sudo ./nfsdiag --read-only 192.168.1.10
```
Also, UID/GID simulation requires root because the tool uses setgid, setgroups, and setuid in child processes.
Some things are impossible to guarantee from the client side:
- `ESTALE` only appears if the handle becomes stale during the test
- SELinux/AppArmor problems may show up only as a generic permission denied
- ACL info depends on what the NFS client exposes
- performance numbers are only smoke-test values
- Docker NFS fixtures depend on host kernel and Docker privileges
So use this tool as a fast diagnostic helper, not as the only source of truth.