NFS - not mounting at boot, permission denied on first access

Hi folks

I've looked all over and cannot find a solution to this, though it seems lots of folks have experienced similar - but not identical - issues (so the solutions I've tried so far have yielded no positive results).

I have two major issues. I wasn't sure whether to create two threads, so to avoid spamming I made one. Suggestions on where to look for diagnostics and what to check are welcome. I am fairly comfortable in a shell/command line, but the inner workings of where to look and what to look for are still beyond me.

System info:
Client

  • F39 WS - latest updates
  • DHCP enabled
  • /etc/fstab
    # Storage server mount
    192.168.1.2:/ /mnt/storagenas nfs4 _netdev,auto 0 0

Server

  • OpenMediaVault 6.x - latest updates.
  • NFS v3 and v4 - switching between versions doesn't seem to change the issue.
  • Sample export
    /export/Data 192.168.1.0/24(fsid=a60cc014-9d8a-43ec-8a86-3012bd263d93,rw,subtree_check,insecure)

Issue #1 - /etc/fstab NFS mounts don’t mount on boot.
Manually running sudo mount /mnt/mntpoint mounts just fine.

Research suggests some kind of delay in network startup; however, my understanding is that _netdev takes care of that. I am not sure how to diagnose the network delay issue in the logs.
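From what I can tell, the place to look would be the journal for the generated mount unit (systemd names the unit after the mount point, so /mnt/storagenas becomes mnt-storagenas.mount) - something along these lines:

    # Confirm the generated unit name for the mount point
    systemd-escape -p --suffix=mount /mnt/storagenas

    # This boot's journal entries for that mount unit
    journalctl -b -u mnt-storagenas.mount

    # Current state of the unit and what it is ordered after
    systemctl status mnt-storagenas.mount
    systemctl list-dependencies --after mnt-storagenas.mount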

Issue #2 - Mountpoints show a lock icon and issue an "access denied" on first access

  • Exactly as it says on the tin - on first access I am prompted for a PW.
      • If I provide a PW, I get an access denied and all future access attempts are denied.
  • If I ESC the prompt (i.e. don’t provide a PW) and refresh the Files window, the mount point no longer displays a lock icon and I can access the folder just fine.

Very odd. The only thing I can think of is uid/gid mapping. There's a mismatch in gid (1000 on client, 100 on server) - however, the server also has the user mapped to multiple groups, one of which is 1000.
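For reference, this is how I compared the numeric IDs on both ends (the username and the exact output are illustrative):

    # On the client (F39)
    id myuser
    # uid=1000(myuser) gid=1000(myuser) groups=1000(myuser),...

    # On the server (OMV)
    id myuser
    # uid=1000(myuser) gid=100(users) groups=100(users),1000(myuser),...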

[edits] formatting.

Could you try with nfs instead of nfs4?


I originally tried this - or so I thought. However, I did it again at your suggestion. The key difference is that I now have to mount a specific export, vs. the root export like I can with NFSv4 (see the two entries below).
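For clarity, the two entry styles look like this (the /export/Data path is borrowed from the sample export above; the nfs line is the new one):

    # NFSv4 can mount the server's pseudo-root:
    192.168.1.2:/ /mnt/storagenas nfs4 _netdev,auto 0 0

    # NFSv3 has to name the full export path:
    192.168.1.2:/export/Data /mnt/test nfs _netdev,auto 0 0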

Immediately upon mounting the new mount point (/mnt/test, let's say), it created a symlink which I didn't notice last time. Output as follows:

    Created symlink /run/systemd/system/remote-fs.target.wants/rpc-statd.service → /usr/lib/systemd/system/rpc-statd.service.

After a reboot, all the mount points are now connected. I am not sure if this is due to the symlink or whether the regular NFS mount is "kick-starting" the NFS4 mount point somehow. Going to disable the regular mount temporarily and reboot. BRB.

[edit] typos

OK, so the mounts are now showing up after boot even with the NFS(3?) mount removed from /etc/fstab.

So, I wonder then if - in my initial setup - I did something such that the above-mentioned symlink wasn't created. If I interpret it correctly, the mount target (remote-fs.target) needs what looks like a monitoring daemon for NFS (rpc-statd.service) to be available before mounting.
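For anyone wanting to check the same thing on their system, these should show whether rpc-statd is hooked into remote-fs.target (a sketch based on my reading; rpc-statd ships with nfs-utils):

    # Unit state for rpc-statd
    systemctl is-enabled rpc-statd.service
    systemctl status rpc-statd.service

    # What remote-fs.target pulls in, and the runtime symlink itself
    systemctl list-dependencies remote-fs.target
    ls -l /run/systemd/system/remote-fs.target.wants/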

Anyhow, it seems to be working now - though I wish I had something more concrete.

Issue #2, however, remains - sometimes the mount points throw an "access denied" and a PW prompt, and if I escape that prompt and refresh the window, I can browse the mount point with no issues… Very odd.
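Next thing I'll try when it happens again (just a sketch): hitting the mount from a shell at the same moment, to see whether the denial comes from the NFS mount itself or from the Files/GVfs layer:

    # Does plain shell access hit the same permission error?
    ls -la /mnt/storagenas

    # Ownership and permissions on the mount root
    stat /mnt/storagenas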

…aaaand as a side note, I have now figured out that the “fail to reboot” issue I am experiencing doesn’t seem to manifest if I reboot my machine soon after starting it (but that’s a separate discussion!)