The kernel versions used for the server/client setup are below.
Server: 5.16.13-200.fc35.armv7hl
Clients: 5.16.13-200.fc35.x86_64 (Silverblue/i3 Spin)
All systems were updated on Mar 15.
Server: configuration and the firewall are done, and the share in /etc/exports is exported to the clients. But I need help addressing the message below. Do I need to correct values in the [nfsd] section of the /etc/nfs.conf configuration file?
rpc.nfsd[927]: rpc.nfsd: Unable to request RDMA service
$ sudo systemctl status nfs-server.service
[sudo] password for xx:
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendo>
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Wed 2022-01-12 00:00:40 GMT; 2 months 0 days>
Main PID: 938 (code=exited, status=0/SUCCESS)
CPU: 185ms
Jan 12 00:00:40 offroad systemd[1]: Starting NFS server and services…
Jan 12 00:00:40 offroad rpc.nfsd[927]: rpc.nfsd: Unable to request RDMA service>
Jan 12 00:00:40 offroad systemd[1]: Finished NFS server and services.
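The "Unable to request RDMA service" message is generally harmless on hardware without an RDMA-capable NIC; rpc.nfsd tries to register the RDMA transport at startup and logs this when it cannot. If you want to silence it, the [nfsd] section of /etc/nfs.conf has an rdma switch. A minimal sketch, assuming your nfs-utils version accepts a boolean here (check nfs.conf(5) on your system):

```
# /etc/nfs.conf (excerpt)
[nfsd]
# Stop rpc.nfsd from trying to register the RDMA transport
rdma=n
```

After editing, restart the service with sudo systemctl restart nfs-server.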
The status output above is from when I put the IP addresses in the exports file.
The file I created looks like this:
/test/nfs_share IP address/24(rw,sync,no_all_squash,root_squash) # server
/test/nfs_share IP address(rw,sync,no_all_squash,root_squash) # client 1
/test/nfs_share IP address(rw,sync,no_all_squash,root_squash) # client 2
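For reference, exports(5) allows one line per path with several host(options) entries, each with no space before the opening parenthesis and no doubled parentheses. A sketch with placeholder addresses (192.168.1.x is an assumption; substitute your DHCP-reserved IPs):

```
# /etc/exports -- one path, multiple single-host clients (placeholder IPs)
/test/nfs_share 192.168.1.10(rw,sync,no_all_squash,root_squash) 192.168.1.11(rw,sync,no_all_squash,root_squash)
```

After editing, re-export with sudo exportfs -ra and verify the result with sudo exportfs -v.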
As an extra step, I checked the router and confirmed DHCP was enabled.
Still the same. I’m not sure whether I made a silly mistake in the file or missed a step with DHCP.
Configure a static IP address in rc.conf: not done - is it necessary?
Configure the DHCP server in the router: DHCPv4 Server is enabled in my router
Static DHCPv4 reservations configured for server and clients
Disable NFSv4: is it necessary, as suggested here?
If you’re using DHCP reservations, then you should be good. NFSv4 should work between a Fedora server and client. There have been a number of people reporting that NFSv4 has not been working between Fedora and Synology/QNAP stuff recently.
Regarding showmount -e failing, it looks like rpcbind might not be running? Can you verify that rpcbind is running on both ends (systemctl status rpcbind)? If not, you can enable it with systemctl enable --now rpcbind and then try showmount -e again.
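Spelling out those checks as commands (service names are the Fedora defaults; the firewalld service set is an assumption, and the server address is a placeholder):

```
# Verify rpcbind on both ends; enable it if it is not running
systemctl status rpcbind
sudo systemctl enable --now rpcbind

# On the server, confirm the firewall allows the NFS-related services
sudo firewall-cmd --list-services
sudo firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}
sudo firewall-cmd --reload

# Then retry from the client (substitute your server's address)
showmount -e <server-ip>
```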
Just to get some obvious questions out of the way - is the client within that /24 subnet? And are the nfs/rpcbind services allowed in firewalld (or nftables, if not using firewalld) on both ends?
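As a quick sanity check for the /24 question, Python's standard ipaddress module can tell you whether a client address falls inside the exported network. The addresses below are placeholders for illustration only:

```python
import ipaddress

def in_subnet(client_ip: str, export_net: str) -> bool:
    """Return True if client_ip falls inside the exported subnet."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(export_net, strict=False)

# Hypothetical addresses; substitute your own
print(in_subnet("192.168.1.42", "192.168.1.0/24"))  # → True  (inside the /24)
print(in_subnet("192.168.2.42", "192.168.1.0/24"))  # → False (outside the /24)
```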
If so, I would run tcpdump on the server side while the client tries to connect, to see where it breaks down or whether the client is reaching the server at all.
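A possible tcpdump invocation for that, assuming NFSv4 on TCP 2049 plus rpcbind on 111 and mountd on 20048 (the port set is an assumption; narrow it to your setup):

```
# On the server: watch NFS-related traffic while the client attempts to connect
sudo tcpdump -i any -nn 'port 2049 or port 111 or port 20048'
```

If nothing appears when the client connects, the traffic is being dropped before it reaches the server (routing or firewall); if you see SYNs with no replies, look at the server-side firewall.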
All checked. Still, the same message persists on both the server and the client.
I just want to define access for multiple NFS clients in the exports file, not an entire subnet, so I removed the /24 (I mixed up the DHCP-reserved IP with a /24 network, which is my mistake). Each entry now uses a single host.
I’ll hold off on running tcpdump until I complete the exports file and fstab configuration.
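When you get to fstab, a single-host NFSv4 mount entry usually looks like the sketch below (the server address and mount point are placeholders; _netdev just defers mounting until the network is up):

```
# /etc/fstab (excerpt) -- placeholder server IP and paths
192.168.1.10:/test/nfs_share  /mnt/nfs_share  nfs4  defaults,_netdev  0  0
```

Test it without rebooting via sudo mount /mnt/nfs_share (after sudo systemctl daemon-reload if systemd complains about a changed fstab).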