Sshfs automount from fstab

I’m trying to set up an sshfs automount from fstab. From the command line, as a non-root user, the following mount command works:

$ sshfs -o IdentityFile=~erick/.ssh/id_ed25519 erick@server:/mnt/data /mnt/data

That mounts just fine.

However, when I move this to fstab and check with mount -a, I get:

$ sudo mount -a
read: Connection reset by peer

After much trial and error, my current fstab entry looks like:

erick@server:/mnt/data /mnt/data sshfs user,_netdev,default_permissions,reconnect,allow_other,identityfile=/home/erick/.ssh/id_ed25519 0 0
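
An aside for anyone copying this entry: allow_other is normally restricted to root, so using it together with the user option (i.e. mounting as a regular user) also needs user_allow_other enabled in /etc/fuse.conf:

# /etc/fuse.conf
user_allow_other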

Any help would be appreciated. Thanks

edit:

It looks like this is some issue with root initiating the ssh session as my user: if I mount the mount point directly without sudo, it succeeds, but when I use sudo mount /mnt/data I get the 'connection reset' message above. I have the sshd logs on the server cranked up to DEBUG, and I see the ssh connection from sudo mount /mnt/data getting accepted and then killed. It’s unclear why from the sshd logs:

Aug 18 21:55:09 jupiter sshd[10092]: Accepted key ED25519 SHA256:HYU7n6x/jzqIfXXGk4Yp9hmWjVCplPEZGWpqIVTpw3A found at /home/erick/.ssh/authorized_keys:1
Aug 18 21:55:09 jupiter sshd[10092]: debug1: restore_uid: 0/0
Aug 18 21:55:09 jupiter sshd[10092]: Postponed publickey for erick from 192.168.1.4 port 50580 ssh2 [preauth]
Aug 18 21:55:09 jupiter sshd[10092]: Connection closed by authenticating user erick 192.168.1.4 port 50580 [preauth]
Aug 18 21:55:09 jupiter sshd[10092]: debug1: do_cleanup [preauth]
Aug 18 21:55:09 jupiter sshd[10092]: debug1: monitor_read_log: child log fd closed
Aug 18 21:55:09 jupiter sshd[10092]: debug1: do_cleanup
Aug 18 21:55:09 jupiter sshd[10092]: debug1: PAM: cleanup
Aug 18 21:55:09 jupiter sshd[10092]: debug1: Killing privsep child 10093
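
One way to poke at this is to reproduce by hand the ssh session that root initiates on my behalf (the key path matches my fstab entry):

$ sudo ssh -v -i /home/erick/.ssh/id_ed25519 erick@server true

The -v output shows where the handshake dies. A classic gotcha worth ruling out here is root’s own /root/.ssh/known_hosts missing the server’s host key, since the host-key confirmation prompt can’t be answered from under mount.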

Replying to my own comment here… this turns out to be some issue with sudo, as evidenced by the journalctl logs:

Aug 18 22:19:14 lenobot sudo[6199]: pam_systemd(sudo:session): Cannot create session: Already running in a session or user slice

Using su to switch to root and issuing the mount command works as expected (exact commands below). I still haven’t tried a full restart, though… that’s next.
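
For reference, the sequence that worked was just:

$ su -
# mount /mnt/data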

edit: … sigh, last update tonight

This still isn’t working on reboot. It looks like there is something systemd doesn’t quite like now. From journalctl:

Aug 18 22:45:40 lenobot mount[1887]: read: Connection reset by peer
Aug 18 22:45:40 lenobot systemd[1]: mnt-data.mount: Mount process exited, code=exited status=1
Aug 18 22:45:40 lenobot systemd[1]: Mounted /mnt/data.

Ok… last last update. Reading that last journalctl entry reminded me that mounts are all handled by systemd now, so I tried manually creating a .mount unit by copying the generated entry from fstab… still no luck. I just see the following in the logs:

Aug 18 23:19:54 lenobot systemd[1]: Unmounting /mnt/data...
Aug 18 23:19:54 lenobot systemd[1]: mnt-data.mount: Mount process exited, code=exited status=32
Aug 18 23:19:54 lenobot systemd[1]: mnt-data.mount: Failed with result 'exit-code'.
Aug 18 23:19:54 lenobot umount[6785]: umount: /mnt/data: not mounted.
Aug 18 23:19:54 lenobot systemd[1]: Unmounted /mnt/data.
Aug 18 23:19:54 lenobot systemd[1]: Mounting /mnt/data...
Aug 18 23:19:54 lenobot mount[6786]: read: Connection reset by peer
Aug 18 23:19:54 lenobot systemd[1]: mnt-data.mount: Mount process exited, code=exited status=1
Aug 18 23:19:54 lenobot systemd[1]: Mounted /mnt/data.
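
For completeness, the hand-written unit was essentially a copy of what systemd-fstab-generator produces, roughly the following (compare against systemctl cat mnt-data.mount for the generated original; note that mount unit names must match the escaped mount path, i.e. mnt-data.mount for /mnt/data):

[Unit]
Description=/mnt/data
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
SourcePath=/etc/fstab

[Mount]
What=erick@server:/mnt/data
Where=/mnt/data
Type=sshfs
Options=user,_netdev,default_permissions,reconnect,allow_other,identityfile=/home/erick/.ssh/id_ed25519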

Sooooo… I’ve just created a hack that runs the mount command as a oneshot service, as my user, on startup. This seems to work just fine.

[Unit]
# see https://discussion.fedoraproject.org/t/sshfs-automount-from-fstab/2951
Description=Hack oneshot service because I can't get sshfs to automount correctly
# sshfs needs the network up before it can connect
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# run as my user, since root-initiated ssh sessions are what's failing;
# the user option in the fstab entry allows the non-root mount
User=erick
# mount/umount pick up the rest of the options from the fstab entry
ExecStart=/usr/bin/mount /mnt/data
ExecStop=/usr/bin/umount /mnt/data
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
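
Assuming the unit is saved as, say, /etc/systemd/system/mnt-data-sshfs.service (unlike .mount units, the name of a service unit is arbitrary), it gets enabled with:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now mnt-data-sshfs.service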

I’m calling this fixed.

