How to set up systemd service files for auto backup to a USB drive

Hey Everyone :fedora:

I need to set up a systemd service for an rsync script I have. I want the service to run once the USB drive is mounted (this is usually at /run/media/$USER/$UUID/).

Here is my service file:


[Unit]
Description=Backup directories to external drive
After=mount.target

[Service]
User=root
Type=oneshot
ExecStart=/usr/local/bin/rsync_backup_script.sh

[Install]
WantedBy=multi-user.target

The service file has the SELinux context:
unconfined_u:object_r:systemd_unit_file_t:s0
and permissions
-rw-r--r--. 1 root root
as expected.

Here is an example of the rsync script (tested, and it works as expected):


#!/bin/bash


# Destination directory on external drive
DEST_DIR="/run/media/definitive_group/5003a2b4-88c9-4e8a-acef-df00707c50f3/"

# Exclude any unwanted files or directories (optional)
EXCLUDE_PATTERN="--exclude .git"

# Run rsync with options
rsync -ahP $EXCLUDE_PATTERN "/home/$USER/Blender_Project" \
                            "/home/$USER/Bookmarks" \
                            "/home/$USER/Documents" \
                            "/home/$USER/User_Dir" \
                            "/home/$USER/GIMP" \
                            "/home/$USER/Inkscape_Projects" \
                            "/home/$USER/Pictures" \
                            "$DEST_DIR"
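As a side note on the script above: a bash array is a safer way to carry multiple `--exclude` options than an unquoted string, since each pattern then reaches rsync as its own argument. A sketch (the `build_cmd` helper is hypothetical and only prints the argument list; it is not part of the original script):

```shell
#!/bin/bash
# Keep exclude patterns in an array so word splitting cannot mangle them.
EXCLUDES=(--exclude .git --exclude "*.tmp")

# Hypothetical helper: print each argument rsync would receive, one per line.
build_cmd() {
    printf '%s\n' rsync -ahP "${EXCLUDES[@]}" "$@"
}

build_cmd "/home/$USER/Documents" /mnt/backup/
```

In the real script you would replace `printf '%s\n'` with an actual invocation, i.e. `rsync -ahP "${EXCLUDES[@]}" "$SRC" "$DEST_DIR"`.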

When the drive mounts after decryption, the script does not start. I have had numerous errors, which I resolved and which caused me to alter the script in many ways. I could post those here if needed for context. This is the base service file, so anything we do now should be based on it.

Any help is much appreciated! Surely this will help others in the future.

I’ve never tried it, but my guess would be that you would want to create a udev rule containing something like RUN+="/usr/bin/systemctl start myservice.service"

Edit: Wow, it looks like that Arch wiki even has a link to a blog post very similar to what you are trying to do: Scripting with udev - jasonwryan.com

1 Like

Here is a udev rule I created that did not work, but I am willing to try this.

ACTION=="add", 
KERNEL=="sd[a-z][1-9]", 
{ID_FS_UUID}=="5003a2b4-88c9-4e8a-acef-df00707c50f3", 
RUN+="/usr/bin/systemctl start rsync_backup_script.service"

Note :

I have since removed those files, but am willing to try this again. It’s crucial to my new workflow because I do forget to run the script manually due to time constraints.


It’s unfortunate that it’s from 10 years ago! So much has changed with systemd and how distros use certain features of it. I did come across this very blog, but due to time constraints and probably too much time spent on the previous iterations, I did not give it :100:% of my attention.

I tried this:

Also this one, which was very convoluted:

I’d try something closer to what is in that blog post:

SUBSYSTEM=="usb", ACTION=="add", ENV{ID_FS_UUID}=="5003a2b4-88c9-4e8a-acef-df00707c50f3", ENV{SYSTEMD_WANTS}="rsync_backup_script.service"

P.S.: It looks like your earlier rule was missing an ENV.

P.P.S.: I’m not sure about the pluralization of “SUBSYSTEM”. I see matches for it both ways (plural and singular) when I run grep SUBSYSTEM /usr/lib/udev/rules.d/*.

1 Like

On this webpage, Systemd timers, I found a description some time ago of how to do that, and it just works:

  • Executable script in ~/bin, contents about the same as you have
  • Service file and timer file in ~/.config/systemd/user (user is the word user, not your username)

This way you do a backup as you, not as root.
No need to create complicated rules.
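To make that concrete, here is a minimal sketch of the two files (file names, paths, and the schedule are illustrative, not from the linked page), both placed in ~/.config/systemd/user/:

```ini
# ~/.config/systemd/user/rsync-backup.service (hypothetical name)
[Unit]
Description=Rsync backup (user service)

[Service]
Type=oneshot
# %h expands to the invoking user's home directory in user units.
ExecStart=%h/bin/rsync_backup_script.sh

# ~/.config/systemd/user/rsync-backup.timer (hypothetical name)
[Unit]
Description=Run rsync-backup.service on a schedule

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user enable --now rsync-backup.timer`. Note this triggers on a schedule, not on mount.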

1 Like

I suspect the $USER used when the service file runs is not the same as $USER when you are testing the script. Your service file specifies the user as root so it probably is trying to use the path /home/root/Documents, etc. as the file paths to copy.

Maybe you could try using the actual user name in the script instead of $USER.
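If that is the case, one workaround is to resolve the home directory from the passwd database rather than from the environment. A minimal sketch (the `home_of` function name is mine, not from the thread):

```shell
#!/bin/bash
# Look up a user's home directory via getent(1) instead of trusting
# $USER/$HOME, which may differ (or be unset) when run under systemd.
home_of() {
    getent passwd "$1" | cut -d: -f6
}

# Example: substitute your actual login name.
home_of root    # prints /root on typical Linux systems
```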

Create a udev rule that triggers the service:

..., TAG+="systemd", ENV{SYSTEMD_WANTS}="luks-backup.service"

Make the service run as your user:

[Service]
User=user_name

Fetch the LUKS passphrase from the GNOME keyring:

luks_open() {
	udisksctl unlock --no-user-interaction -b "${LUKS_DEV}" \
		--key-file <(echo -n "${LUKS_PASS}")
	udisksctl mount --no-user-interaction -b "${LUKS_VOL}"
}
luks_close() {
	udisksctl unmount --no-user-interaction -b "${LUKS_VOL}"
	udisksctl lock --no-user-interaction -b "${LUKS_DEV}"
}
sync_exec() {
	...
}
LUKS_DEV="/dev/disk/by-uuid/..."
LUKS_VOL="/dev/disk/by-label/..."
LUKS_PASS="$(secret-tool lookup gvfs-luks-uuid "${LUKS_DEV##*/}")"
luks_open
sync_exec
luks_close
3 Likes

Hmm, I will have to look into that today. So with this setup, I would only need the udev rule, the rsync script, and the systemd service file? Like I have now? No .timer file required.

It’s worth a try; funnily enough, this never came up during my web search! :laughing:

Another gotcha to watch out for is SELinux. I once wrote a systemd service that runs rsync to synchronize ESPs on systems with mirrored system disks (bootsync). But I found that I had to create a special rule to allow processes running as rsync_t access to the files under /boot which were labeled boot_t. FWIW, below is the SELinux type enforcement rule that I used to grant my systemd service access to boot_t files.

module bootsync 1.0;

require {
	type rsync_t;
	type boot_t;
	class lnk_file read;
	class dir { add_name create getattr open read remove_name rmdir search write };
	class file { create getattr open read rename setattr unlink write };
}

#============= rsync_t ==============

allow rsync_t boot_t:lnk_file read;
allow rsync_t boot_t:dir { add_name create getattr open read remove_name rmdir search write };
allow rsync_t boot_t:file { create getattr open read rename setattr unlink write };

I think another way around that problem (if you encounter it) is to change the label on /usr/bin/rsync to bin_t, but you would still want to create a SELinux fscontext rule so that your change would be preserved across updates of the rsync binary.
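For anyone following along, the usual way to compile and load a type-enforcement file like the one above is with the standard SELinux policy tools (the file names below simply match the `module bootsync 1.0;` declaration):

```shell
checkmodule -M -m -o bootsync.mod bootsync.te
semodule_package -o bootsync.pp -m bootsync.mod
semodule -i bootsync.pp
```

Alternatively, `audit2allow -M bootsync` can generate and package such a module directly from the AVC denials in the audit log.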

1 Like

Sorry for the confusion, that was just an edit for the sake of generalization.

To note, I did test the script as both my user and root. It worked, and journalctl -u produced the rsync verbose info when run manually.

I don’t know. I’m mostly speaking theoretically. I’ve never done this myself. :slight_smile:

1 Like

This is not possible, as I and the users have USB keys. On my machine I can enter the password or point to a cryptsetup ... --key-file to unlock it, since I do not carry my keys with me, so a password/key file is my solution. So triggering after the unlock and mount to /run/media/$USER/$UUID/ still stands.

This is worth the try as well. :handshake:t5:

Theoretically my initial service file should have worked, but I got a bunch of errors related to the UUID, the file name, and others.

I did run into that with another script, but it also had to deal with gpg, and I solved it with a chown at the end of the script, which is probably not the way I would do it anymore... But it’s running flawlessly, so I’m not going back to it right now! :laughing:

Good to know !

1 Like

Given all the information here, I’ve got some more work to do. So I will update this thread with my findings, including errors and obstacles!

Thanks all.

1 Like

FYI, I just noticed (and corrected) an error in my earlier post (#4). I had copied the udev rule from that blog post, but the last term should be an assignment (=), not a comparison (==). Beware getting those mixed up when writing udev rules. :slightly_smiling_face:

1 Like

Nothing’s worked so far, but I have found so many rabbit holes that I think I’ll write a Quick Docs entry & maybe a Fedora Magazine article from this experience.


I have run into numerous issues and approaches. I even broke down today and took a chance to see what ChatGPT/Gemini would say... Don’t ask ChatGPT model 4o anything pertaining to systemd, it will outright lie :laughing:; as for Gemini :stop_sign: don’t ask it anything!

So I tried this today... There are issues here. The article describes a simple scheduled task, while my scenario requires systemd to wait for the LUKS-encrypted drive to be decrypted and for the mount to occur at its default location, which is /run/media/$USER/$UUID.

So I tried to work around that. Because I’m dealing with an external HDD, I wanted to wait for the decryption and mount, then wait a moment before executing the script. So I included logic in the script to account for that.

My script now looks like this:

#!/bin/bash

# Function to check if USB drive is mounted
is_usb_mounted() {
    local device=$1
    grep -qs "$device" /proc/mounts
}

# Wait until USB drive is mounted or timeout is reached
timeout=120  # Timeout in seconds
elapsed=0
while ! is_usb_mounted "/dev/disk/by-id/usb-WD_easystore_25FB_32544B4B454A4244-0:0"; do
    sleep 1
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
        echo "Timeout reached. USB drive not mounted."
        exit 1
    fi
done

# Proceed with backup
echo "USB drive mounted. Proceeding with backup..."


# backup commands


I decided to use /dev/disk/by-id/ for this go-round, but I can go back to UUID for testing. Either way, running this script with the .service file did not work. I’ll provide journalctl output later.

1 Like

Hmm, I see that /proc/mounts has fancy SELinux perms:

$ ls -Z /proc/mounts                                                           
system_u:object_r:proc_t:s0 /proc/mounts

And it looks like it is doing some sort of fancy namespace isolation:

$ ls -al /proc/mounts 
lrwxrwxrwx. 1 root root 11 Jun 12 17:16 /proc/mounts -> self/mounts

I think I’d try something more along the lines of while ! mountpoint -q "/run/media/$USER/$UUID"; do ... instead of trying to parse /proc/mounts.
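A sketch of that suggestion (assuming util-linux’s mountpoint(8) is available; the function name and timeout handling are mine, not from the original script):

```shell
#!/bin/bash
# Poll until the given path is a mount point, or give up after a timeout.
# mountpoint -q asks the kernel directly, so there is no /proc/mounts
# parsing and no symlink-vs-resolved-device mismatch.
wait_for_mount() {
    local path="$1" timeout="${2:-120}" elapsed=0
    while ! mountpoint -q "$path"; do
        sleep 1
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "Timeout reached. $path not mounted." >&2
            return 1
        fi
    done
}

# Hypothetical usage: wait_for_mount "/run/media/$USER/$UUID" 120 && echo "backup..."
```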

Have you tried doing this with a path unit?

3 Likes

You beat me to it this morning! Yes, I have, and it’s the closest I have gotten to making this work. In essence it “should” work.

From the man pages:

SYNOPSIS

       path.path

DESCRIPTION

       A unit configuration file whose name ends in ".path" encodes
       information about a path monitored by systemd, for path-based
       activation.

       This man page lists the configuration options specific to this
       unit type. See systemd.unit(5) for the common options of all unit
       configuration files. The common configuration items are
       configured in the generic [Unit] and [Install] sections. The path
       specific configuration options are configured in the [Path]
       section.

       For each path file, a matching unit file must exist, describing
       the unit to activate when the path changes. By default, a service
       by the same name as the path (except for the suffix) is
       activated. Example: a path file foo.path activates a matching
       service foo.service. The unit to activate may be controlled by
       Unit= (see below).

       Internally, path units use the inotify(7) API to monitor file
       systems. Due to that, it suffers by the same limitations as
       inotify, and for example cannot be used to monitor files or
       directories changed by other machines on remote NFS file systems.

      When a service unit triggered by a path unit terminates
       (regardless whether it exited successfully or failed), monitored
       paths are checked immediately again, and the service accordingly
       restarted instantly. As protection against busy looping in this
       trigger/start cycle, a start rate limit is enforced on the
       service unit, see StartLimitIntervalSec= and StartLimitBurst= in
       systemd.unit(5). Unlike other service failures, the error
       condition that the start rate limit is hit is propagated from the
       service unit to the path unit and causes the path unit to fail as
       well, thus ending the loop.

So the key points here are:

  • Internally, path units use the inotify(7) API to monitor file systems.
  • When a service unit triggered by a path unit terminates (regardless whether it exited successfully or failed), monitored paths are checked immediately again, and the service accordingly restarted instantly.

So my .path file should work:

[Unit]
Description=EasyStore Mount

[Path]
PathExists=/run/media/my-user/5003a2b4-88c9-4e8a-acef-df00707c50f3

[Install]
WantedBy=default.target

Buuuut... It doesn’t, or should I say it tries too much! I get a lot of errors like:

XFS (dm-3): Mounting V5 Filesystem 5003a2b4-88c9-4e8a-acef-df00707c50f3
Jun 12 23:55:41  (ackup.sh)[7486]: EasyStore_Rsync_Backup.service: Failed to determine supplementary groups: Operation not permitted
Jun 12 23:55:41  systemd[4249]: Starting EasyStore_Rsync_Backup.service - Backup script after USB mount...
Jun 12 23:55:41  systemd[4249]: EasyStore_Rsync_Backup.service: Main process exited, code=exited, status=216/GROUP
Jun 12 23:55:41  systemd[4249]: EasyStore_Rsync_Backup.service: Failed with result 'exit-code'.
Jun 12 23:55:41  systemd[4249]: Failed to start EasyStore_Rsync_Backup.service - Backup script after USB mount.
Jun 12 23:55:41  systemd[4249]: Starting EasyStore_Rsync_Backup.service - Backup script after USB mount...

Also :

Jun 12 23:00:07  systemd[4273]: Failed to start EasyStore_Rsync_Backup.service - Backup script after USB mount.
Jun 12 23:00:07  systemd[4273]: EasyStore_Rsync_Backup.service: Start request repeated too quickly.
Jun 12 23:00:07  systemd[4273]: EasyStore_Rsync_Backup.service: Failed with result 'exit-code'.
Jun 12 23:00:07  systemd[4273]: Failed to start EasyStore_Rsync_Backup.service - Backup script after USB mount.
Jun 12 23:00:07  systemd[4273]: EasyStore_Rsync_Backup.path: Failed with result 'unit-start-limit-hit'.

The latter I tried to fix with two directives:
TriggerLimitIntervalSec= and TriggerLimitBurst=
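For reference, a sketch of where those directives would go: they apply to the triggering unit, so they belong in the [Path] section of the .path file (the values below are illustrative, and they require a systemd recent enough to support trigger rate limiting on path units):

```ini
[Unit]
Description=EasyStore Mount

[Path]
PathExists=/run/media/my-user/5003a2b4-88c9-4e8a-acef-df00707c50f3
# Illustrative values: allow at most one activation per minute.
TriggerLimitIntervalSec=60
TriggerLimitBurst=1

[Install]
WantedBy=default.target
```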

So here I am. . . I’ll keep posting with more info as I get it, hopefully working.

Was this as a user or system service?

I need to retest, but I believe when I added

TriggerLimitIntervalSec= and TriggerLimitBurst=

I had moved the files to /etc/systemd/system, whereas before I was trying to run them from ~/.config/systemd/user/.