My question was partly asked in "Can systemd services be associated with more than one timer and vice versa?" by @habono and answered by @kpfleming, but I’d like to delve a bit deeper into the topic.
Say I have a bunch of service units. (Instances of the same template service, in point of fact. I don’t expect that detail should make any difference, but I’ve been wrong before.) I want them all to be launched by a single timer, so they’re all going to be part of the same systemd target.
Now, say I want all of those instance units to run serially, one by one until they’re all complete, whenever the target is activated by the timer. What’s the {best, most-“correct”} configuration to ensure that happens?
- `After=` dependencies on the preceding unit, for all but the one that should run first? (Sketched just below.)
- `Conflicts=` dependencies on all of the other units in the target, for each unit?
- Both?
- Something else?
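For concreteness, the `After=` option from the first bullet is the one I can picture most easily. As a rough sketch, with hypothetical instance names and a drop-in used as just one way to attach the ordering, it would amount to something like:

```ini
# Hypothetical drop-in: /etc/systemd/system/mount-updatedb@srv-data2.service.d/order.conf
# Start this instance only after the previous one in the chain has finished
# (After= is ordering-only; it adds no requirement dependency).
[Unit]
After=mount-updatedb@srv-data1.service
```

But that hard-codes the chain order into N-1 drop-ins, which is part of why I’m wondering whether there’s a more idiomatic arrangement.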
Probably-unnecessary background (for the curious)
For the longest time now, my fileserver has been configured to create a separate `mlocate` database for each of its mounted filesystems, stored locally in `<mountpoint>/.mlocate/mlocate.db` on each one. The system’s `updatedb` runs were configured to ignore those filesystems when building the main database¹, an `/etc/cron.daily/` job did the `updatedb` scan on each of the secondary filesystems in turn, and a `LOCATE_PATH` environment variable set on my user account ensured that any `mlocate` queries I ran on the fileserver would scan all of the available databases.
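(Concretely, the legacy arrangement boils down to something like the following; the mount points and script name are stand-ins rather than my real ones.)

```sh
#!/bin/sh
# Stand-in for the /etc/cron.daily/ job: rebuild each filesystem's own
# database in turn, rooted at its mount point.
for mp in /srv/data1 /srv/data2 /media/archive; do
    updatedb --database-root "$mp" --output "$mp/.mlocate/mlocate.db"
done
```

```sh
# And in my shell profile, roughly:
export LOCATE_PATH=/srv/data1/.mlocate/mlocate.db:/srv/data2/.mlocate/mlocate.db:/media/archive/.mlocate/mlocate.db
```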
The whole thing was originally motivated, more than a decade ago, as a couple of different simultaneous proofs-of-concept:
- For removable devices, to make them searchable when migrating them between machines, despite possibly being mounted in `/media/` and not part of the main database.
- For remote mounts, to detect when an SFTP-mounted remote contained one or more databases that could be discovered and used for fast searching even from a different system (or multiple different systems).
I never got the remote-discovery parts wired up to take advantage of the `.mlocate/`-hosted databases from remote systems (never even completely worked out exactly how it would work, TBH), but I’ve maintained the multiple databases ever since, and I’d like to continue to do so with `plocate`. However, since `plocate`’s `updatedb` scans are run as systemd service units triggered by a systemd timer unit, I figure it’s time to retire the `/etc/cron.daily/`-driven version of those mountpoint `updatedb` scans, and migrate those to systemd as well.
So, my plan is to create a `mount-updatedb@.service` template, with the argument being the mount point, and then create one instance for each filesystem that should contain a database. With that done, I’ll need a timer to trigger those scans. I’d rather not deal with creating/juggling N timers for N filesystems, so creating a single `.timer` unit that triggers a systemd target containing all of the `updatedb` instances makes a lot of sense. And because some filesystems share the same physical storage device, it’s best if the scans run serially instead of in parallel.
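In sketch form, with hypothetical unit names, an illustrative database location, and an `updatedb` path that will vary by distro, the pieces I have in mind look roughly like this:

```ini
# /etc/systemd/system/mount-updatedb@.service  (template; instance = escaped mount point)
[Unit]
Description=Per-filesystem locate database update for %f
RequiresMountsFor=%f

[Service]
Type=oneshot
# %f expands to the unescaped instance name as an absolute path, e.g. /srv/data1
ExecStart=/usr/bin/updatedb --database-root %f --output %f/.mlocate/mlocate.db

[Install]
WantedBy=mount-updatedb.target
```

```ini
# /etc/systemd/system/mount-updatedb.target  (groups all of the enabled instances)
[Unit]
Description=Per-filesystem locate database updates
```

```ini
# /etc/systemd/system/mount-updatedb.timer  (the single timer)
[Unit]
Description=Daily per-filesystem locate database updates

[Timer]
OnCalendar=daily
Persistent=true
Unit=mount-updatedb.target

[Install]
WantedBy=timers.target
```

Each instance would be enabled with something like `systemctl enable mount-updatedb@srv-data1.service` (the instance name being the `systemd-escape --path` form of the mount point), so that it gets pulled in whenever the target is activated. What that layout doesn’t give me is the serialization, which is exactly what the question above is about.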
Notes
1. (Actually, I think that’s false. Initially, I was #NotAmused by what I thought was the `plocate` RPM arbitrarily overwriting my `/etc/updatedb.conf` without preserving local modifications, but on further reflection I don’t think it had any local modifications. The existing restriction to skip `/media` would’ve kept the mounted filesystems out of the main database, and any other configs pertaining to the mountpoint scans would be part of the `cron.daily` script instead. Still, thank goodness for etckeeper, which allowed me to examine the previous config and determine that, no, it hadn’t actually been customized after all.)
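(For context, the “existing restriction” is just the stock `PRUNEPATHS` line in `/etc/updatedb.conf`; the exact value below is illustrative rather than quoted from my config.)

```ini
# /etc/updatedb.conf (excerpt): /media in PRUNEPATHS keeps anything mounted
# there out of the main database.
PRUNEPATHS = "/tmp /var/spool /media /mnt"
```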