I'm trying to start a container at boot time using systemd. The setup is as follows.
I have a container named “unifi”.
```
CONTAINER ID  IMAGE                   COMMAND           CREATED      STATUS           PORTS                                                                                                                                                NAMES
0b3b250e03c4  localhost/unifi:6.0.41  /opt/unifi/unifi  12 days ago  Up 1 second ago  0.0.0.0:3478->3478/udp, 0.0.0.0:5514->5514/udp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8443->8443/tcp, 0.0.0.0:8843->8843/tcp, 0.0.0.0:10001->10001/udp  unifi
```
I created a service file containing:

```
ExecStart=/usr/bin/podman start -a unifi
ExecStop=/usr/bin/podman stop -t 2 unifi
```
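For reference, a minimal sketch of what the full unit might look like. The two Exec lines are from my setup above; everything else (the target names, the restart policy) is an assumption to adapt:

```
[Unit]
Description=Unifi container
Wants=network-online.target
After=network-online.target

[Service]
# podman start -a keeps the process in the foreground, so the default
# Type=simple is fine here.
Restart=on-failure
ExecStart=/usr/bin/podman start -a unifi
ExecStop=/usr/bin/podman stop -t 2 unifi

[Install]
WantedBy=multi-user.target
```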
When I start the container manually with
`systemctl start unifi-container`, it works fine.
I enabled the service:

```
$ systemctl status unifi-container
● unifi-container.service - Unifi container
   Loaded: loaded (/etc/systemd/system/unifi-container.service; enabled; vendor preset: disabled)
```
but after a reboot the container is not started. There’s no error message in the logs and no sign systemd even tries to start the container.
I had a similar issue when I tried to start a Docker container with systemd. In the end I worked around it with a couple of bash scripts, one for start-up and one for stop, and called these instead in the service file.
Alternatively there is a podman systemd generator:
podman-generate-systemd(1)

podman-generate-systemd - Generate systemd unit file(s) for a container or pod
**podman generate systemd** [*options*] *container|pod*
**podman generate systemd** will create a systemd unit file that can be used to control a container or pod.
By default, the command will print the content of the unit files to stdout.
Generating unit files for a pod requires the pod to be created with an infra container (see `--infra=true`). An infra container runs across the entire lifespan of a pod and is hence required for systemd to manage the life cycle of the pod's main unit.
- Note: When using this command with the remote client, including Mac and Windows (excluding WSL2) machines, place the generated units on the remote system. Moreover, make sure that the `XDG_RUNTIME_DIR` environment variable is set. If unset, set it via `export XDG_RUNTIME_DIR=/run/user/$(id -u)`.
- Note: The generated `podman run` command contains an `--sdnotify` option with the value taken from the container.
If the container does not have any explicitly set value, or the value is set to __ignore__, the value __conmon__ is used.
The reason for overriding the default value __container__ is that almost no container workloads send notify messages.
If __container__ were used, systemd would wait for a ready message that never comes.
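To give an idea, running `podman generate systemd --name unifi` prints a unit roughly like the following. This is only a sketch: the exact content, the `PIDFile` path, and the default target vary by podman version, and the path below is deliberately elided:

```
[Unit]
Description=Podman container-unifi.service
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start unifi
ExecStop=/usr/bin/podman stop -t 10 unifi
PIDFile=/run/containers/storage/.../userdata/conmon.pid
Type=forking

[Install]
WantedBy=default.target
```

Place the generated unit under `/etc/systemd/system/` (or `~/.config/systemd/user/` for rootless podman), then run `systemctl daemon-reload` and enable it.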
Is local.target active? What does the command below say?
systemctl status local.target
Did you start your container with the
--restart unless-stopped or
--restart always flag? That would be the correct way of ensuring your container starts on boot, rather than using systemctl, since your service would first require the Podman socket service to start.
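For illustration, the flag goes on the original `podman run`. The image tag below is taken from the `podman ps` output earlier in the thread; the rest is a placeholder, and the command is only built as a string here to show flag placement, not executed:

```shell
# Sketch of flag placement only; the image tag is from the thread's
# `podman ps` output, the other options are illustrative. Built as a
# string so it can be shown without podman installed.
cmd="podman run -d --name unifi --restart=always localhost/unifi:6.0.41"
echo "$cmd"
```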
I may use a similar workaround if I'm not able to go the systemd route.
I've tried `podman generate systemd`, but that service file was also ignored.
I've checked this, and indeed local.target was not active. I've tried multi-user.target and default.target instead. That didn't solve my issue, though.
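A quick way to double-check which target a unit installs under. This is a throwaway sketch: the unit body mirrors the one from the first post, written to a temp directory so nothing on the system is touched:

```shell
# Write a copy of the unit to a temp dir and inspect its install target;
# a unit whose WantedBy target never becomes active never starts at boot.
unit="$(mktemp -d)/unifi-container.service"
cat > "$unit" <<'EOF'
[Unit]
Description=Unifi container

[Service]
ExecStart=/usr/bin/podman start -a unifi
ExecStop=/usr/bin/podman stop -t 2 unifi

[Install]
WantedBy=multi-user.target
EOF
# multi-user.target is active on virtually every booted server system.
grep '^WantedBy=' "$unit"
```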
I’ve used --restart always.
Your solution worked. The generated service used an active target; I had just forgotten to enable the new unit.