Transfer ownership of old files created/mounted by container user to new container

I was recently forced to run a podman system reset and regenerate all my images from their compose files to resolve some bugs with old images.

Two of my composes came online just fine, but the other two hit permission errors accessing the files they had previously (before the reset) created/modified/accessed without issue.

For context, I am running all of my containers as a non-root user (container, UID 1001), with the containers' root user set to the same UID and GID via environment variables in the compose file (e.g. PGID=1001). I'm running them all with podman via podman-compose.
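For reference, a minimal sketch of what one of these compose services looks like (the image tag and the container-side mount path are placeholders, and PUID is my assumption as the usual companion to the PGID variable mentioned above):

services:
  syncthing:
    image: docker.io/syncthing/syncthing:latest  # placeholder image
    environment:
      - PUID=1001  # assumed companion to PGID; matches the host user's UID
      - PGID=1001
    volumes:
      - /var/home/container/datas/syncthing:/var/syncthing  # container-side path is a guess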

The two that came back online fine did so because they created and managed their files as their container's root user. So when they booted up, the user they were operating as was the same as last time, since I was directly controlling that.

However, the two that didn't work created and managed (some of) their files as their own custom container-internal users, and the new container-local users can't access the files of the old ones, it seems.
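(One way to inspect that mapping, assuming the running container is named syncthing: podman top can print both the container-side user and the host user it maps to.)

podman top syncthing user huser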

The confusing part of all of this is that, as far as I can tell, the new container-local users' UIDs are identical to the old ones: when I point the composes' bind mounts at different places so the containers create fresh files (instead of trying and failing to read the old ones), the new files have the same owner:group numbers as the old ones did.
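For anyone wanting to check the same thing, ls -ln prints numeric UIDs/GIDs, so old and fresh files can be compared directly (the second path here is a hypothetical relocated bind-mount target):

ls -ln /var/home/container/datas/syncthing       # old files
ls -ln /var/home/container/datas/syncthing-new   # hypothetical fresh bind mount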

At first I thought this might be SELinux's doing, since it seemed obviously beyond the ken of regular UNIX permissions, but setenforce 0 had no effect, so this was pretty beyond me.
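This is roughly how you can rule SELinux in or out; the ausearch check assumes auditd is available (it is by default on Fedora-family systems):

sudo setenforce 0                 # switch SELinux to permissive temporarily
getenforce                        # confirm it now reports Permissive
sudo ausearch -m avc -ts recent   # look for AVC denials touching these files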

As it turns out, chowning the directories to the UIDs of the container-internal users wasn't failing for the reason I thought. Because each container's systemd unit set its working directory to the container's data directory, systemd was resetting the ownership of that directory to the user the unit was running as just before launching the container, undoing what I'd done and making it look like the chown hadn't worked. Removing the WorkingDirectory= line entirely from the unit and using absolute paths for everything instead (which, luckily, I was already doing), and then chowning everything in the data directories that the containers were having trouble accessing to the UIDs of the containers' internal users (which all turned out to be the same, by the way), resolved the problem. So:

[Unit]
Description=Syncthing podman container startup service
After=network.target

[Service]
Restart=always
User=container
# WorkingDirectory=/var/home/container/datas/syncthing/   <- this line removed
ExecStart=/usr/bin/podman-compose -f /var/home/container/configs/syncthing/docker-compose.yml up
ExecStop=/usr/bin/podman-compose -f /var/home/container/configs/syncthing/docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

and then

sudo chown -R internaluser:internalusergroup datas/syncthing/...
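After removing WorkingDirectory= and re-chowning, reload systemd and restart the unit so the container comes up against the fixed ownership (the unit name syncthing.service is an assumption here):

sudo systemctl daemon-reload
sudo systemctl restart syncthing.service

(Worth noting for other rootless-podman setups: container-internal UIDs appear on the host via the user's subuid/subgid mapping, and podman unshare chown lets you set ownership using the container-side UIDs instead of the mapped host ones.)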