Very strange. Do you have things running on the install already? Otherwise you might want to consider doing a full reset with podman system reset; this will give you a clean start.
Doing this will remove:
- all containers
- all pods
- all images
- all networks
- all build cache
- all machines
- all volumes
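Something like this, as a sketch (the listing commands are just to check what you'd lose first; the reset itself is destructive):

```shell
# See what would be wiped before committing to anything
podman ps -a        # all containers, running or stopped
podman volume ls    # volumes get deleted by a reset too

# Nuke everything for a clean start
podman system reset
```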
I forget what I did but I was able to make the errors stop for the other containers. I think it was a small setting I’d changed in passing. I’m going to try your podman command now
Okay, I ran the bare podman and podman-compose commands again to see what would happen, and I've resolved the error I started out with. But now I'm back to a very annoying situation: the process inside the wireguard container is not running as the user I'm telling it to run as, so it can't access any of its configuration files:
chown: changing ownership of '/config': Operation not permitted
[wireguard] | **** Permissions could not be set. This is probably because your volume mounts are remote or read-only. ****
[wireguard] | **** The app may not work properly and we will not provide support for it. ****
[wireguard] | Uname info: Linux b3581a077f3f 6.7.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Mar 6 19:35:04 UTC 2024 x86_64 GNU/Linux
[wireguard] | **** It seems the wireguard module is already active. Skipping kernel header install and module compilation. ****
[wireguard] | **** Client mode selected. ****
chown: changing ownership of '/config': Operation not permitted
chown: changing ownership of '/config/wg_confs': Operation not permitted
chown: changing ownership of '/config/wg_confs/wg0.conf': Operation not permitted
chown: changing ownership of '/config/templates': Operation not permitted
chown: changing ownership of '/config/templates/server.conf': Operation not permitted
chown: changing ownership of '/config/templates/peer.conf': Operation not permitted
chown: changing ownership of '/config/coredns': Operation not permitted
chown: changing ownership of '/config/coredns/Corefile': Operation not permitted
[wireguard] | **** Permissions could not be set. This is probably because your volume mounts are remote or read-only. ****
[wireguard] | **** The app may not work properly and we will not provide support for it. ****
This despite passing in the right PUID and PGID, running it as the right user (so it isn't the same problem as the "chowning" problem in my other question on this forum lol), setting the files and directories to be owned by those IDs, and adding :Z to the end of the volume mount.
The docs for the wireguard container even say that if I have everything lined up there shouldn't be any permissions issues:
When using volumes (-v flags), permissions issues can arise between the host OS and the container, we avoid this issue by allowing you to specify the user PUID and group PGID.
Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.
Sounds like it is the same problem as the other issue, yes. I don't know how docker handles UID/GID mapping, but with rootless podman the UID inside the container will be different from the one on the host.
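You can see the mapping for yourself; roughly like this (the actual ranges come from your /etc/subuid and /etc/subgid, so your numbers will differ):

```shell
# The subordinate UID/GID ranges allotted to your user,
# which rootless podman maps container UIDs into:
grep "$USER" /etc/subuid /etc/subgid

# The live mapping inside a rootless user namespace
# (columns: UID inside the namespace, UID on the host, range length):
podman unshare cat /proc/self/uid_map
```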
Could you create a backup of your config directory and try the following? Start a new container and chown all the config files as the root user inside the container. After executing these commands, try to start the client container again; it should have access.
$ podman run -it --entrypoint bash -v [config on host]:/config lscr.io/linuxserver/wireguard:latest
root@5c9c95bafd2a:/# chown -R 1001:1001 /config
Starting with a new config directory should also work.
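If you'd rather not start a throwaway container, rootless podman can also do the chown from the host side via your user namespace; a sketch, assuming 1001:1001 is the in-container user as above and the path is wherever your config lives on the host:

```shell
# "1001" here means UID 1001 as the container sees it, because the
# chown runs inside the user namespace, not against host UID 1001.
podman unshare chown -R 1001:1001 ~/wireguard/config
```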
I tried something like that just a few minutes ago actually, by chowning the config directory recursively to the user ID that the process inside the wireguard container is using. Somehow, it didn't help. So this seems like an SELinux issue to me at this point, right?
Using volumes instead of bind mounts will let podman handle the permissions for you. That will probably fix the problems you are having. Have you considered using volumes?
I know they're somewhat of a pain to work with when configuring things from the shell, but if you create a symlink to the volume in your user directory it shouldn't make that much of a difference.
Does this also fail when you try to do it manually? I assume the files were owned by the container's user on the host when you tried?
After creating a volume, get its path with podman volume inspect {volume name}. Then add the wireguard config. Make sure the file has the same permissions as the other files in the volume.
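Concretely, something like this (the volume name wg-config is just an example):

```shell
# Create a named volume and let podman manage its storage
podman volume create wg-config

# Print the host path where the volume's data actually lives
podman volume inspect wg-config --format '{{ .Mountpoint }}'

# Optional: symlink it somewhere convenient for shell access
ln -s "$(podman volume inspect wg-config --format '{{ .Mountpoint }}')" ~/wg-config
```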
I tried this and it resolved the permissions problems, now there’s just a different error.
I tried retracing my steps and simulating the volume's situation with bind mounts: I created a wireguard-config directory, chowned it to match the volume (590824:590824), matched its permissions to the ones in the volume (751), let the container populate it, then hardlinked the VPN config in and made its ownership and permissions match exactly too. But it still says "grep wg0.conf: permission denied."
So despite volumes being less legible and less consistent with what I’m doing for my other containers, I think I’ll stick with them and move on to dealing with the new error.
For the record, the new error is:
[wireguard] | **** Activating tunnel /config/wg_confs/wg0.conf ****
Warning: `/config/wg_confs/wg0.conf' is world accessible
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.2.0.2/32 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] resolvconf -a wg0 -m 0 -x
>>> s6-rc: fatal: unable to take locks: Resource busy <<<
[#] wg set wg0 fwmark 51820
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] iptables-restore -n
>>> iptables-restore v1.8.10 (legacy): iptables-restore: unable to initialize table 'raw' <<<
Error occurred at line: 1
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
[#] resolvconf -d wg0 -f
s6-rc: fatal: unable to take locks: Resource busy
[#] ip -4 rule delete table 51820
[#] ip -4 rule delete table main suppress_prefixlength 0
[#] ip link delete dev wg0
[wireguard] | **** Tunnel /config/wg_confs/wg0.conf failed, will stop all others! ****