Connectivity problem with Podman containers

Hi,
For a few weeks now, my containers haven't been able to access the internet.
They can still be reached through the VPN or the reverse proxy, but they can't reach the internet themselves, which is a real problem for some of them (like FreshRSS).
The logs show a lot of ECONNREFUSED and ENETUNREACH errors; does anyone have an idea?
I have tried disabling firewalld, without success.
I run Fedora IoT 40, fully updated.
Thank you!
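As a side note, those two error codes point in different directions: ENETUNREACH means the kernel found no route out at all, while ECONNREFUSED means the destination answered and rejected the connection. A quick Python sketch (nothing Podman-specific, just the standard errno module) shows the kernel's wording for each:

```python
import errno
import os

# ENETUNREACH: the kernel has no route to the destination network
# (typical when a container's network namespace has no usable uplink).
# ECONNREFUSED: the destination host was reached, but nothing accepted
# the connection on that port.
print(errno.ENETUNREACH, os.strerror(errno.ENETUNREACH))
print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))
```

Seeing both in the same logs usually means routing breaks intermittently rather than a single service being down.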


Do you mean that, at the same time, they can still access the local network?

I don't think they can; how could I test that?
I can access them remotely through the VPN, but that's the other direction, as I understand it.
It's as if the outgoing traffic were filtered, if that makes sense…
Thanks for your help!

Are these rootless containers? If so, did the networking ever work after the Podman 5 update? Podman switched to pasta by default with the update to 5, so that might be related. Also, are your containers inside a network?

Am I correct to assume that only the outgoing network traffic is broken? So your services are still available, but FreshRSS can't fetch new news items. To verify, could you run $ podman run -it --rm alpine ping fedoraproject.org

Sharing your $ podman info output might give us some more clues about your system.

Thanks for your help; yes, I think it's related to Podman 5.
The vast majority of my containers aren't inside a network, I think.
Yes, I think all of them are rootless.
Yes, my services are still available, but FreshRSS can't fetch new items.

Your command gives me this:
```
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob ec99f8b99825 skipped: already exists
Copying config a606584aa9 done
Writing manifest to image destination
PING fedoraproject.org (140.211.169.196): 56 data bytes
64 bytes from 140.211.169.196: seq=0 ttl=42 time=160.353 ms
64 bytes from 140.211.169.196: seq=1 ttl=42 time=159.044 ms
64 bytes from 140.211.169.196: seq=2 ttl=42 time=158.732 ms
64 bytes from 140.211.169.196: seq=3 ttl=42 time=158.862 ms
64 bytes from 140.211.169.196: seq=4 ttl=42 time=159.419 ms
64 bytes from 140.211.169.196: seq=5 ttl=42 time=160.185 ms
64 bytes from 140.211.169.196: seq=6 ttl=42 time=159.569 ms
^C
--- fedoraproject.org ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 158.732/159.452/160.353 ms
```

podman info gives me this:
```yaml
host:
  arch: amd64
  buildahVersion: 1.36.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 94.11
    systemPercent: 3.06
    userPercent: 2.83
  cpus: 32
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: iot
    version: "40"
  eventLogger: journald
  freeLocks: 2043
  hostname: homeserver
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.9.8-200.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 26819633152
  memTotal: 67332608000
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.11.0-1.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.11.0
    package: netavark-1.11.0-1.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.11.0
  ociRuntime:
    name: crun
    package: crun-1.15-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.15
      commit: e6eacaf4034e84185fd8780ac9262bbf57082278
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240624.g1ee2eca-1.fc40.x86_64
    version: |
      pasta 0^20240624.g1ee2eca-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
      https://www.gnu.org/licenses/old-licenses/gpl-2.0.html
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 146028879872
  swapTotal: 146028879872
  uptime: 0h 10m 23.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /var/home/admin/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/admin/.local/share/containers/storage
  graphRootAllocated: 3999065440256
  graphRootUsed: 349885464576
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/admin/.local/share/containers/storage/volumes
version:
  APIVersion: 5.1.1
  Built: 1717459200
  BuiltTime: Tue Jun  4 02:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.1.1
```

Thank you a lot!

What is weird is that, in some cases, restarting the container after boot seems to fix it.
For instance, Home Assistant doesn't detect any peripherals until I restart it manually.
Same thing for Jellyseerr; maybe the containers start before the internet connection is up?
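If the containers really do start before the network is up, one common mitigation (just a sketch, not tested on this setup) is to order the quadlet unit after network-online.target by adding a [Unit] section to the .container file. Note that for rootless containers managed by the systemd user instance, the system's network-online.target is not directly visible, so this may need a different approach there:

```ini
# Hypothetical addition to a .container file such as freshrss.container:
# delay the container start until the network is reported online.
[Unit]
After=network-online.target
Wants=network-online.target
```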

Interesting, so the alpine container does have network access.

How did you configure the containers? Are you using quadlets? If not, how old are the containers? Do they come from a previous Podman version, or did you create them recently?

I exclusively use quadlets; they are from an older version of Podman, though.
Thanks for your help; do you want me to upload a quadlet file?

If you are using quadlets, the containers should be recreated on a reboot, so that should be fine. If you could share one of your quadlets, that would be great. If you do, please put it in a code block using markdown syntax ```[code]```.

Thanks; my FreshRSS quadlet, for instance, is this:

```ini
[Container]
AutoUpdate=registry
ContainerName=freshrss
Environment=TZ=Europe/Paris CRON_MIN=1,21,41
Image=docker.io/freshrss/freshrss
PublishPort=10.44.155.2:8585:80
Volume=/var/srv/freshrss/data:/var/www/FreshRSS/data:Z
Volume=/var/srv/freshrss/extensions:/var/www/FreshRSS/extensions:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Thanks again!

Your quadlet looks normal; I don't see anything wrong there. Often multi-user.target is also added to the WantedBy= line, but because default.target probably points to multi-user.target, it shouldn't be a problem (this can be verified with $ systemctl get-default).

Does the freshrss container work when restarted manually? And does $ podman exec -it freshrss ping fedoraproject.org work?

It’s late here so I will be going offline for some time.

Hello again,
It was late for me too, so I went to sleep. Thank you again for your help!
FreshRSS fails to refresh the feeds; it logs a lot of errors like this:

```
cURL error 7: Failed to connect to www.youtube.com port 443 after 12006 ms: Couldn't connect to server [https://www.youtube.com/feeds/videos.xml?channel_id=UCiFooXU08DIl46ZL1NQhx7A]
```

I will try relaunching it and let you know.
If I type systemctl get-default, I get graphical.target; maybe the problem comes from there?
I came from Fedora Silverblue and then rebased to Fedora IoT; is that perhaps related?
Thanks again

Also, podman exec -it freshrss ping fedoraproject.org gives me this error: Error: crun: executable file ping not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found


That is interesting; it is probably different because you rebased, but I can't confirm what the default is for IoT. I only have Silverblue and CoreOS installs. But graphical.target should come after multi-user.target, so it shouldn't cause problems. You could try replacing it with WantedBy=multi-user.target default.target, which is what all the Podman documentation pages use.

It seems your container doesn't have the ping binary in it. The image seems to be pretty minimal; maybe try $ podman exec -it freshrss apt update instead.

Hi,
Thank you again; you mean placing WantedBy=multi-user.target default.target in all the .container files?
I will restart the server and try your other command. Thanks again.

```ini
[Install]
WantedBy=default.target
```

Becomes this:

```ini
[Install]
WantedBy=multi-user.target default.target
```

Also, if you reboot, could you provide the boot logs? They might contain some interesting information.

Hi,
So, after a reboot, entering your command podman exec -it freshrss apt update gives:

```
Ign:1 http://deb.debian.org/debian bookworm InRelease
Ign:2 http://deb.debian.org/debian bookworm-updates InRelease
Ign:3 http://deb.debian.org/debian-security bookworm-security InRelease
Err:4 http://deb.debian.org/debian bookworm Release
  Cannot initiate the connection to deb.debian.org:80 (2a04:4e42:6a::644). - connect (101: Network is unreachable) Cannot initiate the connection to deb.debian.org:80 (199.232.170.132). - connect (101: Network is unreachable)
Err:5 http://deb.debian.org/debian bookworm-updates Release
  Cannot initiate the connection to deb.debian.org:80 (2a04:4e42:6a::644). - connect (101: Network is unreachable) Cannot initiate the connection to deb.debian.org:80 (199.232.170.132). - connect (101: Network is unreachable)
Err:6 http://deb.debian.org/debian-security bookworm-security Release
  Cannot initiate the connection to deb.debian.org:80 (2a04:4e42:6a::644). - connect (101: Network is unreachable) Cannot initiate the connection to deb.debian.org:80 (199.232.170.132). - connect (101: Network is unreachable)
Reading package lists... Done
E: The repository 'http://deb.debian.org/debian bookworm Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://deb.debian.org/debian bookworm-updates Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://deb.debian.org/debian-security bookworm-security Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
```

The full boot logs can be seen here: PrivateBin

I will try adding your lines and reboot again. Thank you again.

Unfortunately, the problem is the same with your config lines. Thank you anyway.