How to get Podman DNS plugin/container name resolution to work in Fedora CoreOS 36? Podman-plugins (podman-dnsname)

Problem

podman-compose requires the dnsname plugin for podman. Without it, addressing other containers in your pod just won’t work (other issue).

Now, however, this setup does not work. I guess the issue here is that podman-plugins v3 is installed, but a v4 one would be needed for podman 4, wouldn’t it?

No config files to be found

The CNI configuration file, as specified here, is nowhere to be found:

$ ls -la $XDG_RUNTIME_DIR/cni
ls: cannot access '/run/user/1001/cni': No such file or directory
$ sudo ls -la /etc/cni
ls: cannot access '/etc/cni': No such file or directory
$ ls -la /run/containers/cni
ls: cannot access '/run/containers/cni': No such file or directory

So do I need to do something other than layering/installing podman-plugins?
Or do I manually need to do something with dnsmasq? It is actually installed (without me having to layer it manually):

$ dnsmasq --version
Dnsmasq version 2.86  Copyright (c) 2000-2021 Simon Kelley
[…]

System

$ sudo rpm-ostree install podman-plugins
error: "podman-plugins" is already provided by: podman-plugins-3:4.0.2-1.fc36.x86_64. Use --allow-inactive to explicitly require it.

$ podman version
Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.18beta2

Built:      Thu Mar  3 15:56:09 2022
OS/Arch:    linux/amd64

$ rpm-ostree status
State: idle
AutomaticUpdatesDriver: Zincati
  DriverState: active; periodically polling for updates (last checked Sun 2022-05-29 17:27:27 UTC)
Deployments:
● fedora:fedora/x86_64/coreos/stable
                   Version: 36.20220505.3.2 (2022-05-24T16:17:13Z)
                BaseCommit: 096cc2b6fb422d0464c0a3cea26e51de9e43535fe2edd04caa5bda323b8987fb
              GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
           LayeredPackages: firewalld ***** podman-compose

  fedora:fedora/x86_64/coreos/stable
                   Version: 35.20220424.3.0 (2022-05-06T20:24:56Z)
                BaseCommit: cd82fc9d3489f60e9c492a7daf92c91c5240273770168a7783e1596b582be135
              GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
           LayeredPackages: firewalld ***** podman-compose

Podman 4.x no longer uses the dnsname plugin by default. But that can also depend on how you arrived at using Podman 4. Was it an upgrade or a new install?

Hi and welcome back to this forum. :upside_down_face:

Upgrade, but podman-plugins, which includes the dnsname plugin, is (pre)installed.
Also there is some talk that you need to install containernetworking-plugins. I need to try that out…
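In the meantime, a quick way to check what the layered podman-plugins package actually ships is an rpm file query. This is just a guarded sketch; the exact file path of the dnsname binary (usually under /usr/libexec/cni on Fedora) is my assumption:

```shell
# Guarded sketch: list the files podman-plugins installs and look for
# the dnsname plugin binary; fall back to a message where it is absent.
if command -v rpm >/dev/null 2>&1 && rpm -q podman-plugins >/dev/null 2>&1; then
  rpm -ql podman-plugins | grep dnsname || echo "no dnsname file found in podman-plugins"
else
  echo "podman-plugins not installed (or no rpm available)"
fi
```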

I rebased from Fedora Silverblue 35 to 36 to arrive at podman 4. I was having the same problem. I just uninstalled containernetworking-plugins and was able to reproduce the issue.

[robert@fedora thumbstopper]$ rpm-ostree status
State: idle
Deployments:
● fedora:fedora/36/x86_64/silverblue
                   Version: 36.20220606.0 (2022-06-06T01:47:39Z)
                BaseCommit: 421ae524f62d85d1ce9398cee5e63b661e48dbce408c290283dc7aff467d39e3
              GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
           LayeredPackages: arc-theme gnome-tweaks guake numix-icon-theme podman-compose
             LocalPackages: protonvpn-stable-release-1.0.1-1.noarch

  fedora:fedora/36/x86_64/silverblue
                   Version: 36.20220606.0 (2022-06-06T01:47:39Z)
                BaseCommit: 421ae524f62d85d1ce9398cee5e63b661e48dbce408c290283dc7aff467d39e3
              GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
           LayeredPackages: arc-theme containernetworking-plugins gnome-tweaks guake numix-icon-theme podman-compose
             LocalPackages: protonvpn-stable-release-1.0.1-1.noarch

Someone also mentioned containers.conf, and I found this document. It lists some paths where containers.conf is supposed to be located, but I did not see it in:

  • /etc/containers/containers.conf
  • $HOME/.config/containers/containers.conf

So what I did was create a new file in the home path and set the content to just this one line:

network_backend="netavark"

At first it didn’t work, but after a reboot it did. I then wanted to take it a step further: remove the file I created in my home folder and try setting it in /etc instead… But I found something strange. I deleted the $HOME/.config/containers/containers.conf and rebooted my machine to try to reproduce the issue. I cannot reproduce the issue now.
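For reference, a minimal sketch of creating such a file. It is staged in a temp directory here so nothing on the system is touched; the real destinations would be $HOME/.config/containers/containers.conf (per user) or /etc/containers/containers.conf (system-wide). Note that network_backend belongs in the [network] table of containers.conf:

```shell
# Sketch: stage a minimal containers.conf in a temp dir, then copy it
# to $HOME/.config/containers/ or /etc/containers/ yourself.
tmp=$(mktemp -d)
cat > "$tmp/containers.conf" <<'EOF'
[network]
network_backend = "netavark"
EOF
grep '^network_backend' "$tmp/containers.conf"
```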

My experience in this area is limited, so hopefully this is helpful. I think if /etc/containers/containers.conf existed in Silverblue’s image with that line it would fix the issue.

Hmm, it does exist for me, actually. (On CoreOS though, not Silverblue!)
Edit: Ah no, it’s /usr/share/containers/containers.conf that exists! That is the base for it, however.

It only has this set, which should be equivalent to that file not existing at all (i.e. podman should use the default value):

#network_backend = ""

However, the podman doc you linked says:

The default value is empty which means that it will automatically choose CNI or netavark. If there are already containers/images or CNI networks preset it will choose CNI.

Ahhh… so maybe because I have old containers created with CNI, the upgrade could not switch them to the new netavark? Crazy… Though at what time is that evaluated?
The containers start at boot/were created at boot, so a reboot should make it choose netavark, shouldn’t it?
Edit: It also says images in the description! So basically only a fresh installation/podman reset chooses netavark; if you have ever done anything with podman, it will continue to use CNI. After all, that was kinda stated in their release post, but anyway… :thinking:

This is BTW confirmed by the suggestion here to look for what container packages are installed. And a fresh installation of Fedora 36 Workstation seems to work.
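If you want to check for such CNI leftovers yourself, something like this should do. The two directories are the usual rootful/rootless CNI config locations on Fedora, so treat the exact paths as an assumption:

```shell
# Sketch: look for leftover CNI network configs that would make the
# empty-default auto-detection stick with CNI instead of netavark.
for d in /etc/cni/net.d "$HOME/.config/cni/net.d"; do
  if [ -d "$d" ] && [ -n "$(ls -A "$d" 2>/dev/null)" ]; then
    echo "CNI configs present in $d"
  else
    echo "no CNI configs in $d"
  fi
done
```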


About CoreOS again: The link above clearly shows that the file comes with the containers-common package, at least it should:

$ rpm -qf  /usr/share/containers/containers.conf 
containers-common-1-56.fc36.noarch

So maybe it is better to override it at user level?

Or, as the file itself states in the comments in its first lines, in /etc/containers/containers.conf. That sounds good if you want it to apply to all users on your system, I guess.

So finally a TL;DR for those who just seek a solution. This is now tested and it works.

The symptom

After upgrading to Fedora 36, you get network errors in container-to-container communication (inside of a pod or similar), e.g. from a web service like Nextcloud to its database (MySQL).

The cause

The issue stems from the fact that Fedora 36 now uses podman 4.0, which switched its networking backend from CNI to netavark.

How the hostnames of pods are resolved has fundamentally changed with that upgrade.
podman-compose, e.g., now also requires the podman-dnsname/podman-plugins plugin to be installed, but that should already be the case in Fedora 36.

The solution

Please run and check podman info (shortcut: podman info | grep -i -A3 net) to see which networking backend you use. If it still says cni, you should likely switch.
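To get just the backend without grepping, podman’s Go-template output can be used; the Host.NetworkBackend field exists in podman 4’s info output. Guarded here so the snippet also runs where podman is absent:

```shell
# Sketch: print only the active network backend ("cni" or "netavark").
if command -v podman >/dev/null 2>&1; then
  podman info --format '{{.Host.NetworkBackend}}' || echo "podman info failed"
else
  echo "podman not installed"
fi
```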

To do so, please create a file /etc/containers/containers.conf with the following content:

[network]

# Explicitly force "netavark" so as not to use the outdated CNI networking, which podman would otherwise keep using as long as old stuff is there.
# This may be removable once all containers have been upgraded?
# see https://discussion.fedoraproject.org/t/how-to-get-podman-dns-plugin-container-name-resolution-to-work-in-fedora-coreos-36-podman-plugins-podman-dnsname/39493/5?u=rugk

# official doc:
# Network backend determines what network driver will be used to set up and tear down container networks.
# Valid values are "cni" and "netavark".
# The default value is empty which means that it will automatically choose CNI or netavark. If there are
# already containers/images or CNI networks preset it will choose CNI.
#
# Before changing this value all containers must be stopped otherwise it is likely that
# iptables rules and network interfaces might leak on the host. A reboot will fix this.
#
network_backend = "netavark"

All containers should be stopped before and restarted after changing this; but realistically, please just reboot once you have changed it! Stuff may break otherwise, as the docs explain:

Before changing this value all containers must be stopped otherwise it is likely that iptables rules and network interfaces might leak on the host. A reboot will fix this.

For more information, have a look at the configuration file template at /usr/share/containers/containers.conf or the current manpage/docs of podman online.

Why doesn’t it switch by default?

Because of backward compatibility. As the docs say, the auto-detection will always choose CNI unless you basically have a fresh podman installation/system:

The default value is empty which means that it will automatically choose CNI or netavark. If there are already containers/images or CNI networks preset it will choose CNI.

Addendum

In case you use podman-compose, please note you may stumble upon other issues, as v1.0.3 of the tool does not create a pod anymore. This is fixed in v1.0.4.
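If you are unsure which version you run, podman-compose has a version subcommand. A guarded sketch, in case it is not installed:

```shell
# Sketch: print the installed podman-compose version to check whether
# you are on the affected v1.0.3 or the fixed v1.0.4.
if command -v podman-compose >/dev/null 2>&1; then
  podman-compose version
else
  echo "podman-compose not installed"
fi
```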


Thanks for chasing this down @rugk!
