Podman DNS configuration

networking
podman

#1

I have two containers. One is running pgAdmin4 and the other is running PostGIS. When I run them with docker run, I give them names pgadmin4 and postgis, and the pgadmin4 container can connect to the postgis container using its name - postgis.

Now I’m trying to run these with podman run. The containers start, and they can talk to each other by IP address, but not by name - name resolution inside the podman network isn’t configured.

Here are the podman run commands I currently have:

sudo podman run --detach --name postgis --hostname postgis --publish 5439:5432 --env-file .env postgis:latest
sudo podman run --detach --name pgadmin4 --hostname pgadmin4 --publish 8686:80 --env-file .env docker.io/dpage/pgadmin4:latest

How do I set up the podman networking so the containers can recognize each other by name?


#2

Are you running the exact same command-line syntax with docker? I tried replicating with docker and I wasn’t able to … is there something important in your env file?


#3

Also, what podman versions do the two of you have?


#4

Oops - looks like when I tested it with Docker last time I used docker-compose, which builds a Docker network. It doesn’t work with Docker either with just the docker run commands. All that’s in the .env is the passwords that get set in the containers.
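For reference, this is roughly what the compose setup looks like - a hedged sketch, not the project's actual file (service names, images, and ports are taken from the run commands above). Compose creates a user-defined bridge network for the project by default, and that network comes with embedded DNS, which is why the containers could resolve each other by service name:

```yaml
# docker-compose.yml (sketch) - compose puts both services on a
# project-scoped bridge network with built-in name resolution,
# so pgadmin4 can reach the database at host "postgis"
version: "2"
services:
  postgis:
    image: postgis:latest
    env_file: .env
    ports:
      - "5439:5432"
  pgadmin4:
    image: docker.io/dpage/pgadmin4:latest
    env_file: .env
    ports:
      - "8686:80"
```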

[znmeb@Silverblue containers]$ docker --version
Docker version 1.13.1, build 55f9e52-unsupported
[znmeb@Silverblue containers]$ podman --version
podman version 1.0.0
[znmeb@Silverblue containers]$ 

If the container networking libraries are the same in Silverblue as they are in the Fedora toolbox:

[znmeb@toolbox containers]$ dnf info containernetworking-plugins
Available Packages
Name         : containernetworking-plugins
Version      : 0.7.3
Release      : 2.fc29
Arch         : x86_64
Size         : 14 M
Source       : containernetworking-plugins-0.7.3-2.fc29.src.rpm
Repo         : updates-testing
Summary      : Libraries for writing CNI plugin
URL          : https://github.com/containernetworking/plugins
License      : ASL 2.0
Description  : The CNI (Container Network Interface) project consists of a specification
             : and libraries for writing plugins to configure network interfaces in Linux
             : containers, along with a number of supported plugins. CNI concerns itself
             : only with network connectivity of containers and removing allocated resources
             : when the container is deleted.

Name         : containernetworking-plugins
Version      : 0.7.3
Release      : 2.fc29
Arch         : x86_64
Size         : 14 M
Source       : containernetworking-plugins-0.7.3-2.fc29.src.rpm
Repo         : fedora
Summary      : Libraries for writing CNI plugin
URL          : https://github.com/containernetworking/plugins
License      : ASL 2.0
Description  : The CNI (Container Network Interface) project consists of a specification
             : and libraries for writing plugins to configure network interfaces in Linux
             : containers, along with a number of supported plugins. CNI concerns itself
             : only with network connectivity of containers and removing allocated resources
             : when the container is deleted.

[znmeb@toolbox containers]$ dnf info containernetworking-cni
Available Packages
Name         : containernetworking-cni
Version      : 0.7.1
Release      : 1.fc29
Arch         : x86_64
Size         : 13 M
Source       : containernetworking-cni-0.7.1-1.fc29.src.rpm
Repo         : fedora
Summary      : Libraries for writing CNI plugin
URL          : https://github.com/containernetworking/plugins
License      : ASL 2.0
Description  : The CNI (Container Network Interface) project consists of a specification
             : and libraries for writing plugins to configure network interfaces in Linux
             : containers, along with a number of supported plugins. CNI concerns itself
             : only with network connectivity of containers and removing allocated resources
             : when the container is deleted.

[znmeb@toolbox containers]$ 

I assume I can add a docker network command to the Docker version and get this to work, but I couldn’t find a podman network command anywhere. I suspect this infrastructure lives in Kubernetes, though. If that’s what I need to get a LAN inside a pod, I’ll go that way.
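For what it's worth, podman does already have pods, which may be enough here. This is an untested sketch (pod name is made up): containers joined to a pod share one network namespace, so they reach each other on localhost rather than by container name, and ports are published on the pod rather than on the individual containers:

```shell
# Create a pod that publishes both ports on the host
sudo podman pod create --name dspc --publish 5439:5432 --publish 8686:80

# Containers in the pod share a network namespace, so
# pgadmin4 can reach PostGIS at localhost:5432
sudo podman run --detach --pod dspc --name postgis --env-file .env postgis:latest
sudo podman run --detach --pod dspc --name pgadmin4 --env-file .env docker.io/dpage/pgadmin4:latest
```

The trade-off: inside the pod you'd point pgAdmin at localhost, not at a hostname, so the container configs diverge from the Docker versions.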


#5

Here’s a script that works with Docker - the pgadmin4 and rstats containers can both connect to the postgis container by name:

#!/bin/bash
sudo docker rm --force postgis rstats pgadmin4
sudo podman rm --force postgis rstats pgadmin4
sudo docker network rm dspc
sudo docker network create --driver bridge dspc
sudo docker run --network dspc --detach --name postgis --hostname postgis --publish 5439:5432 --env-file .env \
  postgis:latest
sudo docker run --network dspc --detach --name rstats --hostname rstats --publish 8004:8004 --env-file .env \
  rstats:latest
sudo docker run --network dspc --detach --name pgadmin4 --hostname pgadmin4 --publish 8686:80 --env-file .env \
  docker.io/dpage/pgadmin4:latest

Note that I remove both the podman and docker containers at the beginning to free up all the published ports. I run podman and docker alternately, and the end goal is to support both equally.

P.S.: This is a public GitHub project - https://github.com/znmeb/data-science-pet-containers. The current development branch is https://github.com/znmeb/data-science-pet-containers/tree/decompose.


#6

It turns out that the fancy network definition tools in Docker don’t exist (yet - feature request coming) in podman. However, I can make this work by assigning static IP addresses to the containers with --ip and adding /etc/hosts entries to the containers with --add-host. From man podman-run:

--ip=""

Specify a static IP address for the container, for example '10.88.64.128'.  Can only be used if no additional
CNI networks to join were specified via '--network=<network-name>', and if the container is not joining
another container's network namespace via '--network=container:<name|id>'.  The address must be within the
default CNI network's pool (default 10.88.0.0/16).
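Concretely, something like this - the specific addresses are made up; any two non-colliding addresses in the default 10.88.0.0/16 pool should do:

```shell
#!/bin/bash
# Hypothetical static addresses inside podman's default CNI pool (10.88.0.0/16)
POSTGIS_IP=10.88.64.128
PGADMIN_IP=10.88.64.129

# Each container gets a fixed IP and an /etc/hosts entry for the other,
# so they can resolve each other by name without any network DNS
sudo podman run --detach --name postgis --hostname postgis \
  --ip "$POSTGIS_IP" --add-host "pgadmin4:$PGADMIN_IP" \
  --publish 5439:5432 --env-file .env postgis:latest

sudo podman run --detach --name pgadmin4 --hostname pgadmin4 \
  --ip "$PGADMIN_IP" --add-host "postgis:$POSTGIS_IP" \
  --publish 8686:80 --env-file .env docker.io/dpage/pgadmin4:latest
```

The obvious downside is that the IPs are hard-coded, so the script has to keep them in sync with the --add-host flags by hand.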