Nmcli adding alias (virtual) IPs

Hi community,

I added a bridged interface using nmcli, like this:

# Creating the bridge
nmcli connection add type bridge con-name br0 ifname br0 stp no

# Enslaving the physical interface to the bridge (br0 becomes its master)
nmcli connection modify enp86s0 master br0

# Adding Autostart to the bridge
nmcli connection modify br0 connection.autoconnect yes


# Adding virtual IPs 
nmcli connection modify br0 +ipv4.addresses "192.168.11.12/24,192.168.11.13/24,192.168.11.14/24,192.168.11.15/24"


# Bringing up the interface
nmcli connection up br0
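A quick way to verify the result, assuming the commands above succeeded, is to list the addresses on the bridge and compare them with what NetworkManager has configured:

```shell
# Brief listing of all IPv4 addresses currently assigned to br0
ip -br -4 addr show br0

# The addresses stored in the br0 connection profile
nmcli -g ipv4.addresses connection show br0
```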

That actually works:

(output omitted here; I changed the MAC addresses etc.)

Now the problem is that these are not full alias interfaces.
They don’t have their own MAC address; they share the MAC of the physical interface.

For that reason, when a remote computer tries to connect to IP 192.168.11.14, it is 192.168.11.11 that answers. That basically works too, but it creates problems with firewalls, switches, or whatnot: on the link layer they all use the same MAC.
At least I believe this is the problem.
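One way to confirm the shared MAC, as a sketch assuming a second machine on the same LAN with `arping` installed (`eth0` is a hypothetical interface name on that machine), is to ARP each address and compare the replies:

```shell
# From another host on 192.168.11.0/24: every address should
# answer the ARP request with the same MAC address
for ip in 192.168.11.12 192.168.11.13 192.168.11.14 192.168.11.15; do
  arping -c 1 -I eth0 "$ip"
done
```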

So I know the “old way” of creating alias interfaces (in /etc/network/interfaces) worked, similar to this:

auto eth0
allow-hotplug eth0
iface eth0 inet static
    address 192.168.1.42/24
    gateway 192.168.1.1

iface eth0 inet static
    address 192.168.1.43/24

iface eth0 inet static
    address 192.168.1.44/24

# adding IP addresses from different subnets is also possible
iface eth0 inet static
    address 10.10.10.14/24

Is it possible to create “full” virtual IPs using nmcli?
Am I missing something?


Hi, thank you for the very interesting read.
I will give it a try.

However, if you use virt-manager you can use the bridge (br0) from the opening post directly, and full interfaces appear in the guest. They actually work as fully independent interfaces. Not sure why the macvtap is needed (maybe virt-manager sets it up automatically?). The alias interfaces are not needed in this case.

However, I’m using Podman with pods and containers,
and I managed to assign the IPs and ports to specific containers.

I just noticed that I also need to allow the IP 192.168.11.11 in my firewall in order to let the IPs 192.168.11.12 … 15 communicate through the firewall.

For example, there is a DNS server on IP 192.168.11.13 answering on port 53.
It can’t contact its upstream DNS servers if the firewall in between does not allow DNS traffic originating from IP 192.168.11.11 to those upstream servers.

But maybe the root cause is another problem, that I still have to figure out.

I’ll try that macvtap driver; it sounds promising:

Start a VM and check from the host:

bridge link show

Correct.

macvtap also works with Podman.

I created a script that sets up six macvlan bridged interfaces that survive a reboot and are bound to different pods.

for i in {1..6}; do
  nmcli connection add type macvlan con-name br${i} ifname br${i} macvlan.mode bridge macvlan.parent enp86s0
  nmcli connection modify br${i} ipv4.addresses 192.168.11.1${i}/24 ipv4.gateway 192.168.11.254
  nmcli connection modify br${i} ipv4.method manual
  nmcli connection modify br${i} connection.autoconnect yes
  nmcli connection up br${i}
done

All the virtual NICs do have their own MAC, so far so good.
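A quick sketch to double-check that, assuming the br1 … br6 interfaces from the script above exist:

```shell
# Print the MAC address of each macvlan interface;
# all six lines should differ
for i in {1..6}; do
  cat /sys/class/net/br${i}/address
done
```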

Turns out the problem was a different one: it has to do with routing.

ip route show
default via 192.168.11.254 dev br6 proto static metric 410 
default via 192.168.11.254 dev br5 proto static metric 411 
default via 192.168.11.254 dev br3 proto static metric 412 
default via 192.168.11.254 dev br4 proto static metric 413 
default via 192.168.11.254 dev br2 proto static metric 414 
default via 192.168.11.254 dev br1 proto static metric 415 
192.168.11.0/24 dev br6 proto kernel scope link src 192.168.11.16 metric 410 
192.168.11.0/24 dev br5 proto kernel scope link src 192.168.11.15 metric 411 
192.168.11.0/24 dev br3 proto kernel scope link src 192.168.11.13 metric 412 
192.168.11.0/24 dev br4 proto kernel scope link src 192.168.11.14 metric 413 
192.168.11.0/24 dev br2 proto kernel scope link src 192.168.11.12 metric 414 
192.168.11.0/24 dev br1 proto kernel scope link src 192.168.11.11 metric 415

There are now several default routes, and the one with the lowest metric (br6 with metric 410) wins.
All the pods therefore reach the firewall with the source IP “192.168.11.16”.

So it didn’t really have anything to do with the MACs; it is related to routing.
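One way to control which connection provides the default route, as a sketch assuming the br1 … br6 connections from the earlier script, is to give the preferred one an explicitly low route metric (ipv4.route-metric) and suppress the default route on the others (ipv4.never-default):

```shell
# Keep the default route on br1 and make it win ...
nmcli connection modify br1 ipv4.route-metric 100

# ... and tell NetworkManager not to install a default route
# for the other macvlan connections
for i in {2..6}; do
  nmcli connection modify br${i} ipv4.never-default yes
done

# Re-activate the connections so the route changes take effect
for i in {1..6}; do
  nmcli connection up br${i}
done
```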

Edit
When I compare this with the routing table produced by the method from the opening post, it looks like this:

ip route show
default via 192.168.11.254 dev br0 proto dhcp src 192.168.11.11 metric 425 
192.168.11.0/24 dev br0 proto kernel scope link src 192.168.11.12 metric 425 
192.168.11.0/24 dev br0 proto kernel scope link src 192.168.11.13 metric 425 
192.168.11.0/24 dev br0 proto kernel scope link src 192.168.11.14 metric 425 
192.168.11.0/24 dev br0 proto kernel scope link src 192.168.11.15 metric 425 
192.168.11.0/24 dev br0 proto kernel scope link src 192.168.11.11 metric 425

In contrast to the routing table before, the routes all have the same metric, and there is only one default route.
That’s the route I would need to allow through the firewall, since all pods go to the outside world using the IP “192.168.11.11”.

I don’t really use Podman, but perhaps it is the wrong tool for your task and something like LXC would fit better, or maybe you need to study its network-related documentation more carefully.

In any case, you can use separate network namespaces for isolation, or utilize policy-based routing with custom tables and rules for selective processing of multiple default routes.
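The policy-based routing idea can be sketched like this, assuming the br1 … br6 macvlan setup from earlier (the table number 101 and the rule priority are arbitrary choices):

```shell
# Give br1 its own routing table with its own default route
ip route add 192.168.11.0/24 dev br1 src 192.168.11.11 table 101
ip route add default via 192.168.11.254 dev br1 table 101

# Traffic originating from 192.168.11.11 consults table 101,
# regardless of which default route wins in the main table
ip rule add from 192.168.11.11 table 101 priority 100
```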

This is the nature of using a bridge: your communication is on layer 2 while using the bridge, so no routing is needed.
If you want to separate the services from the outside, you need to port-forward them to the IPs you need while using NAT internally.
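A minimal sketch of such a port forward with iptables, where 10.88.0.13 is a hypothetical internal container address (adjust it to your Podman network), could look like:

```shell
# Forward DNS traffic arriving for 192.168.11.13:53 to an internal
# container address (10.88.0.13 is a made-up example)
iptables -t nat -A PREROUTING -d 192.168.11.13 -p udp --dport 53 \
  -j DNAT --to-destination 10.88.0.13:53

# Masquerade outbound traffic from the internal container network
iptables -t nat -A POSTROUTING -s 10.88.0.0/16 -o br0 -j MASQUERADE
```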

I have no idea whether Podman supports this mode, but I’m pretty sure you don’t need port forwarding with libvirt using a shared host bridge, which doesn’t involve NAT and gives each container/VM a separate MAC visible to other hosts on the main network.
