Trunked bond0 -> 2 bridges, one as trunk, one carrying only native VLAN 1

Hi all,

I am a little bit lost and can't seem to figure out how best to solve my “problem”. I am running a Fedora 40 server used for virtualization. This machine has the following network setup:

  • bond0 → bonds “eno0” and “eno1” into “bond0”.
  • bond0 → carries three tagged VLANs: “bond0.10”, “bond0.20” and “bond0.30”.
  • bond0.10 is then used as a port in br-ten
  • bond0.20 is then used as a port in br-twenty
  • bond0.30 is then used as a port in br-thirty

This way I can simply hook a VM up to a specific bridge to link it into a specific VLAN, while the guest acts as if it were talking natively to a switch port. So far things are fine.
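For reference, this is roughly what the current setup looks like expressed as iproute2 commands (a sketch only, not my actual config; the bond mode here is just an example):

```sh
# Bond eno0 and eno1 into bond0 (802.3ad mode is an example)
ip link add bond0 type bond mode 802.3ad
ip link set eno0 down; ip link set eno0 master bond0
ip link set eno1 down; ip link set eno1 master bond0
ip link set bond0 up

# Tagged VLAN subinterfaces on top of the bond
ip link add link bond0 name bond0.10 type vlan id 10
ip link add link bond0 name bond0.20 type vlan id 20
ip link add link bond0 name bond0.30 type vlan id 30

# One plain bridge per VLAN; the VMs attach to these
ip link add br-ten type bridge;    ip link set bond0.10 master br-ten
ip link add br-twenty type bridge; ip link set bond0.20 master br-twenty
ip link add br-thirty type bridge; ip link set bond0.30 master br-thirty
```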

Now my confusion starts. I would like to create 2 additional bridges, say br-trunk and br-native, both sourced from bond0.

  • br-trunk would allow a VM to receive all traffic that comes in on bond0, tagged and untagged, effectively linking the VM to all four networks as a Cisco trunk port would.
  • br-native would get only the native VLAN (VLAN 1) from bond0, effectively like an access port on a switch.

Now, the problems I have that confuse me:

  • 1 A single interface (bond0) cannot be enslaved by multiple (VLAN-aware) bridges.
  • 2 The QEMU/KVM guests use random interface names like vnet0, vnet1, vnetX… etc., based on boot order, making VLAN filtering on the host's (VLAN-aware) bridge look like a hassle.
  • 3 Naming every guest VM's interface would mean a lot of extra work and potentially opens the door to interface-name conflicts if not documented well.
  • 4 Using a VLAN-aware bridge that would act as a mixed “br-trunk + br-native” would mean that I need to:
  • 4A Again name all the guests' vnet interfaces something static.
  • 4B On all the VM hosts, configure the mixed bridge to be aware of every guest that could potentially connect, recognized by its named interface, so the bridge can decide what to filter/forward, which again creates a lot of config and documentation overhead on the three physical hosts.
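To make 4A/4B concrete, here is roughly what a single mixed VLAN-aware bridge would require (a sketch with made-up guest interface names; the static names would have to come from something like <target dev='...'/> in each guest's libvirt XML):

```sh
# One VLAN-aware bridge with bond0 as its uplink port
ip link add br-mixed type bridge vlan_filtering 1
ip link set bond0 master br-mixed

# bond0 acts as the trunk side: VLANs 10/20/30 tagged, VLAN 1 untagged
bridge vlan add dev bond0 vid 10
bridge vlan add dev bond0 vid 20
bridge vlan add dev bond0 vid 30
bridge vlan add dev bond0 vid 1 pvid untagged

# ...and every guest port would need its own per-interface entry, e.g.:
bridge vlan add dev vm1-eth0 vid 20 pvid untagged   # like an access port in VLAN 20
bridge vlan add dev vm2-eth0 vid 10                 # trunk member, VLAN 10 tagged
```

It is exactly this per-guest bookkeeping on three hosts that I would like to avoid.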

Somewhere I think that I could:

  • Just attach br-trunk directly to bond0.
  • Create bond0.1 and enslave it to br-native, though then I wonder whether that would make traffic on the native VLAN 1 leave bond0 tagged instead of as untagged native traffic.
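In other words, something like this (untested sketch):

```sh
# Idea 1: br-trunk gets bond0 itself as a port; the bridge forwards
# everything, tagged and untagged, and the guest handles its own VLANs
ip link add br-trunk type bridge
ip link set bond0 master br-trunk

# Idea 2: a bond0.1 subinterface as the only port of br-native -- but my
# worry is that this would emit VLAN 1 *tagged* on the wire, which is not
# the same as untagged native traffic
ip link add link bond0 name bond0.1 type vlan id 1
ip link add br-native type bridge
ip link set bond0.1 master br-native
```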

I am just feeling confused. I know there should be a solution, but I can't see the forest for the trees. What simpler solution am I not seeing? Or where is my understanding of VLAN-aware bridges combined with QEMU guests falling short?

Thanks for any help,

Steven.