KVM preventing network access?

Hi Fedora community,

Today I installed KVM using this guide (note that I performed all the steps up to step 5, "Launch Virt Manager and Create Virtual Machine"), because I wanted to see how it works.

After doing so, I noticed that I wasn’t able to connect to the internet, even though everything else appeared to be in working order. After a number of checks, restarting the workstation produced a result: as the system booted, right after the manufacturer’s logo, a prompt appeared offering me four different VM profiles of Fedora 36 Workstation (three of which don’t provide network access). After locating the proper profile I was able to connect to the internet and post my issue. In this profile, however, I am experiencing issues with ProtonVPN: the app only starts from the terminal, and when I try to connect this is the output:

$ protonvpn
/usr/lib/python3.10/site-packages/protonvpn_nm_lib/core/connection_backend/nm_client/nm_client_mixin.py:11: Warning: g_main_context_push_thread_default: assertion 'acquired_context' failed
  nm_client = NM.Client.new(None)
/usr/lib/python3.10/site-packages/protonvpn_nm_lib/core/connection_backend/nm_client/nm_client_mixin.py:11: Warning: g_main_context_pop_thread_default: assertion 'g_queue_peek_head (stack) == context' failed
  nm_client = NM.Client.new(None)
dbus[12584]: arguments to dbus_message_iter_append_basic() were incorrect, assertion "_dbus_check_is_valid_path (*string_p)" failed in file ../../dbus/dbus-message.c line 2776.
This is normally a bug in some application using the D-Bus library.

  D-Bus not built with -rdynamic so unable to print a backtrace
Aborted (core dumped)

Step 4 of that guide is probably what caused the problem. Creating br0 that way often seems to break networking for the host, and the IP addresses must match what your system actually uses, not what the guide shows.

First try sudo nmcli connection delete br0, then check whether connectivity is actually restored.
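
If step 4 of the guide enslaved your wifi/ethernet profile to the bridge, you may also need to bring your original connection back up afterwards. Roughly like this (the profile name below is just a placeholder, use whatever nmcli lists on your system):

$ sudo nmcli connection delete br0                # remove the bridge from step 4
$ nmcli connection show                           # list remaining profiles; if a bridge-slave profile is still listed, delete it too
$ sudo nmcli connection up "Wired connection 1"   # placeholder name, substitute your own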

I do not use NAT for my VMs. Instead I use ‘bridged’ networking, and the device the VMs connect to is virbr0, which is created by default on Fedora. I currently have 2 VMs active on my host, and on the host I see:

# nmcli connection show
NAME                   UUID                                  TYPE      DEVICE 
My-Linksys_5GHz        7a1bb133-7ca5-480a-9fb8-e2e9619755ce  wifi      wlp4s0 
virbr0                 ff8b1896-7ca6-4fdc-bf64-9aa6b0703325  bridge    virbr0 
vnet0                  ffd257d1-5715-4005-a172-052358e0395a  tun       vnet0  
vnet1                  22ec771d-dcfe-4a11-8024-61b87249cff3  tun       vnet1  

# ip addr
3: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 68:1c:a2:06:2d:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.111/24 brd 192.168.2.255 scope global dynamic noprefixroute wlp4s0
       valid_lft 84794sec preferred_lft 84794sec
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:31:29:bb brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:bf:71:fd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:febf:71fd/64 scope link 
       valid_lft forever preferred_lft forever
6: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:68:74:bd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe68:74bd/64 scope link 
       valid_lft forever preferred_lft forever

# ip route
default via 192.168.2.1 dev wlp4s0 proto dhcp src 192.168.2.111 metric 600 
192.168.2.0/24 dev wlp4s0 proto kernel scope link src 192.168.2.111 metric 600 
192.168.124.0/24 dev virbr0 proto kernel scope link src 192.168.124.1 

and on the VM I see:

$ ip addr
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:68:74:bd brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.136/24 brd 192.168.124.255 scope global dynamic noprefixroute enp1s0
       valid_lft 3539sec preferred_lft 3539sec
    inet6 fe80::3652:ab38:4c56:8644/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

$ ip route
default via 192.168.124.1 dev enp1s0 proto dhcp src 192.168.124.136 metric 100 
192.168.124.0/24 dev enp1s0 proto kernel scope link src 192.168.124.136 metric 100 

The only difference between the 2 VMs is the inet address assigned (handed out by the host via DHCP).
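
For reference, this is roughly how a guest NIC ends up attached to virbr0. A sketch using virsh, where <vm-name> is a placeholder for your VM; virt-manager does the equivalent through its GUI:

$ sudo virsh attach-interface <vm-name> bridge virbr0 --model virtio --config   # persistent, takes effect on next boot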

I then only have to assign a static route on the gateway router (192.168.2.1 in my case) to forward all traffic for the 192.168.124.0/24 subnet to the IP of my host (192.168.2.111). With that, all hosts on the LAN can connect directly to the VMs: no NAT required, and no interference with other connections on the physical LAN interface.
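
On my router that route is configured through its web UI, but for illustration, the equivalent command on a Linux gateway would be something like this (addresses taken from my setup above, adjust to yours):

$ sudo ip route add 192.168.124.0/24 via 192.168.2.111   # send VM-subnet traffic to the KVM host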
