igc driver for I225-LM hangs

I’m having networking issues, and I’ve noticed that the igc driver for the I225-LM is hanging constantly. I have an Intel Corporation Ethernet Controller I225-LM in a NUC 11i5TNK. I use it with the VirtualBox package, and the VM keeps dropping its internet connection.
Errors in the log:

Dec 31 08:49:20 ve kernel: igc 0000:58:00.0 en0: Detected Tx Unit Hang
  Tx Queue             <2>
  TDH                  <28>
  TDT                  <28>
  next_to_use          <2a>
  next_to_clean        <28>
buffer_info[next_to_clean]
  time_stamp           <fffc0630>
  next_to_watch        <00000000d6f7695d>
  jiffies              <117ac3901>
  desc.status          <164200>
Dec 31 08:49:22 ve kernel: igc 0000:58:00.0 en0: Detected Tx Unit Hang
  Tx Queue             <2>
  TDH                  <28>
  TDT                  <28>
  next_to_use          <2a>
  next_to_clean        <28>
buffer_info[next_to_clean]
  time_stamp           <fffc0630>
  next_to_watch        <00000000d6f7695d>
  jiffies              <117ac40c0>
  desc.status          <164200>
Dec 31 08:49:24 ve kernel: igc 0000:58:00.0 en0: Detected Tx Unit Hang
  Tx Queue             <2>
  TDH                  <28>
  TDT                  <28>
  next_to_use          <2a>
  next_to_clean        <28>
buffer_info[next_to_clean]
  time_stamp           <fffc0630>
  next_to_watch        <00000000d6f7695d>
  jiffies              <117ac4880>
  desc.status          <164200>
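Before filing a bug it may help to record the driver and firmware details, and to test some mitigations that other igc "Tx Unit Hang" reports mention. This is a hedged sketch, not a confirmed fix; it assumes en0 is the igc interface and that ethtool is installed:

```shell
# Record driver/firmware versions for the bug report
ethtool -i en0

# Show current offload settings
ethtool -k en0 | grep -E 'tcp-segmentation|generic-segmentation'

# Workaround some igc Tx-hang reports suggest trying:
# disable TSO/GSO offloads (assumption: worth testing, not a known fix)
sudo ethtool -K en0 tso off gso off

# Energy-Efficient Ethernet has also been implicated in some reports
sudo ethtool --set-eee en0 eee off
```

If the hangs stop after one of these changes, that detail is worth including in the bug report.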

Should this be reported as a kernel driver bug? I have another NUC of the same model with the same issue, even more severe on that one (networking is unusable).

How is your VM network configured?
NAT?
Bridged?
Other?

If using bridged, are you using the virbr0 device on the host, or did you create your own bridge directly to the ethernet device?
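For reference, creating your own bridge directly to the ethernet device can be done with NetworkManager. This is a hypothetical sketch; the names br0 and en0 are assumptions for illustration:

```shell
# Create a bridge br0 and attach the ethernet device to it
# (assumption: NetworkManager manages the interfaces)
sudo nmcli connection add type bridge ifname br0 con-name br0
sudo nmcli connection add type bridge-slave ifname en0 master br0
sudo nmcli connection up br0
```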

While I do not use a NUC, I do use VMs a lot and have never had an ethernet issue with any of them.
I use bridged networking to the host virbr0 device.

I use it in bridged mode, but I don’t see a virbr0 device. In VBox I configured the VM with a Bridged Adapter attached to en0.
ip link ls on Fedora 37 shows:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
2: en0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
3: en1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
4: wlo1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DORMANT group default qlen 1000
5: cni-podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode 
7: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state 
8: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
9: veth1a8485af@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master

(BTW, I’ve tried disabling the wlo1 device, thinking there might be some conflict, but it changes its MAC address every time and always comes back up.)
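The changing MAC address on wlo1 is likely NetworkManager's MAC randomization during Wi-Fi scans rather than a fault. A hedged sketch for turning that off, or keeping the radio down entirely (assumption: the interface is NetworkManager-managed):

```shell
# Disable MAC randomization for Wi-Fi scanning
printf '[device]\nwifi.scan-rand-mac-address=no\n' | \
  sudo tee /etc/NetworkManager/conf.d/no-mac-rand.conf
sudo systemctl restart NetworkManager

# Or simply keep the Wi-Fi radio off
sudo nmcli radio wifi off
```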
The Ubuntu VM then shows this for ip link ls:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
2: en0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
4: br-c5aeda159bf4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
6: vethe554050@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 

en0 is the onboard device in the NUC, plugged into a switch on the local network; en1 (a USB gigabit adapter) goes to the cable modem. I could theoretically bridge en1 to give the VM a public IP directly, but I have a fixed public IP and am only eligible for one address. I have similar setups on different machines and they all work fine.
I also had issues with another NUC of the same model without any VM: it simply didn’t do NAT and forwarding properly, with no errors in the kernel log. At first I thought my USB ethernet dongles were bad, so I swapped three of them and even tried USB-C; then I thought maybe Fedora couldn’t forward network packets via the USB port (but I have older NUCs running at different locations with very old USB network cards, no problems for years). Only now have I noticed these “hang” errors.
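For the NAT/forwarding symptom on the other NUC, it may be worth confirming that IPv4 forwarding is actually enabled on the host. A minimal sketch, assuming a plain sysctl-based setup with no conflicting network management tooling:

```shell
# Check whether IPv4 forwarding is enabled (1 = on, 0 = off)
sysctl net.ipv4.ip_forward

# Enable it persistently if it reports 0
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-forward.conf
sudo sysctl --system
```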

If you are using Fedora, it creates a virbr0 device by default (or at least it does on all my systems). I use qemu/libvirt, not VBox.

I just checked a cleanly installed F37 VM and see that the virtual bridge device is not there, so it apparently is created when libvirtd is activated.
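To confirm whether libvirtd is what creates virbr0 on a given host, a quick check (assumption: the libvirt tools are installed) might look like:

```shell
# Is libvirtd running, and did it create virbr0?
systemctl status libvirtd --no-pager
ip link show virbr0

# virbr0 comes from libvirt's "default" NAT network, if it is active
virsh net-list --all
```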

As I recall, the bridge device on the host must be created before the VM can have a connection to it. That matches all the discussions I have seen about VMs and networking issues.

I remember that plain installs around F30–F33 (my laptop) did create a virbr0 device, and I disabled it; I may have done the same on servers too, since I wasn’t using any virtualization and thought it wasn’t necessary. But my laptop has no virbr0 and VBox with Windows works normally there, though that VM is in NAT mode.
So you’re thinking this “Hang” error could be caused by the missing virbr0 device rather than a buggy igc driver? I’ve just got two new NUCs to set up with Fedora Server, so I’ll try a fresh Fedora with VBox to see how it gets set up and whether the errors persist.