I have been using Fedora since version 35 and had never seen any strange behavior with VirtualBox until I installed a Win11 VM today.
VM setup:
RAM: 6 GB
CPUs: 4
At some point, while installing Chrome and Firefox simultaneously in the VM, with the VM still having 2 GB of RAM free and my host with 37% of its memory also free, F37 starts to lag until VirtualBox explodes and everything goes back to normal again.
If I have approximately 37% of my RAM free on the host, why is this happening?
```
Jan 20 11:54:36 rog systemd-oomd[1141]: Killed /user.slice/user-1000.slice/user@1000.service/app.slice/app-gnome-virtualbox-9013.scope due to memory pressure for /user.slice/user-1000.slice being 60.91% > 50.00% for > 20s with reclaim activity
Jan 20 11:54:36 rog systemd[2293]: app-gnome-virtualbox-9013.scope: systemd-oomd killed 39 process(es) in this unit.
```
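From what I can tell, that 50% threshold comes from systemd-oomd itself. These commands should show where the limit is set (a sketch; my UID is 1000, as the log shows):

```bash
# Dump systemd-oomd's view of the cgroups it monitors
oomctl dump

# Show the ManagedOOM* settings on the slice named in the log
systemctl show user-1000.slice -p ManagedOOMMemoryPressure -p ManagedOOMMemoryPressureLimit
```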
Have you tried rebooting into Xorg instead of using Wayland? Also, you're not using the NVIDIA drivers. You might also make sure Secure Boot and TPM are enabled in VirtualBox.
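For the TPM and Secure Boot part, something along these lines should do it on VirtualBox 7.x (a sketch; "Win11" is a placeholder for your VM's name, and double-check the flags against `VBoxManage --help` for your version):

```bash
# Attach an emulated TPM 2.0 device (the VM must be powered off)
VBoxManage modifyvm "Win11" --tpm-type=2.0

# Initialize the EFI variable store and enroll the standard keys
# so Secure Boot can be enabled for the VM
VBoxManage modifynvram "Win11" initefivarstore
VBoxManage modifynvram "Win11" enrollmssignatures
VBoxManage modifynvram "Win11" enrollorclpk
```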
Free RAM on the host should not impact RAM use in the VM, and 6 GB with 4 CPUs should not have an issue with most things. It is worth noting that in most OSes the RAM and CPUs assigned to a VM are not dedicated but shared, so the host and other VMs have access to the same hardware on a shared basis. As such, swap usage can become an issue if it happens a lot, with both the VM and its host competing for resources.
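If you want to check whether that kind of competition is happening, a few standard host-side tools (nothing VirtualBox-specific):

```bash
# Overall memory and swap usage on the host
free -h

# Kernel pressure-stall information; "full" time means tasks were
# completely stalled waiting on memory
cat /proc/pressure/memory

# Watch swap-in/swap-out activity (the si/so columns) once per second
vmstat 1
```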
It may also be an issue with available disk space for the VM. As with any OS, running out of space on the virtual disk used by the VM causes problems as well.
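That is easy to rule out from the host side, e.g. (assuming the default VM folder under your home directory; adjust the path to wherever your disk images live):

```bash
# Free space on the filesystem holding the VM images
df -h ~/"VirtualBox VMs"

# Registered virtual disks, with their current and maximum sizes
VBoxManage list hdds
```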
If I understand the post correctly, you have Fedora 37 on the host and Win11 in the VM, and you are trying to install both Chrome and Firefox on the VM simultaneously. It could easily become an issue of the two installers trying to write the same files (DLLs) and/or update the Windows registry at the same time, causing conflicts.
Have you tried installing one at a time? What else have you tried to mitigate the problem?
I was using the Win11 VM and this situation hasn't happened again, but I can't understand why my whole system started to lag; what I expected was that the VM would crash but my host would continue as normal.
In my experience, properly configured VMs are not annoyingly slower than native installs. Poor performance usually means either the host or the VM is under memory pressure or has some CPU-intensive processes. Linux and Windows both have many tools to help monitor CPU and memory pressure. OOM handling generally means some process gets killed rather than the system slowing to the point where it is non-responsive.
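On Fedora that killing is done by systemd-oomd, and if its 50% pressure limit turns out to be too aggressive for your workload, it can be raised with a drop-in. A sketch, assuming the limit is applied on the per-user slice as the log above suggests (verify the unit and setting against the systemd-oomd docs for your release; 80% is an arbitrary example value):

```bash
# Override the default memory-pressure limit for all per-user slices
sudo mkdir -p /etc/systemd/system/user-.slice.d
sudo tee /etc/systemd/system/user-.slice.d/99-oomd-override.conf > /dev/null <<'EOF'
[Slice]
ManagedOOMMemoryPressureLimit=80%
EOF
sudo systemctl daemon-reload
```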
One common cause of excessive memory pressure I have often encountered with new users is pressing Ctrl+Z when a command-line job is taking a long time, ending up with many suspended jobs still using memory.
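If that is what's happening, the shell itself can list and clean up those jobs:

```bash
# List suspended/background jobs in the current shell
jobs -l

# Resume the most recent suspended job in the foreground...
fg

# ...or kill a specific one by job number
kill %1
```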
With spinning disks, I have often seen a big drop in performance just before the disk fails.
Windows often has a lot of background overhead scanning for malware, etc. If this is what causes OOM to kill the VM, you could get into an endless loop of restarts.
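A quick way to confirm that loop is to look for repeated kills in the journal (plain journalctl, nothing exotic):

```bash
# Everything systemd-oomd logged since the current boot
journalctl -b -u systemd-oomd.service
```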
There is also the particular hypervisor in use: each uses its own style of management, and all use some portion of host RAM and CPU time to operate.
With each VM, the assigned processors and RAM are not dedicated but shared. Thus usage can be traded back and forth between host and VM, which means more intensive work for the host in keeping track of what is being used by each OS, adding to the host's workload and RAM usage (and potentially requiring more swap, etc.). After all, the host is the one with the physical RAM and CPUs and thus has to manage their utilization via the hypervisor.
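You can actually watch that bookkeeping live, per control group; on a systemd-based host, for example:

```bash
# Per-cgroup CPU, memory, and I/O usage, ordered by memory consumption;
# the VM's scope shows up alongside everything else
systemd-cgtop --order=memory
```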
I have 32 GB RAM and 12 CPUs in my VM host and never have more than 2 VMs active at a time, each assigned only 4 GB RAM and 2 CPUs. I have never seen the situation you describe, but can envision it easily. I also only use libvirt, qemu, and VMM to manage my VMs and do not use VirtualBox, so I have no VB experience to compare.
I presently have 2 VMs active, essentially idling, with the host showing 13 GB RAM used, ~6 GB swap used, and ~30% CPU usage (I am only running 8 GB of swap total on the host). When the VMs are not in use, the host usually shows ~4 GB RAM, 0 swap, and still about 27% CPU usage.
You made me curious to investigate the native hypervisor stack you are using. Even though I knew it existed, I had never paid attention to it! What better way to virtualize than type 1 virtualization; I now understand why you don't see the kind of poor performance I'm describing in my case with VBox.
Thanks! I found an amazing video that explains the whole process using the Linux Hypervisor Setup (libvirt/qemu/kvm) stack.