First of all, you have several third-party repositories enabled, not just cdemu:
I consider rpmfusion quite normal, and google-chrome can, I think, be enabled by default these days, but the others are definitely not covered by our QA/testing.
A major question is why you have nvidia repos enabled if you have no nvidia card. Are you sure there is no nvidia graphics hardware in your machine? You might give us the output of `lspci`, and also of `sudo dnf list --installed *vidia*` and `glxinfo | grep evice`. I just want to verify that there is no nvidia. However, given your logs, I presume for now that no nvidia driver is involved but only amdgpu, which the logs indeed seem to indicate (if that proves right, I suggest disabling the nvidia repos!).
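For convenience, here are those checks in one place (the `evice` pattern is intentional so that it matches both "Device" and "device" in the output):

```
# Show PCI devices (look for any nvidia entry):
lspci

# Show installed packages with "vidia" in their name
# (quoted so the shell does not expand the glob itself):
sudo dnf list --installed '*vidia*'

# Show the GPU/renderer in use:
glxinfo | grep evice
```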
This implies two modifications your kernel has recognized; both can be linked to the problem, but do not have to be:
- 4096 - An out-of-tree module has been loaded.
- 8192 - An unsigned module has been loaded in a kernel supporting module signature.
This means your kernel has one or more modifications that are not covered by our QA/testing, and for now I presume it is not nvidia. That could be cdemu: its kernel module might have caused both of these taints, though I am not sure for now.
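If you want to see which loaded modules carry these taint flags, a quick check (a generic sketch; that cdemu's kernel module is named vhba is my assumption):

```
# Show the kernel's current taint bitmask (0 = untainted);
# 12288 would be 4096 (out-of-tree) + 8192 (unsigned):
cat /proc/sys/kernel/tainted

# List loaded modules that carry taint flags; an out-of-tree,
# unsigned module such as cdemu's vhba would show up as (OE):
grep '(' /proc/modules
```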
However, given your logs, I would keep your kernel taints in mind, but I think what you experience is a known issue (actually, two issues that are sometimes easy to confuse) that is currently under assessment by AMD. I assume that because of this:
```
mag 28 19:28:11 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
mag 28 19:28:11 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
mag 28 19:28:08 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
mag 28 19:28:08 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
mag 28 19:28:07 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
mag 28 19:28:07 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
mag 28 19:28:07 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
mag 28 19:28:07 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
mag 28 19:28:07 kernel: amdgpu 0000:0c:00.0: [drm] *ERROR* dc_dmub_srv_log_diagnostic_data: DMCUB error - collecting diagnostic data
```
You might read my recent 5 posts in another topic beginning here, and maybe also the posts of other people there about these two AMD issue cases (keep in mind that in a few posts, people mixed up other issues with these two), to get an overview: the two AMD upstream issues (#4141 and #4238; links are in my mentioned posts) manifest in a way comparable to what you describe. The logs are comparable, and several people have experienced the issue with kernels after 6.14.6 but not with 6.14.6 itself.
However, your logs contain several error entries that are not yet known from other reports. For now, I would assume that these entries and differences are linked to the modifications of your kernel. But given what you have in common with the others, especially the log entries with which the problem starts, I would assume for now that you have one of these two issues.
If you want to know for sure, you might try to revert the changes to bring your kernel to taint level 0 and see whether the error logs change. But I think this is not important for now: I expect you have one of the two issues, or both. So you might skip that for now to save time…
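Should you want to try it anyway at some point, a rough sketch (assuming cdemu's module is vhba; note that taint flags are sticky, so unloading alone does not reset them, you have to prevent the module from loading and reboot):

```
# Keep the module from auto-loading (a blacklist entry does not
# block an explicit modprobe, but it covers the usual case):
echo "blacklist vhba" | sudo tee /etc/modprobe.d/blacklist-vhba.conf
sudo reboot

# After the reboot, verify the taint value:
cat /proc/sys/kernel/tainted   # 0 = untainted
```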
I would tackle your case individually once the two AMD issues are solved in a future kernel, if the issue then remains for you. Until then, I suggest to:
- read through the topic I mentioned and through the two AMD issue tickets, as they already mention possibilities to temporarily mitigate the problem until it can be finally solved by a future kernel. What temporarily solves the issue for many is the kernel parameter `amdgpu.dcdebugmask=0x10`, which disables panel self-refresh (PSR); keep in mind that this increases the power use of your system, so you should not use it permanently, but it can serve as a mitigation until the problem is finally solved (see the sketch after this list for one way to set and remove it). Beyond that, see the tickets; there is some more exchange among people about mitigations there.
- check early for new kernel updates: each time a new kernel is there, remove whatever mitigation you have in place (e.g., `amdgpu.dcdebugmask=0x10`; the sketch below shows removal as well), and then check if the then-new kernel works again without the mitigation
- keep watching the AMD tickets: if the tickets get solved in a new kernel, and this new kernel does not work for you (or still needs the mitigation), then let us know here so we can get deeper into your case!
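As referenced in the list above, one common way on Fedora to set and later remove the kernel parameter is grubby (a sketch, not the only way; adjust if you manage kernel arguments differently):

```
# Add the PSR-disabling workaround to all installed kernels:
sudo grubby --update-kernel=ALL --args="amdgpu.dcdebugmask=0x10"

# When a new kernel arrives, remove it again before testing:
sudo grubby --update-kernel=ALL --remove-args="amdgpu.dcdebugmask=0x10"

# Verify what the currently running kernel was booted with:
cat /proc/cmdline
```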
Let’s hope for the best; these AMD issues have been a mess since 6.14.
But in the meantime, you might also check what I mentioned above about nvidia: verify for sure whether anything of nvidia is contained or installed, and if not, remove the repo.
One thing about nvidia: it is normal in any case that one package with nvidia in its name is installed: `nvidia-gpu-firmware.noarch 20250509-1.fc42 updates`
→ so if that is the only nvidia-related output of the commands I mentioned above (the dnf command in this case), then it is ok and you should be able to just remove the nvidia repo… But if you don’t know how you came to have the nvidia repo enabled, I would indeed check the `lspci` and `glxinfo | grep evice` outputs to see whether anything of nvidia is there, even if inactive, just to know about it.
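If it turns out there is indeed no nvidia hardware or driver, a sketch of how removing the repos could look (the repo id below is just an example, check the output of the first command for the actual ids on your system; the setopt form is the dnf5 syntax of current Fedora):

```
# List enabled repositories to spot the nvidia ones:
dnf repolist enabled

# Disable a repo by id (dnf5 / Fedora 41+ syntax):
sudo dnf config-manager setopt rpmfusion-nonfree-nvidia-driver.enabled=0

# On older dnf4 systems, the equivalent would be:
# sudo dnf config-manager --set-disabled rpmfusion-nonfree-nvidia-driver
```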