Prior kernels work without error. The debug 6.8.10 kernel reports that acpi_os_execute_deferred is hogging the cpu. (I installed and tried the debug kernel after the image above was generated.)
I have already tried deleting and reinstalling the 6.8.10 kernel with no change. What can I do next to get more debug info about the acpi start-up process to find the issue?
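One thing worth trying: the kernel has dedicated ACPI trace parameters you can append to the 6.8.10 entry from the GRUB edit screen. The mask values below are broad examples, not a prescription (see Documentation/firmware-guide/acpi/debug.rst in the kernel source for the layer/level bit meanings):

```text
# Appended to the kernel command line of the failing entry (very noisy):
acpi.debug_layer=0xffffffff acpi.debug_level=0xffffffff
# Keep everything on the console and enlarge the ring buffer:
ignore_loglevel log_buf_len=16M
```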
The inxi -Fzxx output is barely legible. In the meantime, stay on kernel 6.8.9; it looks like 6.8.10 has problems serious enough that you can't even retrieve logs for it.
I will try to remember to format code correctly. I’m not a frequent poster, so I sometimes forget.
journalctl -b -1 returns the previous boot of 6.8.9. There are NO journals of 6.8.10 and 6.8.10 never boots far enough for me to grab dmesg output for 6.8.10. Here’s the output under 6.8.9:
[ 0.974890] ahci: probe of 0000:00:0e.0 failed with error -22
[ 2.191478] pci 10000:e0:1c.4: bridge window [io size 0x1000]: failed to assign
[ 2.191480] pci 10000:e0:1d.0: bridge window [io size 0x1000]: failed to assign
[ 7.736716] thermal thermal_zone6: failed to read out thermal zone (-61)
How can I get to the log data before the system reaches multi-user? I have an rsyslogd server configured on my network, if there’s a boot option that lets me send logs to it at that point. The 6.8.10 boot is not present in any of the text log files or the journal files.
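Since you already have an rsyslogd box, the kernel's netconsole parameter may help: it streams kernel messages over UDP before any filesystem or syslog daemon is up (it needs the NIC driver built in or in the initrd, and a wired interface). The address below is a placeholder for your rsyslog host:

```text
# Appended to the kernel command line of the failing 6.8.10 entry:
netconsole=@/,6666@192.168.1.10/
# format: [src-port]@[src-ip]/[dev],[tgt-port]@tgt-ip/[tgt-macaddr];
# unspecified fields use defaults. If the target is on another subnet,
# give the gateway's MAC as tgt-macaddr.
```

On the rsyslog side, listen for raw UDP on that port (e.g. `module(load="imudp")` plus an `input(type="imudp" port="6666")` stanza), or just capture it with tcpdump on the server.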
I tried dnf remove kernel\* followed by dnf install kernel earlier today with no change.
The first image in the thread shows the kernel messages at the hang, which seems to be the tail end of i915 setup. I want to get at the underlying error so it can be fixed. I already know how to boot an older kernel.
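If the hang really is in i915 setup, DRM has its own verbosity switch that can be added to the failing entry. 0x1e is a commonly used mask (driver, KMS, prime, and atomic messages); it is very chatty, so enlarge the log buffer along with it:

```text
# Kernel command line additions for verbose i915/DRM logging:
drm.debug=0x1e log_buf_len=16M
```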
“Edit: Did as requested and watched the boot up. It gets to the initrd.target complete then waits indefinitely at the device. I’m thinking the boot image is corrupt in some way.”
I don’t have any other bits of information right now. I’m not on 6.8.10 and won’t be for a few days, so I can’t test.
There are two bridge windows that the system reports as larger than the I/O address space reserved by the OS. It appears that Dell used the same I/O address for the PCI bridges to buses e1 and e2. I think this may be where it’s hanging up, though I can’t be certain.
Here’s the equivalent portion of booting under 6.8.9:
May 22 13:07:17 eric-pc2 kernel: hpet_acpi_add: no address or irqs in _CRS
May 22 13:07:17 eric-pc2 kernel: ahci: probe of 0000:00:0e.0 failed with error -22
May 22 13:07:17 eric-pc2 kernel: wmi_bus wmi_bus-PNP0C14:02: WQBC data block query control method not found
May 22 13:07:17 eric-pc2 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 22 13:07:17 eric-pc2 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
May 22 13:07:17 eric-pc2 systemd[1]: Reached target network.target - Network.
May 22 13:07:17 eric-pc2 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
May 22 13:07:17 eric-pc2 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 22 13:07:17 eric-pc2 systemd[1]: Starting plymouth-start.service - Show Plymouth Boot Screen...
May 22 13:07:17 eric-pc2 kernel: vmd 0000:00:0e.0: PCI host bridge to bus 10000:e0
May 22 13:07:17 eric-pc2 kernel: pci_bus 10000:e0: root bus resource [bus e0-ff]
May 22 13:07:17 eric-pc2 kernel: pci_bus 10000:e0: root bus resource [mem 0x8c000000-0x8dffffff]
May 22 13:07:17 eric-pc2 kernel: pci_bus 10000:e0: root bus resource [mem 0x6025102000-0x60251fffff 64bit]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.0: [8086:09ab] type 00 class 0x088000 conventional PCI endpoint
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.0: Adding to iommu group 6
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: [8086:a0bc] type 01 class 0x060400 PCIe Root Port
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: PCI bridge to [bus e1]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: bridge window [io 0x0000-0x0fff]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: bridge window [mem 0x8c000000-0x8c0fffff]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: PME# supported from D0 D3hot D3cold
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: PTM enabled (root), 4ns granularity
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: Adding to iommu group 6
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: [8086:a0b0] type 01 class 0x060400 PCIe Root Port
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: PCI bridge to [bus e2]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: bridge window [io 0x0000-0x0fff]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: bridge window [mem 0x8c100000-0x8c1fffff]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: PME# supported from D0 D3hot D3cold
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: PTM enabled (root), 4ns granularity
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: Adding to iommu group 6
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: Primary bus is hard wired to 0
May 22 13:07:17 eric-pc2 kernel: pci 10000:e1:00.0: [1344:5411] type 00 class 0x010802 PCIe Endpoint
May 22 13:07:17 eric-pc2 kernel: pci 10000:e1:00.0: BAR 0 [mem 0x8c000000-0x8c003fff 64bit]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e1:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 10000:e0:1c.4 (capable of 63.012 Gb/s with 16.0 GT/s PCIe x4 link)
May 22 13:07:17 eric-pc2 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 22 13:07:17 eric-pc2 kernel: pci 10000:e1:00.0: Adding to iommu group 6
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: PCI bridge to [bus e1]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: Primary bus is hard wired to 0
May 22 13:07:17 eric-pc2 systemd[1]: Received SIGRTMIN+20 from PID 467 (plymouthd).
May 22 13:07:17 eric-pc2 kernel: pci 10000:e2:00.0: [1344:5411] type 00 class 0x010802 PCIe Endpoint
May 22 13:07:17 eric-pc2 kernel: pci 10000:e2:00.0: BAR 0 [mem 0x8c100000-0x8c103fff 64bit]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e2:00.0: 15.752 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x2 link at 10000:e0:1d.0 (capable of 63.012 Gb/s with 16.0 GT/s PCIe x4 link)
May 22 13:07:17 eric-pc2 kernel: pci 10000:e2:00.0: Adding to iommu group 6
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: PCI bridge to [bus e2]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: Primary bus is hard wired to 0
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: Primary bus is hard wired to 0
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: bridge window [mem 0x8c000000-0x8c0fffff]: assigned
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: bridge window [mem 0x8c100000-0x8c1fffff]: assigned
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: bridge window [io size 0x1000]: can't assign; no space
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: bridge window [io size 0x1000]: failed to assign
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: bridge window [io size 0x1000]: can't assign; no space
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: bridge window [io size 0x1000]: failed to assign
May 22 13:07:17 eric-pc2 kernel: pci 10000:e1:00.0: BAR 0 [mem 0x8c000000-0x8c003fff 64bit]: assigned
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: PCI bridge to [bus e1]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1c.4: bridge window [mem 0x8c000000-0x8c0fffff]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e2:00.0: BAR 0 [mem 0x8c100000-0x8c103fff 64bit]: assigned
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: PCI bridge to [bus e2]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e0:1d.0: bridge window [mem 0x8c100000-0x8c1fffff]
May 22 13:07:17 eric-pc2 kernel: pci 10000:e1:00.0: VMD: Default LTR value set by driver
May 22 13:07:17 eric-pc2 kernel: pci 10000:e2:00.0: VMD: Default LTR value set by driver
May 22 13:07:17 eric-pc2 kernel: pcieport 10000:e0:1c.4: can't derive routing for PCI INT A
May 22 13:07:17 eric-pc2 kernel: pcieport 10000:e0:1c.4: PCI INT A: no GSI
May 22 13:07:17 eric-pc2 kernel: pcieport 10000:e0:1c.4: PME: Signaling with IRQ 160
May 22 13:07:17 eric-pc2 kernel: pcieport 10000:e0:1d.0: can't derive routing for PCI INT A
May 22 13:07:17 eric-pc2 kernel: pcieport 10000:e0:1d.0: PCI INT A: no GSI
May 22 13:07:17 eric-pc2 kernel: pcieport 10000:e0:1d.0: PME: Signaling with IRQ 161
May 22 13:07:17 eric-pc2 kernel: vmd 0000:00:0e.0: Bound to PCI domain 10000
May 22 13:07:17 eric-pc2 kernel: nvme nvme1: pci function 10000:e1:00.0
May 22 13:07:17 eric-pc2 kernel: nvme nvme0: pci function 10000:e2:00.0
May 22 13:07:17 eric-pc2 kernel: pcieport 10000:e0:1c.4: can't derive routing for PCI INT A
May 22 13:07:17 eric-pc2 kernel: nvme 10000:e1:00.0: PCI INT A: no GSI
May 22 13:07:17 eric-pc2 kernel: pcieport 10000:e0:1d.0: can't derive routing for PCI INT A
May 22 13:07:17 eric-pc2 kernel: nvme 10000:e2:00.0: PCI INT A: no GSI
May 22 13:07:17 eric-pc2 kernel: nvme nvme1: allocated 64 MiB host memory buffer.
May 22 13:07:17 eric-pc2 kernel: nvme nvme0: allocated 64 MiB host memory buffer.
May 22 13:07:17 eric-pc2 kernel: nvme nvme1: 8/0/0 default/read/poll queues
May 22 13:07:17 eric-pc2 kernel: nvme nvme0: 8/0/0 default/read/poll queues
May 22 13:07:17 eric-pc2 kernel: nvme1n1: p1 p2 p3 p4 p5 p6 p7 p8
It looks like your boot process is waiting for a partition with some UUID, but it’s not appearing. I had the same issue. It turns out the UUID for my swap partition didn’t match the UUID on the kernel command line after “resume=”. For whatever reason, kernel 6.8.9 was fine with it, but 6.8.10 wasn’t. There are several places for the swap UUID: the swap partition itself, /etc/fstab, /etc/default/grub, /etc/kernel/cmdline and /boot/grub2/grub.cfg. Make all those files use the same UUID.
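A quick way to check that they all agree is to pull every UUID token out of those files and look for more than one unique value. A minimal sketch, run here against throwaway copies with a made-up UUID; on the real system, point the grep at the files listed above and read the true value from the partition with blkid:

```shell
# Sketch: verify the swap/resume UUID agrees everywhere it appears.
# On the real system, grep /etc/fstab, /etc/default/grub,
# /etc/kernel/cmdline and /boot/grub2/grub.cfg instead, and get the
# partition's actual UUID with: sudo blkid -s UUID -o value <swap-device>
tmp=$(mktemp -d)
printf 'UUID=1111-aaaa none swap defaults 0 0\n' > "$tmp/fstab"
printf 'GRUB_CMDLINE_LINUX="rhgb quiet resume=UUID=1111-aaaa"\n' > "$tmp/grub"
# One unique line means the UUIDs agree; more than one means a mismatch
grep -h -o 'UUID=[0-9a-f-]*' "$tmp/fstab" "$tmp/grub" | sort -u
rm -rf "$tmp"
```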
I looked deeper. The kernel is not to blame. Kernel 6.8.9 had systemd 255.4 baked into its initrd; kernel 6.8.10 has systemd 255.6. The newer systemd starts systemd-hibernate-resume.service; the old one doesn’t. I believe systemd-hibernate-resume.service looks for the resume partition specified on the kernel command line. If that partition doesn’t exist, the service waits for it indefinitely and blocks the rest of the boot.
Running sudo dracut --regenerate-all --force rebuilds the initrd images for all installed kernels. Now I can select Linux 6.8.9, and systemd-hibernate-resume.service is started, as seen in the kernel log.
I resized my swap partition a long time ago and updated /etc/fstab, but forgot to update the other files.
I’m facing the same kind of issue on my desktop (Intel i7-14700K, NVIDIA RTX 4070, XPG GAMMIX S11 Pro 2TB NVMe drive). On kernel 6.8.10, the disk just doesn’t get mounted/recognized and it keeps waiting. But I tried 6.8.9 and 6.8.8, both of which work fine. They had the same UUIDs for the disks in the boot menu as 6.8.10, so something seems specifically wrong with 6.8.10.
Ahm… Running sudo dracut --regenerate-all --force broke all the kernels (6.8.10, 6.8.9 and 6.8.8), so it seems dracut config generation is broken? But I found a workaround: editing the boot entry and removing the resume=xxxx option fixes the boot issue. Even 6.8.10 boots. So I think it’s something to do with dracut and the resume=xxx entry.
Ok, @proski is somewhat right here. In the past I had a swap partition, but I deleted it and commented out the swap entry in my /etc/fstab. The problem is, running sudo dracut --regenerate-all --force does not remove the resume=xxx entry from the boot entry, even though I removed that partition from /etc/fstab. Do I need to update something somewhere else to remove the resume=xxx entry?
I had to remove the resume=xxx entry from the GRUB_CMDLINE_LINUX variable in /etc/default/grub and then run sudo grub2-mkconfig. That removed resume=xxx from the boot entries and everything works fine. The breakage was probably the latest systemd hibernation service waiting endlessly for the resume partition.
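For anyone following along, the edit itself is small. A sketch of the substitution, shown on a sample line (the UUID is a placeholder) rather than on the live file:

```shell
# Demo of removing the resume= token from a GRUB_CMDLINE_LINUX line;
# make the same edit in /etc/default/grub, then regenerate the config with:
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg   (path may differ per distro)
line='GRUB_CMDLINE_LINUX="rhgb quiet resume=UUID=1234-abcd"'
echo "$line" | sed -E 's/ ?resume=[^ "]*//'
# → GRUB_CMDLINE_LINUX="rhgb quiet"
```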
I removed resume= and resume_offset= from /etc/kernel/cmdline and ran sudo dracut --regenerate-all --force to fix it. I had never gotten hibernate to work anyway.