Help with failed kernel upgrades - failure of initrd-switch-root.service

Hi

For the last couple of kernel upgrades, I have been unable to boot my new kernel installations.
I receive the following failure messages:
Failed to switch root: Specified switch root path '/sysroot' does not seem to be an OS tree. os-release file is missing.
and
Failed to start initrd-switch-root.service - Switch Root.

I have looked at this thread: https://discussion.fedoraproject.org…tch-root/75275 which seems to describe the same issue.
However, in my case:
My /sysroot is empty
I have no /boot folder
I have no /etc/default folder
I have no /etc/kernel folder

My last working installation was FC38 6.4.7, which is unfortunately also broken now.
My latest installed kernel is FC38 6.5.10

Any help to get this fixed will be appreciated.
Thank you 🙂

Which Fedora are you running?
Kinoite, Silverblue, Workstation, or ??

I am running a workstation Fedora

This seems to imply you are running one of the immutable variants and not the vanilla Workstation release. You also fail to tell us which version, which kernel, etc.

Even the one-line output from the command uname -a would be more informative than what has been posted so far. The output of inxi -Fzxx would be even better.

Hi Jeff, thank you for looking into this.
uname -a gives this:
Linux localhost.localdomain 6.5.10-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 2 19:59:55 UTC 2023 x86_64 GNU/Linux

The inxi command isn't installed; it returns "command not found".

When it shows command not found it should also suggest installing the appropriate package for that command. The inxi command is in the inxi package.
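On a normally booted system that should be a one-line install (not possible from emergency mode, of course):

sudo dnf install inxi   # pulls in the inxi package from the Fedora repos
inxi -Fzxx              # the system summary requested above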

The 6.5.10 kernel is behind the latest in F38. You have not stated which spin of fedora you are using so we still cannot tell exactly what the problem is.

Try using sudo dnf upgrade --refresh then post that command and all the following output so we can actually see the errors. Of course if you are using a spin that uses ostree then the dnf command won’t work.
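As a side note, one quick way to tell whether an install is ostree-based is that the following command only exists on those variants (a sketch; on a standard Workstation install it just returns "command not found"):

rpm-ostree status   # present on Silverblue, Kinoite, and other ostree variants only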

Hi Jeff
I am sure that I am using a standard installation of Fedora workstation.
Normally, sudo and dnf would run on my setup, but in the emergency mode I have access to, both sudo and dnf respond with "command not found".

Ok, sorry for misunderstanding the issue.

Do you have an install media usb device that may be used to boot to a live environment where repairs may be done?

If you do then please boot from the usb and we may attempt repairs that way. What I would like to see from the live system is lsblk -f. Now that I understand you are in emergency mode it is easier to plan the repair.

Hi Jeff, sorry for not explaining in more detail.
Here is an image of the lsblk -f output.

Thank you

Ok, from that I am not 100% certain but it appears that:

  1. sda is your installed system and you may be booting in legacy (MBR) mode,
    using LVM and ext4 for the file systems.
  2. sdb is showing no partitions.
  3. sdc appears to be the live media.

Please show the result of ls /dev/mapper

That is correct
Result of ls /dev/mapper


[liveuser@localhost-live ~]$ ls /dev/mapper
control  fedora_localhost--live-home  fedora_localhost--live-root  fedora_localhost--live-swap  live-base  live-rw
[liveuser@localhost-live ~]$

What is the output of sudo vgscan & sudo lvscan

What happens if you try sudo mount /dev/mapper/fedora_localhost--live-root /mnt
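If the logical volumes turn out not to be active in the live session, a rough sketch of activating them first (the VG name is taken from the ls /dev/mapper output above) would be:

sudo vgchange -ay fedora_localhost-live   # activate all LVs in the volume group
sudo lvscan                               # they should now show as ACTIVE
sudo mount /dev/mapper/fedora_localhost--live-root /mnt   # note the doubled hyphen in the mapper name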

[liveuser@localhost-live ~]$ sudo vgscan
Found volume group "fedora_localhost-live" using metadata type lvm2
[liveuser@localhost-live ~]$ sudo lvscan
ACTIVE '/dev/fedora_localhost-live/swap' [<7.86 GiB] inherit
ACTIVE '/dev/fedora_localhost-live/home' [406.90 GiB] inherit
ACTIVE '/dev/fedora_localhost-live/root' [50.00 GiB] inherit
[liveuser@localhost-live ~]$ sudo mount /dev/mapper/fedora_localhost--live-root /mnt
[liveuser@localhost-live ~]$ cd /mnt
[liveuser@localhost-live mnt]$ ls
afs bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys @System.solv tmp usr var
[liveuser@localhost-live mnt]$

Since the mount works, the following may allow a recovery.

cat /mnt/etc/fstab should show how things are supposed to be mounted.

I now suspect there may be corruption in the file system that gets mounted at /boot.
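Once the fstab is visible, a quick cross-check of the /boot entry against the real partition (a sketch, assuming /boot lives on /dev/sda1 as the lsblk output suggested) would be:

grep /boot /mnt/etc/fstab   # the UUID fstab expects for /boot
sudo blkid /dev/sda1        # the UUID the partition actually has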

[liveuser@localhost-live mnt]$ cat /mnt/etc/fstab

# /etc/fstab
# Created by anaconda on Sun Oct 21 22:21:36 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/fedora_localhost--live-root /     ext4 defaults 1 1
UUID=2aa88d9c-fff0-40c8-994a-60b135851084 /boot ext4 defaults 1 2
/dev/mapper/fedora_localhost--live-home /home ext4 defaults 1 2
/dev/mapper/fedora_localhost--live-swap swap  swap defaults 0 0
[liveuser@localhost-live mnt]$

As I suspected, /dev/sda1 mounts at /boot.
What happens if you do

  1. su then
  2. mount /dev/sda1 /mnt/boot

If that fails then we need to do an fsck on that file system with fsck.ext4 /dev/sda1
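For reference, fsck should only be run while the file system is unmounted, so if the mount fails the sequence would look roughly like this (a sketch):

umount /dev/sda1 2>/dev/null   # make sure it is not mounted anywhere
fsck.ext4 -f /dev/sda1         # -f forces a full check even if the fs is marked clean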

No errors when mounting

[liveuser@localhost-live mnt]$ su
[root@localhost-live mnt]# mount /dev/sda1 /mnt/boot
[root@localhost-live mnt]# cd boot
[root@localhost-live boot]# ls
config-6.4.7-200.fc38.x86_64 initramfs-0-rescue-39503a44741241208ace6fcd826a0b7c.img memtest86+x64.bin System.map-6.5.7-200.fc38.x86_64
config-6.5.10-200.fc38.x86_64 initramfs-6.4.7-200.fc38.x86_64.img symvers-6.4.7-200.fc38.x86_64.xz System.map-6.5.8-200.fc38.x86_64
config-6.5.7-200.fc38.x86_64 initramfs-6.5.10-200.fc38.x86_64.img symvers-6.5.10-200.fc38.x86_64.xz vmlinuz-0-rescue-39503a44741241208ace6fcd826a0b7c
config-6.5.8-200.fc38.x86_64 initramfs-6.5.7-200.fc38.x86_64.img symvers-6.5.7-200.fc38.x86_64.xz vmlinuz-6.4.7-200.fc38.x86_64
efi initramfs-6.5.8-200.fc38.x86_64.img symvers-6.5.8-200.fc38.x86_64.xz vmlinuz-6.5.10-200.fc38.x86_64
extlinux loader System.map-6.4.7-200.fc38.x86_64 vmlinuz-6.5.7-200.fc38.x86_64
grub2 lost+found System.map-6.5.10-200.fc38.x86_64 vmlinuz-6.5.8-200.fc38.x86_64
[root@localhost-live boot]#

Please use the preformatted text button </> when posting text. It retains the formatting as seen on screen, where the block quote does not.
This shows mine.

$ ls /boot
config-6.5.10-300.fc39.x86_64                            symvers-6.5.10-300.fc39.x86_64.xz
config-6.5.11-300.fc39.x86_64                            symvers-6.5.11-300.fc39.x86_64.xz
config-6.5.12-300.fc39.x86_64                            symvers-6.5.12-300.fc39.x86_64.xz
efi                                                      System.map-6.5.10-300.fc39.x86_64
grub2                                                    System.map-6.5.11-300.fc39.x86_64
initramfs-0-rescue-c50eedc1bcb245299bc96b1173897ac2.img  System.map-6.5.12-300.fc39.x86_64
initramfs-6.5.10-300.fc39.x86_64.img                     vmlinuz-0-rescue-c50eedc1bcb245299bc96b1173897ac2
initramfs-6.5.11-300.fc39.x86_64.img                     vmlinuz-6.5.10-300.fc39.x86_64
initramfs-6.5.12-300.fc39.x86_64.img                     vmlinuz-6.5.11-300.fc39.x86_64
loader                                                   vmlinuz-6.5.12-300.fc39.x86_64
lost+found

On yours I see
initramfs for rescue, 6.4.7, 6.5.7, 6.5.8, & 6.5.10
config for 6.4.7, 6.5.7, 6.5.8, & 6.5.10
vmlinuz for rescue, 6.4.7, & 6.5.10
symvers for 6.5.7, 6.5.8, & 6.5.10
System.map for 6.5.7, & 6.5.8

Assuming the default system config for kernels:
There should be 3 entries each for symvers, config, & System.map
There should be 4 entries each for initramfs & vmlinuz

This clearly shows that the upgrade was somehow corrupted, so it will be necessary to complete it.
The following steps should work.
Starting from where you are.

  1. for i in proc sys run dev ; do mount -o bind /$i /mnt/$i ; done
  2. chroot /mnt
  3. mount -a
  4. dnf upgrade --refresh

Wait until that completes, then give it a couple more minutes before rebooting.
Rebooting will involve exiting the chroot environment with exit, then performing a normal reboot. Let us know the results if there is a failure at any step, and especially let us know if there is success (see the consolidated sketch below).

Also, please post the output of ls /boot/efi while in the chroot environment.
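For completeness, the whole sequence as typed from the live session looks roughly like this (a sketch assuming the root LV is still mounted at /mnt; each line is entered one at a time in the same terminal):

for i in proc sys run dev ; do mount -o bind /$i /mnt/$i ; done   # bind the virtual file systems the chroot needs
chroot /mnt                 # enter the installed system
mount -a                    # mount /boot (and anything else) per /etc/fstab
dnf upgrade --refresh       # finish the interrupted kernel upgrade
ls /boot/efi                # the extra output requested above
exit                        # leave the chroot before rebooting normally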

Sorry for the miss in formatting.
dnf upgrade ran successfully; you can see the bottom of its output below.
Rebooting now.

Installed:
  kernel-6.5.12-200.fc38.x86_64                   kernel-core-6.5.12-200.fc38.x86_64               kernel-devel-6.5.12-200.fc38.x86_64      kernel-modules-6.5.12-200.fc38.x86_64     
  kernel-modules-core-6.5.12-200.fc38.x86_64      kernel-modules-extra-6.5.12-200.fc38.x86_64     

Complete!
[root@localhost-live /]# ls /boot/efi
EFI  mach_kernel  System
[root@localhost-live /]#

Reboot done; it results in the same situation as with the other kernels, with failure of initrd-switch-root.service and only access to emergency mode …