Failed to log in to graphical session after upgrade F37 → F39


Running which python should return /usr/bin/python.
When it returns /home/xuser/.pyenv/shims/python for you, it means you have set up a local python environment. I don’t do much with python, so I don’t know exactly what you need to do, but you probably ran a script (or added one to your $HOME/.profile) to set up the python environment. I think you need to comment it out or stop running it so that you are running the python that came with the Fedora distribution. Depending on how /etc/sudoers is configured, sudo could be exporting your local python environment. You could try sudo su - root; the dash tells su to clear the environment.
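If the environment does come from a login script, the pyenv block usually looks something like the lines below. This is a hedged sketch using a scratch file in place of your real ~/.profile (the file name and exact lines on your system may differ); it shows how sed can comment the pyenv lines out:

```shell
# Typical pyenv init lines (hypothetical; yours may live in ~/.profile,
# ~/.bashrc, or ~/.bash_profile and may differ slightly).
profile=$(mktemp)
cat > "$profile" <<'EOF'
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
EOF
# Disable them by prefixing each pyenv-related line with '#':
sed -i 's/^.*pyenv.*$/#&/' "$profile"
cat "$profile"
rm -f "$profile"
```

After commenting the lines out in the real file, log out and back in so the login shell picks up the change.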

No, this is not the reason. As you can see, the shebang names an explicit python:

$ head /usr/bin/dnf
#!/usr/bin/python3

This is a wild guess: try an SELinux auto-relabel:

touch /.autorelabel
reboot

Does that fix it?

Your python environment might still be setting a number of environment variables that tell python to search inside your local environment instead of wherever the Fedora python installation lives under /usr or /lib.

If you suspect that some packages are not installed correctly, you could try running rpm -Va.

I agree.
Maybe you set up the path within either ~/.bashrc or ~/.bash_profile and would need to remove it from there.
To see exactly what is in your user’s path you can run echo $PATH.
If it shows /home/xuser/.pyenv/shims before /usr/bin, that would explain both the output of the which command and the failure of the dnf command.
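To see how the ordering matters, here is a small self-contained demonstration; a throwaway directory stands in for /home/xuser/.pyenv/shims:

```shell
# Whichever directory appears first in PATH wins the lookup.
shims=$(mktemp -d)                        # stand-in for ~/.pyenv/shims
touch "$shims/python" && chmod +x "$shims/python"
# With the shims directory first, the shim is what which/command -v finds:
PATH="$shims:/usr/bin" command -v python
rm -rf "$shims"
```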

yes, I have /home/xuser/.pyenv/shims before /usr/bin in my $PATH

how to fix it properly?

I added /usr/bin in the beginning of $PATH, but it has no effect, I got the same error

PATH is not used to find python here, as the shebang is explicitly set to /usr/bin/python3.
But if you set other PYTHON* env vars then they could break DNF.
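As a quick illustration (using a hypothetical scratch directory), any directory placed in PYTHONPATH ends up on python’s module search path, which is how a local environment can shadow the modules DNF imports:

```shell
# PYTHONPATH entries are added to python's module search path,
# so modules there can shadow the system ones that dnf imports.
d=$(mktemp -d)
PYTHONPATH="$d" python3 -c 'import sys, os
print(os.environ["PYTHONPATH"] in sys.path)'
rm -rf "$d"
```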

What output do you see from env | grep PYTHON?

In this answer I assume you are probably using the default bash shell.

As I mentioned, look at the content of ~/.bashrc and/or ~/.bash_profile to find out if the path may be set within those files.
You can do this with less ~/.bashrc or cat ~/.bash_profile to see the content.
Usually a user makes a change there to add specific changes to the default $PATH the system provides.

Once you know where the added path entry is made then it is relatively simple to change that, by editing the script that adds the entry.
We could make an explicit suggestion on how to change it if you were to post the output of cat <filename> once you locate exactly which script file adds the entry.

It may be one of the 2 files I already named, or it may be in /etc/bashrc, /etc/profile, or one of several files under /etc/profile.d/.

In any case, the line that alters the $PATH variable and adds that portion would contain text like PATH=/home/xuser/.pyenv/shims:$PATH, so it can be found with grep: grep home/xuser/.pyenv/shims <filename>, where you replace <filename> with the actual name of the file you are checking. You can even use a wildcard (*) to search multiple files at once; grep will return the name of every file where the string is located.
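A self-contained sketch of that search, with a scratch file standing in for one of the start-up files:

```shell
# grep -l prints only the names of files that contain the pattern.
f=$(mktemp)                                   # stand-in for ~/.bash_profile
echo 'PATH=/home/xuser/.pyenv/shims:$PATH' > "$f"
grep -l 'pyenv/shims' "$f"                    # prints the file name on a match
rm -f "$f"
```

On the real system the equivalent would be something like grep -l 'pyenv/shims' ~/.bashrc ~/.bash_profile ~/.profile, optionally extended to /etc/profile and /etc/profile.d/*.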

How, exactly, are you running dnf?

Are you running from root account or from your user account via sudo?
If via sudo, I cannot see how an env var would be passed to dnf to break it.

As a test I did this:

$ PYTHONPATH=qqq sudo env
HISTSIZE=1000
HOSTNAME=fender.chelsea.private
LANG=en_GB.UTF-8
LS_COLORS=rs=0:di=34;01:ln=32:mh=00:pi=47;31:so=47;31:do=47;31:bd=47;31:cd=47;31:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=47;31:tw=47;34:ow=47;34:st=47;34:ex=35:*.sh=35:*.py=33:*.ml=36:*~=37:
TERM=xterm-256color
LC_ALL=en_GB.UTF-8
MAIL=/var/spool/mail/barry
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
LOGNAME=root
USER=root
HOME=/root
SHELL=/bin/bash
SUDO_COMMAND=/usr/bin/env
SUDO_USER=barry
SUDO_UID=1000
SUDO_GID=1000

Note that PYTHONPATH is not in the root environment.

Did you try the selinux /.autorelabel ?

outputs nothing

I know a lot of people are commenting, and it can be overwhelming, but I think it’s best to just use a Live USB and mount your broken / to /mnt.

Use the LiveUSB’s dnf --installroot=/mnt/<your-broken-system> to fix your install.

You can run

  1. dnf --installroot=/run/media/liveuser/fedora_localhost-live/ clean all to clear out any cached packages.
  2. dnf --installroot=/mnt/<your-broken-system> distro-sync to get the machine back to a usable state.

what exactly should I do?

sudo touch /.autorelabel
then reboot

I did. reboot
got this for some time

and got the “oh no…” error screen as usual.
in tty

dnf or sudo dnf still gives the same error

@hamrheadcorvette should it be fedora39 liveusb or f37 is also suitable for this?

It can involve kernel or GRUB related packages, so better mount all system partitions.

Hi @vgaetera!
I’m sorry, I’m a bit confused.

These are my partitions:

Can anybody tell me what exactly should I do?

First, if you want to fix your F37, I would boot from its live image, since you are likely to have what is needed for that version on your system.
Create the image using media writer if you can, and then boot from it.
In the live system, mount all of your partitions as @vgaetera notes (read the link he supplies) and follow the instructions given by @hamrheadcorvette to repair your F37 installation first, then upgrade to F39 once F37 is working. At least that is the way I see you getting to the desired end result of being upgraded to F39 with the least pain.

how can I identify the /boot and the /root partition?

ok, I used lsblk

according to the manual at the link I should do this:

Restoring the bootloader using the Live disk. LVM case

  1. mount /root
mkdir -p /mnt/root && mount /dev/nvme0n1p3 /mnt/root
  2. mount /boot
mount /dev/nvme0n1p2 /mnt/root/boot
  3. Mount system processes and devices into the /root filesystem.
mount -o bind /dev /mnt/root/dev
mount -o bind /proc /mnt/root/proc
mount -o bind /sys /mnt/root/sys
mount -o bind /run /mnt/root/run
  4. On UEFI systems, bind the efivars directory and mount the EFI system partition (e.g. /dev/sda1).
mount -o bind /sys/firmware/efi/efivars /mnt/root/sys/firmware/efi/efivars
mount /dev/nvme0n1p1 /mnt/root/boot/efi
  5. Change your filesystem to the one mounted under /mnt/root.
chroot /mnt/root/
  6. Re-install GRUB2 and re-generate the GRUB2 configuration file.
dnf reinstall shim-* grub2-efi-* grub2-common
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
  7. Sync and exit the chroot.
sync && exit
  8. reboot

then

to clear out any cached packages.

dnf --installroot=/run/media/liveuser/fedora_localhost-live/ clean all 

to get the machine back to a usable state.

dnf  --installroot=/mnt/root distro-sync 

2 things.

  1. Please post text that you copy and paste within preformatted text tags using the </> button on the toolbar. Images cannot be searched, nor can text be copied from them, so others may miss important information unless they read your post in detail.

  2. The command lsblk -f shows more detail, including UUIDs, file system type, and mount points. Once you have identified the mount point, looking at the way the file system is mounted in /etc/fstab gives the options to use. In your case it appears the root file system is on /dev/nvme0n1p3, so to mount it on /mnt the command is likely
    sudo mount -t btrfs -o subvol=root,compress=zstd:1 /dev/nvme0n1p3 /mnt

For my system I see this:

$ cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Nov  9 23:30:56 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=cc672c16-525d-4da7-89ba-9ce023e77b31 /                       btrfs   subvol=root,compress=zstd:1 0 0
UUID=cc7a0826-6f46-4016-9b5d-d4a0fe52938e /boot                   ext4    defaults        1 2
UUID=1879-5F54          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=cc672c16-525d-4da7-89ba-9ce023e77b31 /home                   btrfs   subvol=home,compress=zstd:1 0 0



$ lsblk -f
NAME   FSTYPE FSVER LABEL  UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sr0                                                                            
zram0                                                                          [SWAP]
vda                                                                            
├─vda1 vfat   FAT32        1879-5F54                             581.4M     3% /boot/efi
├─vda2 ext4   1.0          cc7a0826-6f46-4016-9b5d-d4a0fe52938e  614.4M    30% /boot
└─vda3 btrfs        fedora cc672c16-525d-4da7-89ba-9ce023e77b31   42.9G    10% /home
                                                                               /

Note that instead of the device name, fstab uses the UUID for mounting, and that is the preferred way. You would then replace /dev/nvme0n1p3 with UUID=<your uuid here> if you choose that naming method.
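If you want to pull the UUID and mount options for / straight out of an fstab, awk can do it. Here the relevant lines from the example fstab above are fed in via a heredoc so the command is reproducible; on the real system you would read /etc/fstab from the mounted root instead:

```shell
# Print the device (UUID=...) and the options column for the / entry.
awk '$2 == "/" { print $1; print $4 }' <<'EOF'
UUID=cc672c16-525d-4da7-89ba-9ce023e77b31 /     btrfs subvol=root,compress=zstd:1 0 0
UUID=cc7a0826-6f46-4016-9b5d-d4a0fe52938e /boot ext4  defaults 1 2
EOF
```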

I find the easiest way to mount the entire installed system under /mnt on the live system booted from usb is to use a chroot environment. This is the example I would use based on the outputs I show for my vm above. You would modify the mount in step 2 according to your own system.

open a terminal then

  1. su
  2. mount UUID=cc672c16-525d-4da7-89ba-9ce023e77b31 -t btrfs -o subvol=root,compress=zstd:1 /mnt
  3. for p in sys proc dev run ; do mount -o bind /$p /mnt/$p ; done
  4. chroot /mnt
  5. mount -a

Once this is done you are effectively in an environment equivalent to having booted your normal system directly and can do what is required for recovery. The exit command gets you back to the live usb environment when done.

Note that step 4 you gave above is eliminated by my process.
Step 6 is wrong. It is based on the way it would have been done with Fedora 32 and earlier. The command should be grub2-mkconfig -o /boot/grub2/grub.cfg. If you have already run the step 6 commands, recovery will be necessary.

Step 8 and the following are also wrong. After rebooting in step 8 you would not have anything mounted at /mnt/root, so the dnf commands would not work. Those dnf commands need to be run after exiting the chroot environment but before rebooting.