Update not working on F41 (6.12.9-200.fc41.x86_64)

Hi,

I am updating my system and the process is stuck in a loop.
Using kernel “6.12.9-200.fc41.x86_64”

I see two items that need to be updated in the Software app: Fedora Platform and Freedesktop SDK.

I hit the Update All button and the process runs, only to have the items reappear shortly after the refresh.

In the terminal, if I run “sudo dnf update”, the process is clean and shows “Complete!” at the end.

I completed this process on my Framework laptop without issue.

Is there a way I can resolve this?

Please use the Terminal and check with pkcon:

  1. pkcon refresh
  2. pkcon update
  3. If there are still issues, pkcon repair
  4. Try the update again and give feedback.

man pkcon documents more options.

Here is what I have tried so far:

  1. sudo dnf update --refresh
  2. sudo dnf update --refresh
  3. sudo dnf clean all
  4. sudo rpm --rebuilddb
  5. sudo dnf history list — Shows no issues

No change to the situation in the app, but the command line shows “Nothing to do.”

$ pkcon refresh
Refreshing cache [=========================]
Finished [=========================]
$ pkcon update
Getting updates [=========================]
Finished [=========================]
No packages require updating to newer versions.
$ pkcon repair
[=========================]
Finished [=========================]
[=========================]
Waiting for authentication [=========================]
Finished [=========================]

No change. The app still refreshes and shows 2 updates required.

These look like Flatpak packages, which you can update by running

flatpak update

The Software app should handle that when you ask it to do the updates, which you said you did do.
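
If you want to double-check that the two items Software is showing really are Flatpak runtimes, Flatpak itself can list what it considers pending (the remotes configured on your system may differ from mine):

$ flatpak list --runtime
$ flatpak remote-ls --updates

The first command lists the installed runtimes; the second shows which refs on each configured remote still have updates available.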

flatpak update
Looking for updates…

          ID                            Branch    Op    Remote     Download
  1. [✗]  org.fedoraproject.Platform    f41       u     fedora      29.7 MB / 755.0 MB
  2. [✓]  com.getpostman.Postman        stable    u     flathub    150.5 MB / 153.6 MB
  3. [✗]  org.freedesktop.Sdk           24.08     u     flathub     10.1 MB / 616.8 MB

Error: Error reading from file descriptor: Input/output error
Error: Error pulling from repo: While pulling runtime/org.freedesktop.Sdk/x86_64/24.08 from remote flathub: Opening content object ba8545f0547e5b51c9ef8fa93f600065e90af0f481b408b8149da3681691ac77: Opening content object ba8545f0547e5b51c9ef8fa93f600065e90af0f481b408b8149da3681691ac77: Couldn’t find file object ‘ba8545f0547e5b51c9ef8fa93f600065e90af0f481b408b8149da3681691ac77’

This could mean you have a faulty disk. You can run journalctl -k -p warning to check.
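
Before assuming hardware, flatpak repair may also be worth a try, since the error is about a missing object in the local OSTree repository rather than a failed download. It verifies the local repo and prunes damaged objects (see man flatpak-repair); it will not help if the underlying disk really is at fault:

$ sudo flatpak repair

If only one runtime is affected, reinstalling it can also force the missing objects to be fetched again:

$ flatpak install --reinstall flathub org.freedesktop.Sdk//24.08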

hub 12-0:1.0: config failed, hub doesn’t have any ports! (err -19)
Jan 16 06:57:57 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 11:58:01 kernel: nvidia: loading out-of-tree module taints kernel.
Jan 16 11:58:01 kernel: nvidia: module license ‘NVIDIA’ taints kernel.
Jan 16 11:58:01 kernel: Disabling lock debugging due to kernel taint
Jan 16 11:58:01 kernel: nvidia: module license taints kernel.
Jan 16 11:58:01 kernel:
Jan 16 11:58:01 kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module 565.77 Wed Nov 27 23:33:08 UTC 2024
Jan 16 11:58:01 kernel: nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint.
Jan 16 11:58:17 kernel: xhci_hcd 0000:66:00.0: xHC error in resume, USBSTS 0x401, Reinit
Jan 16 11:58:22 kernel: Bluetooth: hci0: HCI Enhanced Setup Synchronous Connection command is advertised, but not supported.
Jan 16 12:20:42 kernel: BTRFS error (device nvme0n1p4): zstd decompression failed, error 20 root 256 inode 301225 offset 0
Jan 16 12:20:42 kernel: BTRFS error (device nvme0n1p4): zstd decompression failed, error 20 root 256 inode 301225 offset 0
Jan 16 12:21:51 kernel: BTRFS error (device nvme0n1p4): zstd decompression failed, error 20 root 256 inode 301225 offset 0
Jan 16 12:37:57 kernel: apple 0005:05AC:024F.0009: unknown main item tag 0x0
Jan 16 12:38:16 kernel: apple 0005:05AC:024F.000A: unknown main item tag 0x0
Jan 16 12:46:40 kernel: apple 0005:05AC:024F.000B: unknown main item tag 0x0

Looks like Btrfs has an issue!
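
A scrub and the per-device error counters should show whether this is a one-off corruption or something ongoing (assuming / is on the affected Btrfs filesystem; adjust the mount point if not):

$ sudo btrfs scrub start -B /
$ sudo btrfs device stats /

scrub start -B stays in the foreground and prints a summary of checksum errors when it finishes; device stats shows the cumulative read, write, and corruption error counters for each device in the filesystem.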

After booting into rescue mode, I ran a check on the partition and found no issues, but the problem persists.

Error: Error reading from file descriptor: Input/output error
Error: While trying to checkout f204b2a59b67831c67f7174fda5ac0cc40c438e5211dfdb9d0ac6c0ad893e025 into /var/lib/flatpak/runtime/org.freedesktop.Sdk/x86_64/24.08/.f204b2a59b67831c67f7174fda5ac0cc40c438e5211dfdb9d0ac6c0ad893e025-Af6kH9: Opening content object ba8545f0547e5b51c9ef8fa93f600065e90af0f481b408b8149da3681691ac77: Couldn’t find file object ‘ba8545f0547e5b51c9ef8fa93f600065e90af0f481b408b8149da3681691ac77’

I checked the directory and the file doesn’t exist. journalctl says:
Jan 16 15:39:22 kernel: BTRFS info (device nvme0n1p4 state M): use zstd compression, level 1
Jan 16 15:43:31 kernel: BTRFS error (device nvme0n1p4): zstd decompression failed, error 20 root 256 inode 301225 offset 0
Jan 16 15:43:31 kernel: BTRFS error (device nvme0n1p4): zstd decompression failed, error 20 root 256 inode 301225 offset 0
Jan 16 15:44:21 kernel: BTRFS error (device nvme0n1p4): zstd decompression failed, error 20 root 256 inode 301225 offset 0
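
Is there a way to find out which file that inode belongs to? From what I can tell, btrfs inspect-internal should map it to a path (assuming the subvolume with ID 256 is the one mounted at /, which seems to be the usual Fedora layout):

$ sudo btrfs inspect-internal inode-resolve 301225 /

If that resolves to something under /var/lib/flatpak, I could probably delete and reinstall the affected runtime, though that would not explain what corrupted the data in the first place.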

Looks like a hardware issue causing the Btrfs errors. It appears you have an NVMe drive that’s on its way out. I would back up whatever data you can and replace it ASAP!
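
To see whether the drive itself is reporting problems, check its NVMe health data (assuming the device is /dev/nvme0n1, going by the nvme0n1p4 partition in your logs; smartctl comes from smartmontools and nvme from nvme-cli):

$ sudo smartctl -a /dev/nvme0n1
$ sudo nvme smart-log /dev/nvme0n1

Non-zero media errors, a high percentage used, or any critical warning in that output would back up the failing-drive theory.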

Wow,

The drive is pretty new

That’s actually not too uncommon. Drives tend to fail either very early or very late: they last anywhere from weeks to a couple of months, or else for years. It should be covered under warranty though, right?

I had a Samsung 970 Pro die after 1 year and 2 weeks of barely any use: it was powered on, since it was installed, but never actually used, as it was empty. The equivalent of an engine throwing a rod after 10 miles.

Sometimes the componentry just fails. The SD card in my R-Pi is running pi-hole and is absolutely fine after 6 years of constant use. Others go through a card per year.

Electronics lottery…

Yup. We’ve had to RMA a number of brand-new server-grade NVMe drives that died within a couple of weeks to months as well, while others from the same order are chugging along fine two years later. There’s an expected bathtub curve with SSD failures.