Metadata I/O error in xfs_imap_to_bp, len 32 error 5, F34

I came in to see a Fedora 34 workstation at a black screen with this message:

[198.810300] XFS (dm-0): metadata I/O error in "xfs_imap_to_bp+0x3e/0x50 [xfs]" at daddr 0x3339300 len 32 error 5

I recently reimaged the entire system via Clonezilla because there were different XFS errors. Should I format the drive or dd it? If so, with what parameters?

Just a note: Clonezilla does not do a reinstall. It restores whatever you previously cloned from “somewhere”.

You do not say if the system has been updated, where the copy you restored came from, or anything else helpful. For all we know, the image could be 5 months old and could have come from a different machine, which could introduce errors by itself.

I would suggest that, since the system does seem to boot from that cloned image, you first do a full upgrade:

sudo dnf clean all
sudo dnf upgrade
sudo dnf distro-sync

Then, if you get the same error again, we know you are fully upgraded and there is a known point to work from.

The error you report may be file system related or memory related, given the address shown and the fact that it points to xfs.

I’ll have to see if I can even boot it to a desktop, as this is what’s on the console now after a hard reboot. I did indeed use an image from a working workstation about a month ago. The linked thread shows the previous errors.

Was the error linked in the earlier thread on the same machine/drive?

If it was, then you definitely have an issue with the drive, and it has not been fixed. If not, then you still have an issue with the drive that needs to be fixed.

If it is a file system error then fsck can likely fix it.
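
Since the error message mentions XFS on dm-0, the actual check would be xfs_repair rather than plain fsck, run against the unmounted filesystem from the live media. Something along these lines should work (the mapper path is a guess based on Fedora's default LVM naming, so adjust it to what lsblk shows):

sudo xfs_repair -n /dev/mapper/fedora_localhost--live-root

The -n flag only reports problems without modifying anything; drop it to actually attempt repairs.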

I suspect, however, that it is a drive failure, which can be checked by using smartmontools and running “smartctl -av /dev/sda”. The output should give a fair evaluation of the status of the drive.

You can boot to the live install media, then install smartmontools and use smartctl there to check it out. fsck would also need to be run from the live media.
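
From the live session that would be roughly (assuming the internal drive shows up as /dev/sda, which you can confirm with lsblk):

sudo dnf install smartmontools
sudo smartctl --all /dev/sda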

After booting from a rescue disk, don’t I have to mount certain filesystems and chroot to be able to run dnf?

No, it will install to the virtual disk that backs the live session. The limitation is that it is only a temporary install and must fit within the available system RAM. Most systems have enough RAM to handle that and more while running a live session.

On Fedora apparently this syntax does not work: INVALID ARGUMENT TO -v: /dev/sda

[liveuser@localhost-live ~]$ smartctl --all /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.11.12-300.fc34.x86_64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

Smartctl open device: /dev/sda failed: Permission denied
[liveuser@localhost-live ~]$ sudo smartctl --all /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.11.12-300.fc34.x86_64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     TOSHIBA MQ01ACF050
Serial Number:    58UNTMIPT
LU WWN Device Id: 5 000039 881b0cd72
Firmware Version: AV003D
User Capacity:    500,107,862,016 bytes [500 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      2.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Sep 13 09:26:54 2021 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)	Offline data collection activity
					was never started.
					Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(  120) seconds.
Offline data collection
capabilities: 			 (0x5b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 (  99) minutes.
SCT capabilities: 	       (0x003d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 128
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   098   050    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   100   100   001    Pre-fail  Always       -       1749
  5 Reallocated_Sector_Ct   0x0033   100   100   050    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   033   033   000    Old_age   Always       -       26947
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       147
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       63
193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -       1483388
194 Temperature_Celsius     0x0022   100   100   000    Old_age   Always       -       34 (Min/Max 18/52)
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0032   080   080   000    Old_age   Always       -       8052
241 Total_LBAs_Written      0x0032   100   100   000    Old_age   Always       -       2705620819
242 Total_LBAs_Read         0x0032   100   100   000    Old_age   Always       -       1330457384
254 Free_Fall_Sensor        0x0032   100   100   000    Old_age   Always       -       0

SMART Error Log Version: 1
ATA Error Count: 1301 (device log contains only the most recent five errors)
	CR = Command Register [HEX]
	FR = Features Register [HEX]
	SC = Sector Count Register [HEX]
	SN = Sector Number Register [HEX]
	CL = Cylinder Low Register [HEX]
	CH = Cylinder High Register [HEX]
	DH = Device/Head Register [HEX]
	DC = Device Command Register [HEX]
	ER = Error register [HEX]
	ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 1301 occurred at disk power-on lifetime: 26880 hours (1120 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 80 00 d3 07 40  Error: UNC at LBA = 0x0007d300 = 512768

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 20 80 00 d3 07 40 00      00:03:32.309  READ FPDMA QUEUED
  ef 10 02 00 00 00 a0 00      00:03:32.298  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 e0 00      00:03:32.297  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 a0 00      00:03:32.297  IDENTIFY DEVICE
  ef 03 45 00 00 00 a0 00      00:03:32.296  SET FEATURES [Set transfer mode]

Error 1300 occurred at disk power-on lifetime: 26880 hours (1120 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 78 00 d3 07 40  Error: UNC at LBA = 0x0007d300 = 512768

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 20 78 00 d3 07 40 00      00:03:29.679  READ FPDMA QUEUED
  ef 10 02 00 00 00 a0 00      00:03:29.667  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 e0 00      00:03:29.667  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 a0 00      00:03:29.667  IDENTIFY DEVICE
  ef 03 45 00 00 00 a0 00      00:03:29.666  SET FEATURES [Set transfer mode]

Error 1299 occurred at disk power-on lifetime: 26880 hours (1120 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 98 00 d3 07 40  Error: UNC at LBA = 0x0007d300 = 512768

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 20 98 00 d3 07 40 00      00:03:27.053  READ FPDMA QUEUED
  ef 10 02 00 00 00 a0 00      00:03:27.046  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 e0 00      00:03:27.046  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 a0 00      00:03:27.045  IDENTIFY DEVICE
  ef 03 45 00 00 00 a0 00      00:03:27.045  SET FEATURES [Set transfer mode]

Error 1298 occurred at disk power-on lifetime: 26880 hours (1120 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 f8 00 d3 07 40  Error: UNC at LBA = 0x0007d300 = 512768

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 20 f8 00 d3 07 40 00      00:03:24.439  READ FPDMA QUEUED
  ea 00 00 00 00 00 a0 00      00:03:24.369  FLUSH CACHE EXT
  61 20 38 80 40 71 40 00      00:03:24.368  WRITE FPDMA QUEUED
  61 08 30 08 f0 6e 40 00      00:03:24.368  WRITE FPDMA QUEUED
  61 20 28 c0 ed fb 40 00      00:03:24.368  WRITE FPDMA QUEUED

Error 1297 occurred at disk power-on lifetime: 26880 hours (1120 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 41 10 00 d3 07 40  Error: UNC at LBA = 0x0007d300 = 512768

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 20 10 00 d3 07 40 00      00:03:21.753  READ FPDMA QUEUED
  ef 10 02 00 00 00 a0 00      00:03:21.753  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 e0 00      00:03:21.752  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 a0 00      00:03:21.752  IDENTIFY DEVICE
  ef 03 45 00 00 00 a0 00      00:03:21.751  SET FEATURES [Set transfer mode]

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     26383         -
# 2  Short offline       Completed without error       00%     25715         -
# 3  Short offline       Completed without error       00%         0         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Every error displayed there shows the same block, and there are over 1300 errors in the SMART log.

My suggestion is to immediately replace that drive.

Once you have the replacement drive, you can use ddrescue to copy the device if you are unable to access the file system and copy the data normally. ddrescue can make an image of the device while skipping the blocks that are non-recoverable, and you can use it on the whole device or on a single partition.
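
A rough sketch, assuming the failing disk is /dev/sda and the replacement shows up as /dev/sdb once connected (double-check the device names with lsblk before running anything, since getting them backwards would overwrite the good drive):

sudo dnf install ddrescue
sudo ddrescue -d /dev/sda /dev/sdb rescue.map

The rescue.map file records progress, so the copy can be stopped and resumed, and you can re-run it later with retries (for example adding -r3) to go back over the bad areas.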

You might also consider running badblocks on that device to see the extent of the failure.
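
For example, the default read-only scan with progress and verbose output would be (this still stresses a failing drive, so only run it if you accept that risk):

sudo badblocks -sv /dev/sda

Redirect the output to a file if you want to keep the list of bad blocks it finds.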

Yes, I made an error in that smartctl command. I was thinking of -v as verbose, as in most commands, but smartctl uses -v differently. (That is what I get for not checking the man page before posting.)

Yes, I’m ordering a new drive; I’m just seeing if I can buy some time on this one. But this doesn’t look promising:

sudo xfs_repair /dev/mapper/fedora_localhost--live-root
Phase 1 - find and verify superblock...
superblock read failed, offset 54999908352, size 131072, ag 2, rval -1

fatal error -- Input/output error

And badblocks has found 24 bad blocks so far.

I also tried to remove those bad blocks, but I’m getting this since it’s LVM and XFS:

e2fsck -l ../badblocks.txt /dev/mapper/fedora_localhost--live-home 
e2fsck 1.45.6 (20-Mar-2020)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/mapper/fedora_localhost--live-home

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

/dev/mapper/fedora_localhost--live-home contains a xfs file system labelled 'home'

So is my syntax wrong?

from dmesg:

[Mon Sep 13 16:31:34 2021] ata1: EH complete
[Mon Sep 13 16:35:08 2021] ata1.00: exception Emask 0x0 SAct 0x8000000 SErr 0x0 action 0x0
[Mon Sep 13 16:35:08 2021] ata1.00: irq_stat 0x40000008
[Mon Sep 13 16:35:08 2021] ata1.00: failed command: READ FPDMA QUEUED
[Mon Sep 13 16:35:08 2021] ata1.00: cmd 60/00:d8:00:60:3b/01:00:27:00:00/40 tag 27 ncq dma 131072 in
                                    res 41/40:00:00:60:3b/00:01:27:00:00/40 Emask 0x409 (media error) <F>
[Mon Sep 13 16:35:08 2021] ata1.00: status: { DRDY ERR }
[Mon Sep 13 16:35:08 2021] ata1.00: error: { UNC }
[Mon Sep 13 16:35:08 2021] ata1.00: configured for UDMA/100
[Mon Sep 13 16:35:08 2021] sd 0:0:0:0: [sda] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=2s
[Mon Sep 13 16:35:08 2021] sd 0:0:0:0: [sda] tag#27 Sense Key : Medium Error [current] 
[Mon Sep 13 16:35:08 2021] sd 0:0:0:0: [sda] tag#27 Add. Sense: Unrecovered read error - auto reallocate failed
[Mon Sep 13 16:35:08 2021] sd 0:0:0:0: [sda] tag#27 CDB: Read(10) 28 00 27 3b 60 00 00 01 00 00
[Mon Sep 13 16:35:08 2021] blk_update_request: I/O error, dev sda, sector 658202624 op 0x0:(READ) flags 0x0 phys_seg 33 prio class 0
[Mon Sep 13 16:35:08 2021] ata1: EH complete

You should never try to remove bad blocks from the system. They are marked bad because they are not usable and the system has already relocated any available data to a new location. Marking them as bad prevents the system from attempting to use them again.

Sorry, I misspoke; I meant marking them as bad so they are not used.

The system automatically marks blocks as bad when needed. However, there is a limit to how many can be marked, and judging by the number of SMART errors reported and the repeated errors on the same block in your log, I would guess that limit has already been exceeded.

You also cannot mark blocks bad unless the partition of interest has been mounted for writing.

The more you run the drive, the more damage occurs and the less likely data recovery will be successful. I suggest just powering it off and waiting for the replacement, unless you have another drive with enough space for an image of that partition. If you have the space, start ddrescue on it and let it run to create the image while waiting for the new drive.
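
For example, imaging just the home logical volume to a file on an external drive would look roughly like this (the /mnt/external paths are placeholders for wherever you mount the spare drive):

sudo ddrescue -d /dev/mapper/fedora_localhost--live-home /mnt/external/home.img /mnt/external/home.map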