Need to figure out how to use badblocks properly | File Recovery

Does anyone know how to use badblocks correctly? I tried running badblocks sdb on a USB drive (both mounted and unmounted), but it says it can't find the device either way. I need to see whether I can fix the drive, because the files on it have started turning into 0-byte files ever since I switched to Fedora from Arch Linux. I don't know how big the blocks are, or how many there are.
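(For reference: badblocks wants the full device node rather than the bare name. Assuming the stick really does enumerate as sdb, confirming the node and doing a read-only scan looks like this:)

lsblk                         # confirm which node the stick actually is (size/model make it easy to spot)
sudo badblocks -sv /dev/sdb   # read-only scan, with progress (-s) and verbose output (-v)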

SMART says the drive is fine. Other error-checking tools say it's fine. This is the USB drive in question: https://www.amazon.com/dp/B0BPN3H914

I bought it at the end of December 2023.

1 Like

Yes!

This might be a different problem, but before we go using badblocks, I have to ask:

  • Is the USB drive encrypted?
  • What filesystem are you using on the USB drive?
  • Can you boot a LiveUSB, mount the ‘bad’ drive there, and do some inspection?

badblocks can potentially destroy the data on the drive if run with the wrong flags, but it can also save a list of bad blocks to a file, which you can feed back to your filesystem if your filesystem supports that.
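For ext4, that report path looks roughly like this (a sketch: it assumes the filesystem sits on /dev/sdb1, is unmounted, and uses the common 4096-byte block size; check with tune2fs -l):

# Easiest route: let e2fsck run badblocks itself (read-only scan)
sudo e2fsck -c /dev/sdb1
# Or run badblocks by hand; match the filesystem block size so the block list lines up
sudo badblocks -b 4096 -sv -o badblocks.txt /dev/sdb1
sudo e2fsck -l badblocks.txt /dev/sdb1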


I will be on/off today as I also have some computer issues, but will check in from time to time.

2 Likes

Which means I've probably already destroyed it, since pretty much none of the first few dozen badblocks threads I found on Google said what flags do what. I ran badblocks with no flags whatsoever, so no progress indicator showed; it just asked for my sudo password, and after I hit Enter it sat there with a blinking terminal cursor and no output. I cancelled it after about 10-20 minutes.

  • The USB drive is not encrypted; I never encrypt drives.
  • ext4; the filesystem was created on the Arch system.
  • The drive was fine in a live environment. I've done these tests on Fedora Live and EndeavourOS. The files never had this zero-byte issue while I was using Arch Linux over the past month or two.

And the badblocks help doesn’t show all of the flags:

ian@fedora:~$ badblocks -help
Usage: badblocks [-b block_size] [-i input_file] [-o output_file] [-svwnfBX]
       [-c blocks_at_once] [-d delay_factor_between_reads] [-e max_bad_blocks]
       [-p num_passes] [-t test_pattern [-t test_pattern [...]]]
       device [last_block [first_block]]

After searching some more, I found out about the -n flag, which the help text doesn't actually explain (it's buried in the -svwnfBX cluster). And badblocks doesn't accept -- long options.
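For the record, -n selects the non-destructive read-write test: each block is read, overwritten with test patterns, and then restored from the saved copy. The filesystem must be unmounted first. A sketch, again assuming the stick is /dev/sdb:

sudo umount /dev/sdb1          # nothing on the stick may be mounted
sudo badblocks -nsv /dev/sdb   # non-destructive read-write test with progress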

Not necessarily... We can use forensic tools or file-retrieval utilities to recover the files from the disk. So we're not technically in panic mode yet.

Hmm, OK... that's still not too bad. What you'll have to do is boot a LiveUSB and mount the drive there so you can inspect it.

  1. :ballot_box_with_check: The drive not being encrypted helps a bit here.
  2. :ballot_box_with_check: If I remember correctly, ext4 can accept a report from badblocks if we get that far.
  3. :yellow_circle: What do you mean by this? Can you see the files and open them? How do you know they are not corrupted?
1 Like

What do you mean by this? Can you see the files and open them? How do you know they are not corrupted?

I can see the files, but they are zero-byte empty files. Opening them shows nothing inside. They still have their original filenames from before they became zero bytes, such as Data.txt and the like.

That is very strange... I'll have to do some digging.

badblocks would have done worse damage to the device than just zeroing out the contents of files. A destructive run acts like dd, overwriting the raw device (including the filenames...).

I’ll need to look into that a bit more.

I was told on HexChat to make an Arch live ISO, but since I'm on Fedora, I don't really have access to the Arch Linux repositories to install the tool for that.

I have a feeling Fedora has some ext4 performance feature enabled that I'm not aware of. I've been poking around and noticed that ext4 can have such a feature enabled (delayed allocation), where it doesn't write file data to disk immediately.

This wouldn't make sense, though, since a lot of the files that have become zero bytes are files I haven't touched in a long while.
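(If you want to rule that out, comparing the filesystem features and the mount options in effect on both distros is cheap. A sketch; the partition node and mount point here are assumptions, so substitute your own:)

sudo tune2fs -l /dev/sdb1 | grep -i 'features\|options'
findmnt -no OPTIONS /run/media/$USER/MyStick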

My other USB stick is a Ventoy stick I made, so I can simply drop distro ISOs onto it. I don't like tiling window managers, and constantly using the terminal annoys me sometimes, so it'd have to be a GUI-based distro like EndeavourOS.

What were they hoping to achieve on the Arch ISO that couldn't be achieved on the Fedora one? If it's ext4-related, sorry, but I'm not buying that.

This is turning into more of a file-recovery issue than a corrupted-drive one.

You can install testdisk & QPhotoRec to inspect the drive, look around, and see what can be recovered.

They both do more than just image recovery; you can find just about anything on the drive that hasn't been overwritten.
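Assuming the Fedora package names are testdisk and qphotorec (worth confirming with dnf search if that doesn't match):

sudo dnf install testdisk qphotorec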

1 Like

Do you know what that second one is actually named? No Google results.

Both are in the Fedora repos, btw.

So, what do I do in testdisk once I get here?

The text file there is one of the zero-byte files I want to recover; it only became zero-byte today, and as you can see, I hadn't touched it in a while. To get here, I ran testdisk /dev/sdb and followed the menus; it just put me here, with no warning or hint of what to do next.

1 Like

It looks like you've highlighted the file you want to recover; pressing C will copy it to a new location.
You can also go back and run it across the whole device; you just need to point it at a directory where it should save the files it finds for you.
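Roughly, the whole flow looks like this (a sketch from memory; menu labels can differ between testdisk versions):

sudo testdisk /dev/sdb
# Create (new log) -> select the disk -> partition table type (usually autodetected)
# -> Advanced -> pick the partition -> List to browse files
# Highlight a file, press 'c' to copy it, then choose a destination directory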


I need to step away for a bit.

After copying it, the file is still zero bytes in the new location. I then tried QPhotoRec, and it seems like it just grabs every file that was ever deleted on the drive. That doesn't seem like it would help if the files are still there, just 0 bytes.

It also looks like it's just restoring random files in general, ones that were never deleted or zero-byted.

Unless someone knows better, I would claim that running badblocks on an SSD or USB stick is not needed. Even on an HDD, I cannot recall the last time it would have been useful.

The key data you need is in the SMART data (though to truly understand it you would need an NDA with the maker).
There are likely enough clues in the SMART data to show that an SSD is failing.

Once it starts failing, you need to stop using it and back up as much data as you can that you do not already have backed up.

If you share the output of smartctl -a, we could check for any warning signs.
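Assuming the stick still shows up as /dev/sdb (check with lsblk first), that would be:

sudo smartctl -a /dev/sdb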
Here is some example output:

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        34 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    0%
Data Units Read:                    107,237,185 [54.9 TB]
Data Units Written:                 19,815,493 [10.1 TB]
Host Read Commands:                 950,818,904
Host Write Commands:                249,559,675
Controller Busy Time:               16,036
Power Cycles:                       1,245
Power On Hours:                     539
Unsafe Shutdowns:                   61
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               34 Celsius
Temperature Sensor 2:               53 Celsius

Notice that Available Spare is at 100%, so the controller has not detected any failing flash memory cells that needed sparing.

Also, you can use Data Units Written to estimate the remaining life of the device, as long as you know the endurance specification. (I will not buy SSDs that do not publish that figure.)
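For example, taking the output above and a hypothetical 600 TBW endurance rating (the rating is an assumption for illustration, not from any datasheet):

10.1 TB written / 600 TBW ≈ 1.7% of the rated write endurance used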

1 Like

I’m guessing I need to mount it?

/dev/sdb: Unknown USB bridge [0x154b:0x1007 (0x110)]
Please specify device type with the -d option.
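(Many USB-to-storage bridges will pass SMART through if you force the device type; whether this particular bridge supports it is not guaranteed, but it costs nothing to try:)

sudo smartctl -a -d sat /dev/sdb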

smartctl does not need the device mounted; it works at a very low level.

I see that error on USB sticks as well. I think that means there is no SMART data accessible.

Why did you think that SMART did not find any errors?

Because Windows/BIOS said that was the case when I last checked a few months back.

badblocks without arguments only reads the drive and is filesystem-agnostic, so it does not know how to create zero-byte files. For Fedora, and I assume for Arch too, a USB drive detached prematurely without a proper unmount can end up with incomplete or zero-sized files, though even then the journalling should keep ext4 consistent. However, 1 TB is a big drive, and with a lot of RAM a large write cache can take some time to flush at unmount. But if existing, correctly closed files with size > 0 become zero-sized, that is very strange. It could then be a difference in ext4 features, but that is very unlikely if Fedora Live does not show zero-size files.

Fedora with a desktop uses udisks2 to mount and unmount a USB drive from the GUI; Arch is probably the same, but I'm not sure.
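In any case, flushing and detaching from the command line before unplugging would rule out the premature-detach theory (a sketch; the partition name is an assumption):

sync                              # flush any cached writes to the stick
udisksctl unmount -b /dev/sdb1    # unmount through udisks2
udisksctl power-off -b /dev/sdb   # after this it is safe to unplug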

Two issues: that was months ago, and the BIOS check is a go/no-go test, which is of limited value when a device is in the process of failing.

Crystal Disk was the tool on Windows that said it was fine, however. That was right before I moved to Arch.

Regardless, how do I retest for SMART then?
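(If the -d sat trick above gets smartctl talking to the drive, the usual retest is a short self-test followed by a re-read of the attributes. A sketch; the device name and -d sat support are assumptions:)

sudo smartctl -t short -d sat /dev/sdb   # start a short (roughly 2 minute) self-test
sudo smartctl -a -d sat /dev/sdb         # afterwards, re-read attributes and the self-test log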