Need help, BTRFS drive failing to mount

Thank you very much for offering your time, jakfrost.
I would like to ask for one last piece of advice instead.
What are your thoughts on using ext4 on my secondary drive instead? (As I have clearly demonstrated my lack of understanding of btrfs.) Would that be safer?

I was a Mint user before coming to Fedora so as you can probably assume, I had a lot of hand holding with that distro. (It’s why I’m on Fedora Cinnamon)

I am reluctant to ever recommend ext4 since it is not a 64-bit filesystem and does not have the capability to access more than 4TB. Plus it is NOT a CoW (copy-on-write) filesystem. I would stick with BTRFS, but I would also make sure to back up my data externally if possible; I use a 4TB Seagate BUP for this and it has been a boon. You may be able to use the rescue command for your files; the info should still be there if it was successfully working up to now. So don't format yet, we're not done.
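For the external backups I mentioned, even a plain rsync run does the job. The paths below are only examples; substitute wherever your data lives and wherever the external drive gets mounted:

  rsync -avh --progress ~/Documents/ /run/media/$USER/SeagateBUP/Documents/   # archive mode, keep permissions and timestamps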

I did not even know that btrfs isn’t supposed to let me copy files immediately after formatting a drive. So I’m surprised to hear that I needed to do an extra step even though I managed to put files in it.

I don't think it would stop you, really; it's actually a volume and a valid place to store data. You could try to temp-mount it using the mount command from a terminal, maybe after booting from a live USB install of Fedora? I was more thinking that Fedora didn't stop you.
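For example, from a terminal in the live session (I'm assuming the partition shows up as /dev/sdb1 here; check the lsblk output for the real name):

  lsblk -f                               # find the btrfs partition and its UUID
  sudo mkdir -p /mnt/temp
  sudo mount -o ro /dev/sdb1 /mnt/temp   # read-only temporary mount

Mounting read-only keeps it from writing anything, and if it fails, the kernel's reason will show up in dmesg or the journal rather than just in a popup.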

I found a fedora 38 live usb but unfortunately I may have to do this tomorrow instead.

I tried it before bed anyway. The live environment shows the same error popup when trying to mount through the disks utility.

This is not true!

However, it is much more mature and stable.

I wonder what fs you used before btrfs became default a few Fedora releases ago.

Also, compare the amount of trouble users are having with btrfs versus ext4 by looking at the number of posts for each tag: #btrfs topics vs. #ext4 topics.

From the developer of ext4, as quoted in the Wikipedia article on ext4 …

In 2008, the principal developer of the ext3 and ext4 file systems, Theodore Ts'o, stated that although ext4 has improved features, it is not a major advance, it uses old technology, and is a stop-gap. Ts'o believes that Btrfs is the better direction because "it offers improvements in scalability, reliability, and ease of management".

And yes, it can do more than 4TB, I sit corrected on that, but I stand by my recommendation.

Not true.
I have a RAID array containing 8TB with LVM & ext4. I have another 8TB HDD with one partition of ext4.
Please verify the facts you claim.

# gdisk -l /dev/sdb
GPT fdisk (gdisk) version 1.0.9

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 15628053168 sectors, 7.3 TiB
Model: ST8000VN004-2M21
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 56C1AAA0-7D38-45DF-8129-0AC4469DBD23
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 15628053134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048     15628053134   7.3 TiB     8300  Linux filesystem

As I said above …

Btrfs has detected some corruption events (this is just a simple counter that ticks up every time it sees any corruption, so there could be many duplicates). And then it looks like a single device with the dup profile for metadata block groups, i.e. two copies of metadata, and both copies fail transid verification, so both copies are bad for some reason. We'd need to look at the journal for previous boots to see if there's any hint of what went wrong earlier, causing the corruption events, and whether it could relate to what's going on now.
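Pulling up those earlier messages is just standard journalctl usage; -1 here means the boot before the current one, adjust the offset as needed:

  journalctl --list-boots                     # see which previous boots are still in the journal
  sudo journalctl -k -b -1 | grep -i btrfs    # kernel messages from the previous boot, filtered to btrfs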

For now I'd say, run btrfs check --readonly and post those results. btrfs restore is hard to use, but it's safe because it's a read-only tool that scrapes data out of the file system and onto a new file system.
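Something along these lines, run against the unmounted device (I'm assuming /dev/sdb1 again, and the restore target is just an example directory on a separate, healthy drive):

  sudo btrfs check --readonly /dev/sdb1                                   # report-only, makes no changes
  sudo btrfs restore -v /dev/sdb1 /run/media/liveuser/backup/restored/    # copy whatever is readable onto another filesystem

The first command only reports problems; the second copies out what it can read, which is why the destination has to be on a different drive.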

In terms of btrfs vs ext4, I'd say they tend to have the same failure rates, but ext4 is easier to fix because its metadata is in known locations, so the repair tool can make a lot of assumptions, whereas on Btrfs there's no fixed location for anything.

So just don't make any repairs or any other changes for now until we understand the problem first. You can try using mount -o ro,rescue=all; this is also safe but comes with some caveats: data checksumming is disabled, so it is possible any files you copy out are corrupt. You'd have to check them yourself, because Btrfs won't warn about it with the rescue=all mount option enabled.
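A minimal sketch of that, again assuming /dev/sdb1, with an example mount point and folder name; comparing checksums against older copies, where you still have them, is one way to catch silently corrupted files:

  sudo mkdir -p /mnt/rescue
  sudo mount -o ro,rescue=all /dev/sdb1 /mnt/rescue   # best-effort read-only mount, checksums not enforced
  cp -a /mnt/rescue/important-stuff ~/recovered/      # copy out what you need
  sha256sum ~/recovered/important-stuff/*             # compare against known-good copies if you have any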

I regret to inform you that I already got it replaced under warranty.

I already have a completely blank drive and I will probably make it ext4 simply because the problem I had seemed to be very specific to btrfs.

As for backups, I found my old Linux Mint drive that was last used in April 2022. It had a lot of my files but I still suffered a decent loss.

I changed the notifs for this thread to direct mentions only because people started pointless arguments that were not attempting to solve my problem.

It only seems that way because the errors are Btrfs errors. If the same problem affects ext4 metadata, you'll get ext4 errors.

The Btrfs messages you posted show prior corruption events detected. Again we’d need to see the earlier messages to have an idea what was going on.

If a problem affects your data, only Btrfs will report the problem because only Btrfs checksums your data, and verifies checksum matching on every read. Ext4 doesn’t checksum your data, it only checksums the file system metadata.

File system metadata is less than 5% of what gets written to a drive; the rest is your data. So your data is a huge target for random sources of corruption. This is why Btrfs is more likely to complain: only Btrfs is checksumming everything.
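Once you're on a healthy Btrfs volume again, that checksumming is also what a periodic scrub exercises: it re-reads everything and verifies it, and the corruption counters mentioned earlier are visible with device stats. The mount point here is just a placeholder for wherever your btrfs volume is mounted:

  sudo btrfs scrub start /mnt/data     # re-read and verify checksums of data and metadata
  sudo btrfs scrub status /mnt/data    # progress and any uncorrectable errors
  sudo btrfs device stats /mnt/data    # cumulative per-device error counters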
