Consultant usually using Fedora on an external hard disk

Hello community,

I hope you are all staying safe and that you and your families are well.

I work as an IT consultant, and I usually use Fedora installed on an external hard disk, where I keep my labs and tools so I can be more independent of my employer’s Windows 10 laptop.

I have been using Btrfs, but the intense disk use worries me, and one of the partitions on my current disk has given me problems and smartctl alerts. I am currently cloning it with dd and trying to rescue the data… it is not an SSD.

I would like your advice: in your experience, which devices would you recommend for a portable Fedora hard disk, and which filesystem format would you recommend for the partitions?

My best wishes and kind regards,

Luis

Since Btrfs is relatively new to everyday use (Fedora began defaulting to it only with Fedora 33), I would suggest going back to ext4, which has years of use and reliability behind it. In fact, Btrfs is not yet recommended for enterprise-level storage.

I have used the Linux ext file systems from the beginning (ext, ext2, ext3 & ext4) as they were developed and made available, and to my knowledge I have never had a file system hiccup that was not caused by a power interruption of some sort.


I think it was Fedora 33 that made Btrfs the default for the desktop (Workstation) edition.

Use ext4 or xfs. Both are reliable and well tested. xfs is the default for Fedora’s enterprise counterparts (Red Hat & CentOS); ext4 was the Fedora default for a long time (and still is in many distributions).

What exactly do you mean by devices? Do you mean recommendations for portable hard drive products?

If I understand correctly, OP is asking about a USB drive to run Fedora from. If you’re not doing an enormous number of writes (something like recording, editing, and deleting videos every day), then an external SSD is your best choice. The difference in day-to-day speed is huge, it tolerates drops far better, it draws less of your laptop’s battery, and it’s lighter.

As for which to buy, these days they’re pretty much all fine. Decide on a size (keep in mind SSDs run best when not 100% full, so aim for 80% max), don’t buy the cheapest, and you should be fine.

Then, make sure to have a backup because nothing else can truly protect you (e.g. your SSD is stolen/lost/burns in a car crash).

Hope this helps!

I fixed my error. Thanks

ext4 or xfs or reiser4

What do you mean by intense use? And what kind of wear are you concerned about? If you’re worried about write amplification, you can use one of the compression options. By default we’re using zstd:1, the lightest compression level, with a computationally cheap estimator that checks whether the effort is worth it and bails out if not. If you’re worried about head seeks: fewer writes means fewer seeks, and Btrfs also tends to accumulate small file writes into sequential writes, which again means fewer seeks.
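
As a sketch, the compression described above is controlled at mount time, and existing data can be recompressed with defragment. The device and mount point below are examples, and the commands are only assembled into strings for review rather than executed:

```shell
DEV=/dev/sdb4   # example Btrfs device
MNT=/mnt/data   # example mount point

# Mount with the light zstd:1 level mentioned above (applies to new writes):
MOUNT_CMD="mount -o compress=zstd:1 $DEV $MNT"

# Recompress existing files in place (rewrites data, so avoid on a failing disk):
DEFRAG_CMD="btrfs filesystem defragment -r -czstd $MNT"

echo "$MOUNT_CMD"
echo "$DEFRAG_CMD"
```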

The animations on this page are old but the write behavior of the file systems hasn’t changed much. You could create maps of your own workload for comparison.

Btrfs isn’t perfect, but your advice leaves out that moving to ext4 is a net reduction in data integrity, because ext4 doesn’t checksum data. Considering the poster reports smartctl warnings on the drive, and that data is a much larger target than metadata (more than 90% of the blocks on a volume), hardware-induced problems are far more likely to show up in data. And right now only Btrfs catches these kinds of problems on Fedora.

This is not correct. SUSE Linux Enterprise has shipped Btrfs by default for about 8 years now. Facebook has millions of instances using Btrfs every day. They consider it reliable.

No file system can work around drive firmware or hardware defects. We’ve got quite a lot of evidence that Btrfs is stable on stable hardware, and does quite a good job of unambiguously reporting not only that there’s corruption, but the likely reason for the corruption.

In my experience triaging Btrfs bug reports, the problems overwhelmingly aren’t Btrfs bugs but hardware issues. The most common one I’ve seen is bit flips caused by faulty memory: the user replaced the RAM, the problem went away, and they lost no data. Not least, they were informed of the problem early on and could take action, rather than having this pernicious form of corruption fester for years, replicating into all copies and backups.

Yes. Fedora 33 for Workstation edition and desktop spins. Fedora 35 for Cloud edition.

There is a somewhat different data recovery strategy with Btrfs than ext4 or XFS. You really want to take advantage of options that don’t modify the file system. File system repair, a.k.a. fsck, will make irreversible changes to the file system. On hardware that you suspect may be failing, this is folly. My suggestion is:

Start with mount -o ro,rescue=all, which will use backup root trees, skip corrupt trees, and ignore checksum errors. You can apply these options selectively, for example if you want to avoid copying corrupt blocks. But typically, if you don’t have up-to-date backups, you want as much data as possible before the hardware fails, so you may be willing to accept the risk.
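
A sketch of that first step (the device and mount point are examples; rescue=all needs a reasonably recent kernel, and the command is only assembled as a string here for review):

```shell
DEV=/dev/sdb5     # example damaged Btrfs partition
MNT=/mnt/rescue   # example mount point

# Read-only mount that tolerates damage: backup root trees, skipping bad
# trees, and ignoring data checksum errors are all implied by rescue=all.
RESCUE_CMD="mount -o ro,rescue=all $DEV $MNT"
echo "$RESCUE_CMD"
```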

You can safely use btrfs check without options; it’s a read-only command. And you can share its output in support forums (Fedora or upstream).

Use ddrescue for making a clone of a drive. It focuses first on the easy-to-read blocks, so that most of your data is copied as fast as possible, then goes back and retries the blocks that returned errors. It also doesn’t spend too much time in any one area of the drive’s addressable range. The manual has more information on strategy.
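
A sketch of that workflow (device names and the mapfile path are examples; the commands are assembled as strings for review, to be run as root against unmounted devices):

```shell
SRC=/dev/sdb     # failing source drive (example)
DST=/dev/sdc     # destination drive, or an image file (example)
MAP=rescue.map   # mapfile lets an interrupted run resume where it left off

# First pass grabs the easy blocks quickly and skips hard-to-read areas:
PASS1="ddrescue -n $SRC $DST $MAP"

# Second pass goes back to the failed blocks, retrying each up to 3 times:
PASS2="ddrescue -r3 $SRC $DST $MAP"

echo "$PASS1"
echo "$PASS2"
```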

btrfs restore is a pretty ugly duckling of a tool, but very capable.

btrfs check with --repair or --init-extent-tree is a very heavy hammer with a non-zero chance of making things worse. Since Btrfs has no fixed locations for metadata, the tool can’t assume what kind of file system metadata should be at a particular location and infer repairs from that. Consider this command a last resort, and ask an expert user or a Btrfs developer before using it.

One of the best things about Btrfs is that it makes replication cheap, so you can take frequent backups and avoid disaster recovery. The btrfs send/receive workflow does not need a deep traversal on either the source or the target to know which files have changed and need updating. It also doesn’t resend files you’ve merely renamed or moved into another directory. It’s not magic, but if you don’t know how it works, it might be indistinguishable from magic because it’s that fast. The raw data transfer rate isn’t where Btrfs excels; it’s the nearly instant computation of what has changed. In particular, if you have many files with few changes, Btrfs makes it cheap enough that you could do incremental replication very frequently (minutes apart).
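
A sketch of that workflow, assuming read-only snapshots under example paths (the commands are only assembled as strings here):

```shell
SNAPS=/home/.snapshots   # example location for read-only snapshots
BACKUP=/mnt/backup       # example Btrfs filesystem on the backup drive

# (Snapshots would be created with: btrfs subvolume snapshot -r /home $SNAPS/home.N)

# Initial full replication of the first snapshot:
FULL="btrfs send $SNAPS/home.1 | btrfs receive $BACKUP"

# Incremental: with -p, only the delta between snapshots 1 and 2 is sent:
INCR="btrfs send -p $SNAPS/home.1 $SNAPS/home.2 | btrfs receive $BACKUP"

echo "$FULL"
echo "$INCR"
```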


Thank you very much you all for taking the time to answer.

Thank you @chrismurphy !
It is sdb5 on the USB external drive, which holds historical data, not day-to-day data like the /opt and /home subvolumes on /dev/sdb4.

I have another 2 TB USB external drive to use for rescuing as much as I can.

I am sharing some info below. After a little research, I am cloning the disk with ddrescue instead of dd, which failed. I will raise a message in another, more specific channel. It is going to be a tough job to get the data back, I guess.

A BIG thank you for your help.


I could not find any way to rescue the data to another disk using the btrfs rescue commands, because of a hardware disk problem at the beginning of the sdb5 partition.

I opted… to do a mkfs.btrfs /dev/sdb5 in the hope that I could later do some kind of recovery of the data. I now need a good plan to rescue the files and folders of that partition, or accept that the data is lost…

If you know of any tools (parted, etc.) that could help reassemble the Btrfs filesystem on that partition, or confirm that the data files and folders are still there somewhere, it would be very much appreciated.

# dmesg -w
[29497.245096] sd 3:0:0:0: [sdb] tag#5 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[29497.245108] sd 3:0:0:0: [sdb] tag#5 Sense Key : Hardware Error [current] 
[29497.245115] sd 3:0:0:0: [sdb] tag#5 Add. Sense: Internal target failure
[29497.245121] sd 3:0:0:0: [sdb] tag#5 CDB: Read(10) 28 00 c3 d4 c2 c0 00 01 00 00
[29497.245124] critical target error, dev sdb, sector 3285500608 op 0x0:(READ) flags 0x80700 phys_seg 32 prio class 0
[29497.245178] sd 3:0:0:0: [sdb] tag#6 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[29497.245183] sd 3:0:0:0: [sdb] tag#6 Sense Key : Hardware Error [current] 
[29497.245188] sd 3:0:0:0: [sdb] tag#6 Add. Sense: Internal target failure
[29497.245192] sd 3:0:0:0: [sdb] tag#6 CDB: Read(10) 28 00 c3 d4 c3 c0 00 01 00 00
[29497.245196] critical target error, dev sdb, sector 3285500864 op 0x0:(READ) flags 0x80700 phys_seg 32 prio class 0
[29500.266248] sd 3:0:0:0: [sdb] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[29500.266306] sd 3:0:0:0: [sdb] tag#4 Sense Key : Hardware Error [current] 
[29500.266318] sd 3:0:0:0: [sdb] tag#4 Add. Sense: Internal target failure
[29500.266330] sd 3:0:0:0: [sdb] tag#4 CDB: Read(10) 28 00 c3 d4 c3 00 00 00 08 00
[29500.266343] critical target error, dev sdb, sector 3285500672 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[29500.266366] Buffer I/O error on dev sdb5, logical block 1419360, async page read
[29502.954926] sd 3:0:0:0: [sdb] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
[29502.954983] sd 3:0:0:0: [sdb] tag#8 Sense Key : Medium Error [current] 
[29502.954994] sd 3:0:0:0: [sdb] tag#8 Add. Sense: Unrecovered read error
[29502.955005] sd 3:0:0:0: [sdb] tag#8 CDB: Read(10) 28 00 c3 d4 c3 00 00 00 08 00
[29502.955013] critical medium error, dev sdb, sector 3285500672 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[29502.955035] Buffer I/O error on dev sdb5, logical block 1419360, async page read

Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Mobile HDD
Device Model:     ST2000LM007-1R8174
Serial Number:    ZDZ5S906
LU WWN Device Id: 5 000c50 0b4865aa8
Firmware Version: SBK2
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Feb 19 10:12:55 2022 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.

General SMART Values:
Offline data collection status:  (0x00)	Offline data collection activity
   				was never started.
   				Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)	The previous self-test routine completed
   				without error or no self-test has ever 
   				been run.
Total time to complete Offline 
data collection: 		(    0) seconds.
Offline data collection
capabilities: 			 (0x71) SMART execute Offline immediate.
   				No Auto Offline data collection support.
   				Suspend Offline collection upon new
   				command.
   				No Offline surface scan supported.
   				Self-test supported.
   				Conveyance Self-test supported.
   				Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
   				power-saving mode.
   				Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
   				General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   1) minutes.
Extended self-test routine
recommended polling time: 	 ( 331) minutes.
Conveyance self-test routine
recommended polling time: 	 (   2) minutes.
SCT capabilities: 	       (0x3035)	SCT Status supported.
   				SCT Feature Control supported.
   				SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
 1 Raw_Read_Error_Rate     0x000f   073   043   006    Pre-fail  Always       -       18988119
 3 Spin_Up_Time            0x0003   098   097   000    Pre-fail  Always       -       0
 4 Start_Stop_Count        0x0032   098   098   020    Old_age   Always       -       2615
 5 Reallocated_Sector_Ct   0x0033   079   079   036    Pre-fail  Always       -       13344
 7 Seek_Error_Rate         0x000f   087   060   045    Pre-fail  Always       -       502656073
 9 Power_On_Hours          0x0032   095   095   000    Old_age   Always       -       5134 (132 66 0)
10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
12 Power_Cycle_Count       0x0032   098   098   020    Old_age   Always       -       2303
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   001   001   000    Old_age   Always       -       1296
188 Command_Timeout         0x0032   100   098   000    Old_age   Always       -       124556018153
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   076   039   040    Old_age   Always   In_the_past 24 (0 14 39 24 0)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       95
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       439
193 Load_Cycle_Count        0x0032   059   059   000    Old_age   Always       -       82679
194 Temperature_Celsius     0x0022   024   061   000    Old_age   Always       -       24 (0 12 0 0 0)
197 Current_Pending_Sector  0x0012   084   083   000    Old_age   Always       -       1368
198 Offline_Uncorrectable   0x0010   084   083   000    Old_age   Offline      -       1368
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       4479 (25 46 0)
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       40850124752
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       29522172864
254 Free_Fall_Sensor        0x0032   100   100   000    Old_age   Always       -       0

SMART Error Log Version: 1
ATA Error Count: 1296 (device log contains only the most recent five errors)
   CR = Command Register [HEX]
   FR = Features Register [HEX]
   SC = Sector Count Register [HEX]
   SN = Sector Number Register [HEX]
   CL = Cylinder Low Register [HEX]
   CH = Cylinder High Register [HEX]
   DH = Device/Head Register [HEX]
   DC = Device Command Register [HEX]
   ER = Error register [HEX]
   ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 1296 occurred at disk power-on lifetime: 5126 hours (213 days + 14 hours)
 When the command that caused the error occurred, the device was active or idle.

 After command completion occurred, registers were:
 ER ST SC SN CL CH DH
 -- -- -- -- -- -- --
 40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

 Commands leading to the command that caused the error were:
 CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
 -- -- -- -- -- -- -- --  ----------------  --------------------
 60 00 20 ff ff ff 4f 00      00:52:15.938  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:15.921  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:15.897  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:15.871  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:15.852  READ FPDMA QUEUED

Error 1295 occurred at disk power-on lifetime: 5126 hours (213 days + 14 hours)
 When the command that caused the error occurred, the device was active or idle.

 After command completion occurred, registers were:
 ER ST SC SN CL CH DH
 -- -- -- -- -- -- --
 40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

 Commands leading to the command that caused the error were:
 CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
 -- -- -- -- -- -- -- --  ----------------  --------------------
 60 00 20 ff ff ff 4f 00      00:52:15.282  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:15.258  READ FPDMA QUEUED
 61 00 08 ff ff ff 4f 00      00:52:15.245  WRITE FPDMA QUEUED
 61 00 08 ff ff ff 4f 00      00:52:15.245  WRITE FPDMA QUEUED
 61 00 08 ff ff ff 4f 00      00:52:15.244  WRITE FPDMA QUEUED

Error 1294 occurred at disk power-on lifetime: 5126 hours (213 days + 14 hours)
 When the command that caused the error occurred, the device was active or idle.

 After command completion occurred, registers were:
 ER ST SC SN CL CH DH
 -- -- -- -- -- -- --
 40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

 Commands leading to the command that caused the error were:
 CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
 -- -- -- -- -- -- -- --  ----------------  --------------------
 60 00 20 ff ff ff 4f 00      00:52:14.861  READ FPDMA QUEUED
 61 00 08 ff ff ff 4f 00      00:52:14.849  WRITE FPDMA QUEUED
 61 00 08 ff ff ff 4f 00      00:52:14.849  WRITE FPDMA QUEUED
 61 00 08 ff ff ff 4f 00      00:52:14.848  WRITE FPDMA QUEUED
 61 00 08 ff ff ff 4f 00      00:52:14.848  WRITE FPDMA QUEUED

Error 1293 occurred at disk power-on lifetime: 5126 hours (213 days + 14 hours)
 When the command that caused the error occurred, the device was active or idle.

 After command completion occurred, registers were:
 ER ST SC SN CL CH DH
 -- -- -- -- -- -- --
 40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

 Commands leading to the command that caused the error were:
 CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
 -- -- -- -- -- -- -- --  ----------------  --------------------
 60 00 20 ff ff ff 4f 00      00:52:14.596  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:14.582  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:14.568  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:14.554  READ FPDMA QUEUED
 60 00 20 ff ff ff 4f 00      00:52:14.540  READ FPDMA QUEUED

Error 1292 occurred at disk power-on lifetime: 5125 hours (213 days + 13 hours)
 When the command that caused the error occurred, the device was active or idle.

 After command completion occurred, registers were:
 ER ST SC SN CL CH DH
 -- -- -- -- -- -- --
 40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

 Commands leading to the command that caused the error were:
 CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
 -- -- -- -- -- -- -- --  ----------------  --------------------
 60 00 08 ff ff ff 4f 00      01:35:51.600  READ FPDMA QUEUED
 25 d5 08 ff ff ff 4f 00      01:35:49.208  READ DMA EXT
 b0 d5 01 c0 4f c2 00 00      01:35:49.195  SMART READ LOG
 b0 d5 01 00 4f c2 00 00      01:35:49.195  SMART READ LOG
 ef 03 46 d8 c3 d4 00 00      01:35:49.182  SET FEATURES [Set transfer mode]

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Conveyance offline  Completed: read failure       90%      5122         3552221256
# 2  Extended offline    Completed: read failure       90%      5070         3510252048
# 3  Short offline       Completed: read failure       90%      5062         3510252048
# 4  Short offline       Completed without error       00%      3307         -
# 5  Extended offline    Aborted by host               90%      1113         -

SMART Selective self-test log data structure revision number 1
SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
   1        0        0  Not_testing
   2        0        0  Not_testing
   3        0        0  Not_testing
   4        0        0  Not_testing
   5        0        0  Not_testing
Selective self-test flags (0x0):
 After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Disk /dev/sda: 119.24 GiB, 128035676160 bytes, 250069680 sectors
Disk model: SAMSUNG SSD PM85
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9be88da1

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048   2099199   2097152     1G 83 Linux
/dev/sda2       2099200 250068991 247969792 118.2G 83 Linux


Disk /dev/sdb: 1.82 TiB, 2000398933504 bytes, 3907029167 sectors
Disk model: Mobile Drive    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 04546D5B-999B-4619-8540-DCC4D21196D8

Device          Start        End    Sectors    Size Type
/dev/sdb1        2048    1230847    1228800    600M EFI System
/dev/sdb2     1230848    3327999    2097152      1G Linux filesystem
/dev/sdb3     3328000 1155362815 1152034816  549.3G Linux filesystem
/dev/sdb4  1155362816 3274145791 2118782976 1010.3G Linux filesystem
/dev/sdb5  3274145792 3907028991  632883200  301.8G Linux filesystem
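
Cross-checking the logs above (a quick sanity check, not new information): the failing 512-byte sector from dmesg falls inside sdb5, and the kernel’s “logical block 1419360” follows from the partition start in the fdisk output and the 4 KiB block size:

```shell
SECTOR=3285500672       # failing sector on /dev/sdb, from dmesg
PART_START=3274145792   # first sector of /dev/sdb5, from fdisk

# 512-byte sectors relative to the partition start, regrouped into 4 KiB blocks:
BLOCK=$(( (SECTOR - PART_START) * 512 / 4096 ))
echo "$BLOCK"   # 1419360, matching the "logical block" in the dmesg errors
```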

SuSE still recommends xfs for critical disk storage. Btrfs is intended only for the system partitions, where a crash is not that critical and could, in the worst case, be fixed by a simple reinstallation.
Citation:
XFS is the default file system for data partitions in SUSE Linux Enterprise Server.
Source: SLES 15 SP1 | Storage Administration Guide | Overview of File Systems in Linux

It’s similar for Facebook: they use it to clone git repos very fast. Those are working partitions, not critical data.

But hardware/firmware and a file system can be aligned and tested with each other for years or even decades to avoid such issues. Your point may be right, but it is still a point not in favor of Btrfs, which is young and not as well tested in all possible production constellations as xfs/ext4. And if one has the issues you described, it is still an issue and a (temporary) loss of data; it does not help that the firmware can be blamed rather than Btrfs. This is why it is not recommended for critical enterprise storage (Red Hat, SuSE; I also only know of Facebook as a production use case, and only for working partitions, which are mostly even temporary).

No, fsck can be used without making any changes to the file system, operating read-only. But you are right: the user has to know about it, and it is a separate option that has to be added to the command line before execution. By default, changes will be made.
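
For concreteness (the device name is an example, and the commands are only assembled as strings): ext4’s fsck is non-destructive only with an explicit flag, while a plain btrfs check reads without writing:

```shell
DEV=/dev/sdb5   # example device

# e2fsck -n answers "no" to every repair prompt, so nothing is written:
EXT4_RO="e2fsck -n $DEV"

# btrfs check only writes when --repair (or similar) is given explicitly:
BTRFS_RO="btrfs check $DEV"

echo "$EXT4_RO"
echo "$BTRFS_RO"
```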


This drive is failing. Your best bet is ddrescue to make an image, then work on the image, or better, on a copy of the image (use an overlay file on ext4, or a plain cp on XFS or Btrfs to make a reflink copy). There is some chance of recovery: by default mkfs.btrfs keeps duplicate copies of metadata blocks, which helps if the media defects don’t affect both copies. As for data, Btrfs can at least tell you which blocks are missing or corrupt.
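
The reflink copy mentioned above is just a plain cp with a flag on XFS or Btrfs (the image path is an example; the command is assembled as a string for review):

```shell
IMG=/mnt/backup/sdb.img   # example path of the ddrescue image

# On XFS or Btrfs this shares blocks copy-on-write, so the clone is nearly
# free; risky experiments then run against the copy, not the only image:
COPY_CMD="cp --reflink=always $IMG $IMG.work"
echo "$COPY_CMD"
```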

Once you have an image of the drive, you can loopback mount it with mount -o ro, and check for kernel messages. We need to see the kernel’s btrfs messages to understand why the mount is failing and to know which recovery options to try.
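
A sketch of that loopback step (paths are examples, and the commands are only assembled as strings for review):

```shell
IMG=/mnt/backup/sdb.img   # example whole-disk image from ddrescue
MNT=/mnt/restore          # example mount point

# Attach read-only; --partscan exposes partitions as /dev/loopNp1, p2, ...
LOSETUP_CMD="losetup --find --show --read-only --partscan $IMG"

# Then try e.g. the sdb5 partition (loop0p5 assumes losetup printed /dev/loop0):
MOUNT_CMD="mount -o ro /dev/loop0p5 $MNT"

echo "$LOSETUP_CMD"
echo "$MOUNT_CMD"
```

Watching journalctl -k (or dmesg) in another terminal during the mount attempt captures the btrfs error messages mentioned above.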

I opted… to do a mkfs.btrfs /dev/sdb5 in the hope that I could later do some kind of later recovery of the disk recovering the data.

Uhh? mkfs will make recovery significantly harder, if not impossible, because it writes a new, empty file system over the old one.

Hi Chris,

It seems it will be like that, although the data is there. A little proof of concept: I’ve finished cloning the disk with ddrescue and I’m running testdisk to see how the partitions show up and whether any are marked deleted.

Update:
I first created a GPT partition table on /dev/sdc using GParted, since the source disk is also GPT.
Both disks and their partitions were unmounted.
# ddrescue -f -r3 /dev/sdb /dev/sdc mapfile
…waited 24 hours…
Afterwards, GParted shows /dev/sdc as "Unknown (PMBR)" and no partitions.

# parted -l
...
Error: The backup GPT table is corrupt, but the primary appears OK, so that will be used.

I must repeat the whole process in another way.

The data is there and the files weren’t marked for deletion, although /dev/sdb5 doesn’t show anything right now.

I don’t know if Btrfs keeps an index of the files somewhere on the disk that I could restore, or some mechanism to recreate the directory of files and folders from the surface of the disk. Despite having made the mistake of running mkfs, I hope I do not have to resort to tougher solutions like photorec or foremost.

I close this thread, Thank you all!