I resized my Fedora KDE partition from 250 GB to 200 GB using the live CD of another distro that I wanted to try. However, when restarting my laptop, Fedora dropped into emergency mode requesting a root password, which I do not have. I have tried adding breakpoints and/or opening a shell from the boot line, but I am always prompted for the root password.
Using the Fedora KDE live CD I tried to use fsck to repair it, and I also tried resizing the partition back. Both attempts were unsuccessful.
[root@localhost-live mnt]# btrfs check --repair /dev/nvme0n1p6
enabling repair mode
WARNING:
Do not use --repair unless you are advised to do so by a developer
or an experienced user, and then only after having accepted that no
fsck can successfully repair all types of filesystem corruption. Eg.
some software or hardware bugs can fatally damage a volume.
The operation will start in 10 seconds.
Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting repair.
Opening filesystem to check...
Checking filesystem on /dev/nvme0n1p6
UUID: 452a6d25-d9b9-4451-909e-46d5362f4aab
[1/7] checking root items
Fixed 0 roots.
[2/7] checking extents
ERROR: block device size is smaller than total_bytes in device item, has 214748364800 expect >= 268703891456
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space tree
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 121452212224 bytes used, error(s) found
total csum bytes: 116926416
total tree bytes: 903757824
total fs tree bytes: 663191552
total extent tree bytes: 87048192
btree space waste bytes: 197923466
file data blocks allocated: 1975234543616
referenced 129401790464
fdisk info:
[root@localhost-live mnt]# fdisk -l
Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WD_BLACK SN750 SE 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 01A2E97F-FC58-4813-8F55-657A835534CB
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 534527 532480 260M EFI System
/dev/nvme0n1p2 534528 567295 32768 16M Microsoft reserved
/dev/nvme0n1p3 567296 378038271 377470976 180G Microsoft basic data
/dev/nvme0n1p4 378038272 1216899071 838860800 400G Microsoft basic data
/dev/nvme0n1p5 1426614272 1428711423 2097152 1G Linux filesystem
/dev/nvme0n1p6 1428711424 1848141823 419430400 200G Linux filesystem
GPT PMBR size mismatch (4703651 != 15131635) will be corrected by write.
Disk /dev/sda: 7.22 GiB, 7747397632 bytes, 15131636 sectors
Disk model: DataTraveler 2.0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 026728CF-CAC9-4762-8E88-708161D9B0D3
Device Start End Sectors Size Type
/dev/sda1 64 4679143 4679080 2.2G Microsoft basic data
/dev/sda2 4679144 4702987 23844 11.6M EFI System
/dev/sda3 4702988 4703587 600 300K Microsoft basic data
Disk /dev/loop0: 2.14 GiB, 2295857152 bytes, 4484096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 7.81 GiB, 8390705152 bytes, 16388096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop2: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/live-rw: 7.81 GiB, 8390705152 bytes, 16388096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/live-base: 7.81 GiB, 8390705152 bytes, 16388096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/zram0: 8 GiB, 8589934592 bytes, 2097152 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
lsblk info:
[root@localhost-live mnt]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 2.1G 1 loop
loop1 7:1 0 7.8G 1 loop
├─live-rw 253:0 0 7.8G 0 dm /
└─live-base 253:1 0 7.8G 1 dm /run/media/liveuser/Anaconda
loop2 7:2 0 32G 0 loop
└─live-rw 253:0 0 7.8G 0 dm /
sda 8:0 1 7.2G 0 disk
├─sda1 8:1 1 2.2G 0 part /run/initramfs/live
├─sda2 8:2 1 11.6M 0 part
└─sda3 8:3 1 300K 0 part
zram0 252:0 0 8G 0 disk [SWAP]
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 260M 0 part
├─nvme0n1p2 259:2 0 16M 0 part
├─nvme0n1p3 259:3 0 180G 0 part
├─nvme0n1p4 259:4 0 400G 0 part
├─nvme0n1p5 259:5 0 1G 0 part
└─nvme0n1p6 259:6 0 200G 0 part
I used Manjaro live media, which had both KDE Partition Manager and GParted, and I ended up using one of the two for the resizing (shrinking). Unfortunately, I did not make any backup.
Otherwise I would not have any second thoughts and would simply go ahead and reinstall the OS. But since I have been planning to use Linux most of the time, there were some files I moved from the Fedora btrfs partition to an NTFS partition, and those files turned out not to be readable from Windows. That was supposed to be my failsafe. Unfortunately, after moving them I went straight to resizing the Fedora partition without first testing the files I had moved.
I believe both those tools can properly resize a btrfs filesystem.
Did you already reuse the space you freed? If not, you might be able to extend the partition again without resizing the filesystem. If you have already put a filesystem on it, that probably won't work.
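If you want to confirm that the freed space is still unallocated and sits right after the shrunken partition, something like this from the live system will show it (print is read-only, so it is safe to run):
sudo parted /dev/nvme0n1 unit s print free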
No, sir. I have not used that freed space.
How would I be able to extend it back? I have tried using KDE Partition Manager to do so (as per the screenshot above) but the operation failed.
That image is not showing files/folders located on a Linux system; that looks like the main btrfs volume, which contains the subvolumes used for operation.
Now you should try mounting the root file system separately using something like mount -t btrfs -o subvol=root,compress=zstd:1 UUID=<partition UUID> /mnt; then you should be able to actually see the root file system with ls /mnt and not just the main volume content.
Notice that the display there shows folders for root and home, which are the 2 subvolumes fedora creates by default. Once you can mount and view the content of each of those 2 subvolumes then you are making even more progress.
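Spelled out with the UUID from your btrfs check output above, that would look roughly like this (root first, then home the same way):
sudo mount -t btrfs -o subvol=root,compress=zstd:1 UUID=452a6d25-d9b9-4451-909e-46d5362f4aab /mnt
ls /mnt
sudo umount /mnt
sudo mount -t btrfs -o subvol=home,compress=zstd:1 UUID=452a6d25-d9b9-4451-909e-46d5362f4aab /mnt
ls /mnt
sudo umount /mnt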
Keeping in mind that the Fedora system keeps booting into emergency mode (as per the screen photo above) and that I am getting the above screenshot from the live CD, should I go ahead and try to mount the root subvolume at /mnt as you suggested? And if so, what should I do next?
I personally would NEVER attempt any kind of repair (and very little troubleshooting) using a gui tool. The file manager is not intended for that purpose and does not manage repairs.
It may be that the home file system is failing the file system check and forcing the system into emergency mode.
Try booting again, but at the GRUB menu press the e key to edit, remove rhgb quiet from the line that begins with linux, then continue booting with F10. This should show messages on the screen during the boot and will probably show what is failing before it enters emergency mode.
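On a default Fedora btrfs install the line you are editing will look roughly like this (kernel version shown as a placeholder); delete only the trailing rhgb quiet and leave the rest alone:
linux ($root)/vmlinuz-<version> root=UUID=452a6d25-d9b9-4451-909e-46d5362f4aab ro rootflags=subvol=root rhgb quiet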
[FAILED] Failed to mount sysroot.mount /sysroot. See 'systemctl status sysroot.mount' for details.
[DEPEND] Dependency failed for initrd-root-fs.target Initrd Root File System.
[DEPEND] Dependency failed for initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Do you still see the same Linux boot error after running chkdsk? The “dirty” bit in NTFS does not mean the filesystem is corrupt, only that the system did not shut down cleanly, so there is potential for corruption.
Are you mounting the ntfs filesystem at boot time in Linux (e.g., via /etc/fstab)? You could try adding noauto to the list of options for that volume.
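For example, an ntfs entry in /etc/fstab with noauto (and optionally nofail) added would look roughly like this; the UUID, mount point and driver (ntfs-3g vs ntfs3) below are only placeholders:
UUID=XXXXXXXXXXXXXXXX  /mnt/windows  ntfs-3g  defaults,noauto,nofail  0 0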
And the same errors showed, as per the above screenshot:
[FAILED] Failed to mount sysroot.mount /sysroot. See 'systemctl status sysroot.mount' for details.
[DEPEND] Dependency failed for initrd-root-fs.target Initrd Root File System.
[DEPEND] Dependency failed for initrd-parse-etc.service - Mountpoints Configured in the Real Root.
I don’t recall reading and did not note an earlier comment about resizing of any file system before the problem occurred.
Let's back up and start over.
What was done immediately before the errors began?
a. If it was resizing of an ntfs file system, was that done using windows to resize the native file system or was it done using linux?
b. Was resizing of the btrfs file system done at the same time?
If the ntfs file system was resized, was Windows booted to access that modified file system and then fully shut down before booting Linux again?
Your errors and problems may have been triggered by improper changes in file systems.
What happens if you comment out that ntfs mount in fstab then boot?
At this point I would comment out the ntfs file system from fstab, then focus on getting fedora to boot properly before introducing another variable such as the ntfs drive/partition. I would also very carefully revert any file system changes made that may have triggered the problem. Note that writing to a modified file system such as appears to have been done with
Thanks for your continued support and patience. I do appreciate it. I am really sorry about not mentioning the resizing of the ntfs partition earlier. I thought it did not matter for the Fedora system.
So, answering your first question, I resized both the ntfs and btrfs partitions at the same time using KDE Partition Manager or GParted from the Manjaro live media. I also tried moving the ext4 partition but that failed.
And answering question two: after that resizing I did not actually boot into Windows before booting Fedora. However, the system has been fully shut down since then.
And as of now, I think I have reverted all the changes, so the file system is as it was before I ran into this issue.
Also, after commenting out the ntfs file system in fstab, the error for the btrfs partition still appears:
[  OK  ] Finished systemd-fsck-root.service - File System Check on /dev/disk/by-uuid/452a6d25-d9b9-4451-909e-46d5362f4aab.
         Mounting sysroot.mount - /sysroot...
[   32.035936] BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
[   32.035970] BTRFS info (device nvme0n1p6): using free space tree
[   32.037459] BTRFS error (device nvme0n1p6): device total_bytes should be at most 214748364800 but found 268703891456
[   32.037478] BTRFS error (device nvme0n1p6): failed to read chunk tree: -22
[   32.037774] BTRFS error (device nvme0n1p6): open_ctree failed
[FAILED] Failed to mount sysroot.mount - /sysroot.
See 'systemctl status sysroot.mount' for details.
[DEPEND] Dependency failed for initrd-root-fs.target Initrd Root File System.
[DEPEND] Dependency failed for initrd-parse-etc.service - Mountpoints Configured in the Real Root.
[ OK ] Stopped target basic.target Basic System.
The boot messages suggest the partition revert didn’t actually happen so we need to see the current partition map:
sudo fdisk -l /dev/nvme0n1
And then the super block for the btrfs:
sudo btrfs insp dump-s /dev/nvme0n1p6
The btrfs dev_item.total_bytes value needs to be equal to or smaller than the partition size in bytes. To find the partition size in bytes, multiply the value in the Sectors column by 512. If the partition is smaller than the btrfs dev_item.total_bytes, that explains the problem, and the partition needs to be resized before the btrfs will mount again.
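In your case the numbers from the output above already tell the story, assuming the fdisk listing is still current:
419430400 sectors x 512 = 214748364800 bytes (exactly 200 GiB) for /dev/nvme0n1p6
dev_item.total_bytes = 268703891456 bytes (about 250.25 GiB)
The partition is therefore roughly 50 GiB smaller than what the btrfs metadata expects, which matches the 250 GB to 200 GB shrink and explains the failed mount.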
Whatever tool did this has a bug; it should always fail safe. Somehow it did not shrink, or did not confirm the shrink actually completed, before it changed the partition map. Shrink on btrfs is not an offline process: the file system has to be mounted, and it involves migrating chunks from the part of the file system that is going away to the part that remains. This can take a long time, depending on how much data is in the portion of the partition being removed.
This seems to indicate that the partition may have been resized without the necessary file system resize as well (which probably should have been done first but might be possible now).
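If the freed space really is still unallocated, one possible path (only a sketch, so double-check every number against your own parted output before writing anything) is to grow the partition back so it is at least dev_item.total_bytes, mount the filesystem, and only then redo the shrink in the correct order:
sudo parted /dev/nvme0n1 resizepart 6 1953523711s
sudo mount /dev/nvme0n1p6 /mnt
sudo btrfs filesystem resize 200G /mnt
sudo umount /mnt
The end sector 1953523711 is only derived from the p6 start sector plus the 250 GB total_bytes (1428711424 + 268703891456/512 - 1); verify it before using it, and shrink the partition back down to 200 GiB only after the btrfs resize has completed successfully.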
If it is impossible to do a recovery using a Fedora live system (F38) and recover the btrfs partition, then a new install might be required.