I was doing a couple of things, among them deleting a partition on an external hard drive. The computer seemed to be stuck, so I did a hard restart. Afterwards, all I could get was “emergency mode”. I tried running the suggested command for fixing the filesystem, but after that I get the same screen, except that emergency mode can no longer be entered: it gets stuck at “cannot open access to console, the root account is locked”.
Attempt 1
I found this guide: Fedora Guide, but its first step, mounting the root subvolume to /mnt, fails:
sudo mount /dev/nvme0n1p9 /mnt -o subvol=root
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/nvme0n1p9, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
When I check dmesg, the error I get is:
ext4: Unknown parameter 'subvol'
The good/hopeful part is that the root partition is fully accessible when mounted from the Disks application. For example, I can reach it via its automatically mounted location: /run/media/liveuser/d1911bf2-8c60-409a-821e-c641b1660ada
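Since Disks can mount it fine, I can at least confirm what the filesystem actually is; the “ext4: Unknown parameter 'subvol'” line suggests the kernel was trying to mount it as ext4. A quick sanity check (device name and Disks mount point below are specific to my machine):

```shell
# Confirm the filesystem type and list the Btrfs subvolumes
# (paths are from my setup; adjust to yours):
lsblk -f /dev/nvme0n1p9   # the FSTYPE column should read "btrfs"
sudo btrfs subvolume list /run/media/liveuser/d1911bf2-8c60-409a-821e-c641b1660ada
```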
I don’t know what I should be doing.
Unofficial Attempt
I also found this guide: github gist, and following it I removed the last line from /etc/fstab, but nothing changed.
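For context, on a default Fedora Btrfs install the relevant /etc/fstab entries look roughly like this (illustrative; the UUID is my partition's, and I can't say for sure this matches what the gist intended):

```
UUID=d1911bf2-8c60-409a-821e-c641b1660ada /     btrfs  subvol=root,compress=zstd:1  0 0
UUID=d1911bf2-8c60-409a-821e-c641b1660ada /home btrfs  subvol=home,compress=zstd:1  0 0
```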
Options all go before the device name in a mount command at the CLI, as does the filesystem type.
Try that as sudo mount -o subvol=root,compress=zstd:1 -t btrfs /dev/nvme0n1p9 /mnt
You may also need to use the UUID instead of the device name. It can be obtained with lsblk -f, and the command would then be formed as sudo mount -o subvol=root,compress=zstd:1 -t btrfs UUID=<your-uuid-here> /mnt
It appears that guide may have an error in the command syntax.
You can learn more with man mount which shows
mount [-fnrsvw] [-t fstype] [-o options] device mountpoint
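Also note that for Btrfs, fsck is essentially a no-op; the offline consistency check, run from your live session, would be something like this (a sketch, using the device name from your post):

```shell
# Offline consistency check for a Btrfs partition (the Btrfs analogue of fsck).
# The partition must be unmounted first; --readonly reports problems without
# writing anything to the disk.
sudo btrfs check --readonly /dev/nvme0n1p9
```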
For anyone else with a similar issue, here is my latest discovery/guess:
Previously I had moved the partitions around in order to shrink the swap partition and grow the root partition; however, Fedora apparently didn’t register the change, and it ended up creating an 8 GB zram device.
I have absolutely no idea why, or where that number comes from.
Anyway, my guess is this: given the zram swap, and given that some software [probably] has bugs or freezes the OS so that I have to hard-reboot, there are likely writes sitting in that zram that are stuck, leaking, or otherwise lost. Those memory segments are what causes fsck to fail when the system reboots.
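For what it’s worth, the zram device can be inspected from the live session to check the guess about its size and usage (output is machine-specific):

```shell
# Inspect active swap devices from the live session:
swapon --show            # lists active swap, including /dev/zram0 if present
cat /proc/swaps          # same information via procfs
# util-linux's zramctl additionally shows the compression algorithm, if installed:
command -v zramctl >/dev/null && zramctl || true
```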
Justification
I checked the available tools for determining the health of the NVMe SSD, and though that support is said to be experimental, they didn’t report any problems.
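The kind of commands I’m referring to are along these lines (the device name is illustrative, not necessarily mine):

```shell
# SSD health queries (device name is a typical example, adjust to your system):
sudo smartctl -a /dev/nvme0        # smartmontools; its NVMe support is marked experimental
sudo nvme smart-log /dev/nvme0     # nvme-cli
```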
And running fsck from a bootable USB simply fixes the boot that follows a freeze-and-hard-restart.
My impression was that, upon restart, RAM starts empty or is treated as empty; but perhaps zram, because it runs a layer of compression, fails to hand a clean state back to the OS.
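On where the 8 GB may come from: Fedora sets up zram swap through systemd’s zram-generator, and, if I read the defaults right, the size is roughly half of installed RAM capped at 8 GiB on recent releases, which would explain the number. It can be overridden or disabled with a config file:

```
# /etc/systemd/zram-generator.conf (illustrative; this mirrors what I believe
# the current Fedora default formula is; size is in MiB)
[zram0]
zram-size = min(ram / 2, 8192)
# Note: an empty /etc/systemd/zram-generator.conf disables the default zram0 device.
```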
Is the bug in zram itself? Or is a leak from another app making zram unable to compress data when writing, etc.?
Mystery continues
PS. My vocabulary perhaps isn’t on par with what is actually happening, but I think you can use your imagination as to how someone with less knowledge of OS design might describe it. In other words: feel free to correct me / edit the piece.