So I’ve run into a pretty big issue. I moved from Arch to Fedora because I liked the idea of having a more stable and reliable system. While on Arch I set up a RAID array using mdadm and mounted it via fstab. Luckily, Fedora immediately recognized the array when I made the switch and everything seemed fine. The problem appeared after I added the array to fstab: now whenever I boot, the system drops me into emergency mode, gives me no terminal access because it says root is locked, and traps me in a loop of “press Enter to continue” that just shows the same dialogue again. I’ve made sure I’m using the right UUID and mount point by testing with
mount -a, and while in the running system it works fine. I know it’s the fstab entry, because the instant I remove the line everything works again and I’m able to boot into the system.
Now my question is, has anyone here ran into this issue, and if so, how did you manage to solve it?
Here is my fstab file; the fourth line is the one I added.
The IDs of the drives in question, obtained with blkid:
/dev/md127: LABEL="storage" UUID="e321733b-5982-4443-9081-31a69ca7f0e8" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sdb1: UUID="0fb75119-f0bf-ff16-2adb-fa0ec7c7673e" UUID_SUB="1835d63b-2193-1326-f0cb-0b4ece3b1cda" LABEL="NaClPC:storaid" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="e1872135-37c8-4e69-a945-c8d832b2ea3d"
/dev/sdc1: UUID="0fb75119-f0bf-ff16-2adb-fa0ec7c7673e" UUID_SUB="44f60c02-b2b3-077e-6aa8-a58935b798ad" LABEL="NaClPC:storaid" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="4e9f42b8-d7ac-42fd-8b9e-59378f428908"
/dev/sda1: UUID="0fb75119-f0bf-ff16-2adb-fa0ec7c7673e" UUID_SUB="6c39fe96-fb5d-6ef0-18bd-32fb101dee08" LABEL="NaClPC:storaid" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="7f74ef7a-2ab4-4d37-8c68-c36f82b3a0e3"
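One thing worth double-checking in that output: the UUID that belongs in fstab is the ext4 file system UUID reported for /dev/md127, not the linux_raid_member UUID that the three component partitions share. A quick sketch of pulling it out of a blkid-style line (the line below is copied from the output above):

```shell
# The three member partitions (sda1/sdb1/sdc1) all share one raid-member
# UUID; fstab needs the file system UUID on the assembled /dev/md127.
line='/dev/md127: LABEL="storage" UUID="e321733b-5982-4443-9081-31a69ca7f0e8" BLOCK_SIZE="4096" TYPE="ext4"'

# Extract the value of the UUID= field (not UUID_SUB=).
uuid=$(printf '%s\n' "$line" | sed -n 's/.*[[:space:]]UUID="\([^"]*\)".*/\1/p')
echo "$uuid"   # e321733b-5982-4443-9081-31a69ca7f0e8
```

In practice `blkid -s UUID -o value /dev/md127` prints the same value directly.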
Remove the line so that you can log in normally, then set a password for your root account:
sudo passwd root
Once you have done that, you will have a root password to log in with in emergency mode.
To verify the UUIDs you can use
lsblk -f and make sure the line in question really refers to the file system on the RAID array. Also, as you stated, once the line is in /etc/fstab you should make sure the file system is not already mounted, then use
mount -a to confirm it mounts properly before you attempt to reboot.
I am not sure the options you have on that line in fstab are correct. The file system is set to mount at boot, but the ‘user’ option means the mount is allowed for (and owned by) the user mounting it, which at boot is by default the root user. Maybe adding the ‘noauto’ option to that line would be all that is required.
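For reference, a minimal sketch of what that line could look like; the mount point and options here are assumptions, not copied from the screenshots:

```
# <device>                                  <mount point>  <type>  <options>        <dump> <pass>
UUID=e321733b-5982-4443-9081-31a69ca7f0e8   /mnt/storage   ext4    defaults,nofail  0      2
```

‘nofail’ tells systemd not to drop into emergency mode if the device is absent or slow to assemble at boot, which is often a better fit for a secondary data array than ‘noauto’ (which skips mounting at boot entirely). After editing, `findmnt --verify` will syntax-check the whole file before you reboot.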
So after doing
sudo passwd root and logging in during emergency mode, it would seem the error has to do with a missing dependency, though it is unclear exactly which dependency I’m missing. Again, as I said above, it mounts fine while I’m in the system; it only fails to mount during boot.
Sorry about the bad images, but this is what I have to work with.
That shows the /dev/md127 array does not properly activate.
What is the output of
You said that you are able to mount the device from the command line.
What is the output of
According to the earlier posts, you seem to be trying to mount the raw RAID array directly with that fstab entry.
Have you created a partition on the array, formatted a file system within that partition, and then tried mounting that partition?
In general, a RAID array should be treated exactly like a physical device, which implies it would normally be partitioned, a file system created in the partition, and that partition used by the system.
I use RAID 5 for my /home.
I put LVM on that array: the array itself is the PV, a VG is created on the PV, and multiple LVs are created within that VG. This separates the actual data from the physical RAID array.
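As a rough sketch of that layout (the volume group and logical volume names here are made up for illustration):

```
# The assembled array is the physical volume
sudo pvcreate /dev/md127
# One volume group on top of it
sudo vgcreate vg_raid /dev/md127
# Carve out logical volumes; file systems go on the LVs, not on md127
sudo lvcreate -L 500G -n lv_home vg_raid
sudo mkfs.ext4 /dev/vg_raid/lv_home
```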
Have you regenerated your initramfs since adding the /etc/mdadm.conf file? If not, you may need to, so that the file gets packaged into the initramfs. Without it, the RAID will not assemble during early boot.
$ sudo dracut -f
You can use
lsinitrd to verify that the file is (or isn’t) there.
$ sudo lsinitrd | grep mdadm.conf
Edit: This is just a first-time problem. In the future, when new kernels are installed, they should pick up /etc/mdadm.conf automatically.
/dev/md127 is the device for the file system that is spread across the RAID array, and it is what gets mounted. I’m not sure how your array was made or works, but mine was created with
mdadm, and this is how arrays made with it are structured.
I just noticed that your screenshot shows the array is assembled, but it is failing a file system check. Does changing the last value on your fstab line to “0” work around the problem?
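For context, that last value is the fstab “pass” field, which controls whether fsck checks the file system during boot; a sketch of the convention (the mount point is assumed):

```
# <device>                                 <mount point>  <type>  <options>  <dump> <pass>
# pass = 1: check first (root fs), 2: check after root, 0: never check at boot
UUID=e321733b-5982-4443-9081-31a69ca7f0e8  /mnt/storage   ext4    defaults   0      0
```

Setting it to 0 just skips the boot-time check; you can still run fsck manually on the unmounted array when needed.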
Edit: The problem might be related to this bug: https://webdisk.geomedia.ma/?page=001-forum.ssjs&sub=usenet_linuxdek&thread=66092
Setting it to “0” did seem to do the trick, though leaving it that way isn’t optimal. I would love to see this bug fixed for Fedora, as the one you linked is for Debian.
It looks like there is an open Red Hat bugzilla report for this here: 2166734 – e2fsprogs-1.47.0 is available