Fedora 35 doesn't boot with / on RAID

Hi there!

I'm testing a specific installation method for our project, based on Fedora 35, and I've run into a problem installing on a UEFI system with the / partition on RAID1.

Installation Process

  1. The installation process starts with Packer. Packer installs F35 to a VM using a simple kickstart, then creates rootfs.tar.gz from /, excluding some filesystems (e.g. /proc or /dev).

  2. The second step is partitioning a dedicated server, mounting the filesystems, and extracting the rootfs.tar.gz created in the first step onto the new root.

  3. After that I install grub2 in a chroot, generate /etc/fstab, and reboot the server.
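In case it helps, steps 2 and 3 look roughly like this. It's only a sketch wrapped in a function; the device names (/dev/sda, /dev/sdb), the array name, and the tarball path are placeholders, not my real setup:

```shell
#!/usr/bin/env bash
# Sketch of steps 2-3. Device names, array name, and paths are
# placeholders; the function is only defined here, not run.
provision_server() {
    set -euo pipefail
    # Build the RAID1 array for / from two pre-created partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.xfs /dev/md0
    mkfs.vfat -F32 /dev/sda1                 # EFI system partition

    # Mount the new root and unpack the Packer-built tarball
    mkdir -p /mnt/newroot
    mount /dev/md0 /mnt/newroot
    mkdir -p /mnt/newroot/boot/efi
    mount /dev/sda1 /mnt/newroot/boot/efi
    tar -xzpf /root/rootfs.tar.gz -C /mnt/newroot

    # Bind-mount the API filesystems, then finish inside a chroot
    for fs in dev proc sys run; do mount --bind /$fs /mnt/newroot/$fs; done
    chroot /mnt/newroot /bin/bash -c '
        grub2-mkconfig -o /boot/grub2/grub.cfg
        dracut --force --regenerate-all
    '
}
```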


When I try to install F35 on a UEFI server with / on RAID1, I get an error about the switch-root operation being impossible.

First, mdadm was not installed in the initramfs. Okay, I rebuilt the initramfs, and now I can use mdadm there.

Second, I checked whether I could assemble my RAID1 array containing /. Yes, I can. I also added the array to /etc/mdadm.conf.
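For reference, the entry I added is of the form that `mdadm --detail --scan` prints (the UUID below is a placeholder, not my real one):

```
# /etc/mdadm.conf - lets the initramfs assemble the array at boot
# (UUID is a placeholder; use the output of `mdadm --detail --scan`)
ARRAY /dev/md0 metadata=1.2 name=localhost:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```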

Unfortunately, none of this helped.


So, I have a few questions:

  • what should I do to make the RAID assemble automatically during boot, in the initramfs?
  • how do I correctly build the initramfs? I use a simple dracut --force before creating the rootfs in Packer, but it doesn’t add mdadm to the initramfs.
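I suspect the answer involves a dracut drop-in of this kind, since (as I understand it) Fedora builds host-only initramfs images by default, so running dracut --force inside the Packer VM, where no RAID exists, would leave mdraid out. I'm not sure this is the right mechanism, though; the file name is my own choice:

```
# /etc/dracut.conf.d/90-mdraid.conf  (file name is arbitrary)
# Always include the mdraid module and copy /etc/mdadm.conf into the image
add_dracutmodules+=" mdraid "
mdadmconf="yes"
```

After that, the rebuild would have to happen inside the target root (or with --no-hostonly passed to dracut), so the image is not tailored to the build VM.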

P.S. I don't know which logs you want to see. Please ask, and I'll provide whatever you need. I've attached some files to clarify the situation.

blkid /dev/md0 (Pastebin)
/run/initramfs/rdsosreport.txt (Pastebin)

/etc/fstab (in installed system) (Pastebin)
/boot/grub2/grub2.cfg (in installed system) (Pastebin)


I don’t know Packer (though it looks interesting from the documentation), and I’m not sure exactly what you are doing.

But I know we currently have a bug with installing Fedora Server on RAID in a UEFI boot system.

A full-featured Server system has:
P1: EFI partition, mounted at /boot/efi
P2: RAID member partition, formatted as XFS (or ext4) and mounted at /boot
P3: LVM partition on RAID
- a logical volume therein, mounted at /

I suppose you will not use LVM, but put everything on an XFS-formatted standard partition. So you would manually create:
P1: EFI partition, mounted at /boot/efi
P2: standard partition on RAID, formatted as XFS, mounted at /

You can use Anaconda to partition it that way.

The problem is that as soon as /boot and/or / is on a RAID partition, Fedora will not boot, simply because of a bug. We are in the process of analyzing the details. I don’t know how long we will need to fix it.

Probably the best option for you is to install without RAID for the time being and migrate to RAID once everything is fixed.


I found that Fedora was unable to be installed to a RAID back in the version 20s. What I chose to do was install the OS on a single drive and keep all the data on a RAID. Losing the OS drive doesn’t cost anything; it’s just a matter of updating all the pointers to the RAID drives.

As to why Fedora would not boot in a RAID environment: it was found that Fedora could not determine which RAID disk it wanted to boot from. Thus a conflict appeared and was, to my knowledge, never resolved.

Thank you all for the replies!

@pboy, are you a Fedora developer? I mean, I can give you more information, just ask me c:

I’m a member of the Server Working Group that maintains and supports the Server Edition, and I am currently engaged in analyzing just this bug and getting a fix. The bug is tied to GPT partitioning and its special partitions (biosboot resp. efi), by the way. With MBR, software RAID works completely without problems.

Thanks for your offer. I’ll ask as soon as we need more information. Would it be possible for you to test the bug fix with your special setup once we have it ready?

Sure, I’ll help however I can. But it would be great if you could ping me by email or Telegram.

Thanks! I can’t find you in the Fedora accounts, so I don’t know an email address, and I don’t use Telegram. But I’ll answer in this thread and address you directly, so you’ll get a notification.


@pboy I found that this problem exists not only in Fedora but also in AlmaLinux. Maybe it helps you.

@melosmania Just a question: do you have a UEFI boot system or a BIOS boot system?


I’ve found this problem on a UEFI boot system.

Hm, could you go through

and look, and perhaps comment on which of the 3 installation variants you used and/or exactly what problems you’ve had?



Testcase 1 - OK. The OS installed, but the / mountpoint is not an mdadm RAID1; / is LVM. The system started up without problems.

Testcase 2 - OK. The system installed, but the / mountpoint is not RAID1; it is LVM over RAID1. The system started up without problems.

Testcase 3 - OK. The system installed, but the / mountpoint is not RAID1; it is LVM over RAID1. The system started up without problems.

I have a slightly different problem: the system does not boot up with plain RAID1.

Please give more detail.
Are you using btrfs on RAID1?
Are you using ext4 on RAID1?
Is /boot on plain ext4 or inside the RAID1 array?

GRUB cannot access the RAID. Only the installed kernel with its initramfs (vmlinuz plus initramfs) can do that, because the kernel has to load the mdadm modules before it can access the RAID.

Thanks! Yes, / is LVM over RAID. That is expected and in no way a problem. The problem is that the /boot/efi partition is of type mdadm too, instead of EFI. Fedora takes that into account, but in dual-boot configurations with other systems, even with Fedora Workstation, it may fail.

If it is about the aforementioned test cases, it is Fedora Server, and everything but EFI is XFS on LVM on mdraid.
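To see which type each partition actually has, something like this helps (wrapped as a function here; the device name is a placeholder):

```shell
# Sketch: list partition-type GUIDs to spot an ESP that was created
# as a RAID member. /dev/sda is a placeholder; the function is only
# defined here, not run.
show_part_types() {
    lsblk -o NAME,PARTTYPE,FSTYPE,MOUNTPOINT /dev/sda
    # A real ESP has PARTTYPE c12a7328-f81f-11d2-ba4b-00a0c93ec93b;
    # a Linux RAID member shows a19d880f-05fc-4d3b-a006-743f0f84911e
    # and FSTYPE linux_raid_member.
}
```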

You should have 3 RAID partitions:
sdx1 mounted at /boot/efi
sdx2 mounted at /boot
sdx3 mounted at /

Are all three partitions affected or “just” one of them?

I have a working Ubuntu with /boot on RAID1.

The problem was found with ext4 on RAID1.

After the system boots into the rescue shell, I do not see the md devices, but I can assemble them by hand and mount them.

After booting, assembling, and mounting the RAID partitions, you may attempt a recovery by using dracut --force to rebuild the initramfs with the modules needed to support your configuration. It may or may not work, but it is the only way I know to attempt recovery when the boot-time initramfs does not support your config.

I would recommend doing that by booting the live media, which should automatically activate the RAID arrays, and then using a chroot environment instead of working from within the emergency recovery shell.
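Roughly, that live-media recovery looks like the following; the md device and ESP names are assumptions, so check lsblk and /proc/mdstat for the real ones:

```shell
# Sketch of rebuilding the initramfs from live media via chroot.
# /dev/md127 and /dev/sda1 are placeholders; the function is only
# defined here, not run.
rebuild_initramfs_in_chroot() {
    set -euo pipefail
    mdadm --assemble --scan        # usually done by the live media already
    mount /dev/md127 /mnt
    mount /dev/sda1 /mnt/boot/efi
    for fs in dev proc sys run; do mount --bind /$fs /mnt/$fs; done
    chroot /mnt dracut --force --regenerate-all
}
```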

Are there ways to include the mdadm modules in the initramfs before they become necessary? I create the rootfs with HashiCorp Packer and then extract it to a ready-to-use root mountpoint.