I am testing a specific installation method for our project, based on Fedora 35, and I have run into a problem installing on a UEFI system with the / partition on RAID1.
Installation Process
The installation process starts with Packer. Packer installs F35 into a VM using a simple kickstart, then creates rootfs.tar.gz from /, excluding some filesystems (e.g. /proc and /dev).
The second step is partitioning a dedicated server, mounting the filesystems, and extracting the rootfs.tar.gz created in the first step onto the new root.
After that I install grub2 in a chroot, generate /etc/fstab, and reboot the server.
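Concretely, the deploy step is roughly the following sketch (the device names /dev/md0 for the RAID1 root and /dev/sda1 for the ESP are placeholders for my real layout; since F34 the unified grub.cfg lives at /boot/grub2/grub.cfg even on UEFI):

    # assemble/mount the target filesystems
    mount /dev/md0 /mnt/newroot
    mkdir -p /mnt/newroot/boot/efi
    mount /dev/sda1 /mnt/newroot/boot/efi
    # unpack the image built by Packer
    tar -xzpf rootfs.tar.gz -C /mnt/newroot
    # bind the pseudo-filesystems and regenerate the grub config
    for fs in proc sys dev; do mount --rbind "/$fs" "/mnt/newroot/$fs"; done
    chroot /mnt/newroot grub2-mkconfig -o /boot/grub2/grub.cfg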
Error
When I try to install F35 on a UEFI server with / on RAID1, boot fails with an error that the switch_root operation is impossible.
First, mdadm was not installed in the initramfs. Okay, I rebuilt the initramfs, and now mdadm is available there.
Second, I checked whether I can assemble the RAID1 holding /: yes, I can. I also added an /etc/mdadm.conf describing the array.
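For reference, a typical way to generate that file from the assembled array:

    mdadm --assemble --scan                   # confirm the array assembles
    mdadm --detail --scan >> /etc/mdadm.conf  # record it for boot time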
Unfortunately, none of this has helped.
Summary
So, I have a few questions:
What should I do to get the RAID assembled automatically in the initramfs during boot?
How do I build the initramfs correctly? I run a plain dracut --force before creating the rootfs in Packer, but it does not add mdadm to the initramfs (see the sketch below).
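For context, this is the current invocation plus two standard dracut variants I have not tried yet (whether they fix my case is a guess on my part):

    # what I run now; in host-only mode dracut skips mdraid
    # because the Packer build VM has no RAID device
    dracut --force
    # variants that pull the RAID support in explicitly
    dracut --force --add mdraid
    dracut --force --no-hostonly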
P.S. I do not know which logs you would like to see. Please ask and I will provide whatever you need. I have attached some files to clarify the situation.
I don’t know Packer (though it looks interesting from the documentation), and I’m not sure exactly what you are doing.
But I do know we currently have a bug when installing Fedora Server on RAID in a UEFI boot system.
A full-featured Server system has:
P1: EFI partition, mounted at /boot/efi
P2: RAID member partition, formatted as XFS (or ext4) and mounted at /boot
P3: LVM partition on RAID
- Logical Volume therein, mounted at /
I suppose you will not use LVM, but put everything on an XFS-formatted standard partition. So you would manually create:
P1: EFI partition, mounted at /boot/efi
P2: standard partition on RAID, formatted as XFS, mounted at /
You can use Anaconda to partition it that way.
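If you partition manually instead, the equivalent layout could be created roughly like this (a sketch; /dev/sda and /dev/sdb are placeholders, ef00/fd00 are the sgdisk type codes for ESP and Linux RAID):

    # GPT layout on both disks: ESP + RAID member
    sgdisk -n 1:0:+512M -t 1:ef00 /dev/sda
    sgdisk -n 2:0:0     -t 2:fd00 /dev/sda
    sgdisk -n 1:0:+512M -t 1:ef00 /dev/sdb
    sgdisk -n 2:0:0     -t 2:fd00 /dev/sdb
    # RAID1 for / and the filesystems on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.fat -F32 /dev/sda1
    mkfs.xfs /dev/md0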
The problem is that as soon as /boot and/or / is on a RAID partition, Fedora will not boot, simply because of a bug. We are in the process of analyzing the details; I don’t know how long the fix will take.
Probably the best option for you is to install without RAID for the time being and migrate to RAID once the bug is fixed.
I found that Fedora could not be installed to RAID back in the version 20s. What I chose to do was install the OS on a single drive and keep all the data on a RAID. Losing the OS drive doesn’t cost anything; it’s just a matter of updating all the pointers to the RAID drives.
As to why Fedora would not boot in a RAID environment: it was found that Fedora could not determine which RAID disk it wanted to boot from. That conflict was, to my knowledge, never resolved.
I’m a member of the Server Working Group, which maintains and supports the Server Edition, and I am currently engaged in analyzing exactly this bug and getting a fix. The bug is tied to GPT partitioning and its special partitions (biosboot and EFI, respectively), by the way. With MBR, software RAID works without any problems.
Thanks for your offer. I’ll ask as soon as we need more information. Would you be able to test the bug fix with your particular setup once we have it ready?
Thanks! I can’t find you in the Fedora account system, so I don’t know your email address, and I don’t use Telegram. But I’ll reply in this thread and address you directly, so you’ll get a notification.
Please give more detail.
Are you using btrfs on RAID 1?
Are you using ext4 on RAID 1?
Is /boot on plain ext4 or inside the RAID 1 array?
GRUB cannot access the RAID. Only the installed kernel (the vmlinuz image together with its initramfs) can do that, because the kernel has to load the mdadm modules before it can access the array.
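A quick way to answer those questions, and to check whether the initramfs actually contains the md support (lsinitrd ships with dracut):

    lsblk -o NAME,FSTYPE,MOUNTPOINT          # filesystem layout, where /boot lives
    cat /proc/mdstat                         # active arrays
    lsinitrd | grep -i -e mdadm -e mdraid    # is RAID support in the initramfs?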
Thanks! Yes, / is LVM over RAID. That is expected and in no way a problem. The problem is that the /boot/efi partition is of type mdadm too, instead of EFI. Fedora takes that into account, but in dual-boot configurations with other systems, even with Fedora Workstation, it may fail.
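To see the mistyped ESP, the GPT type GUID can be inspected like this (a correct ESP reports c12a7328-f81f-11d2-ba4b-00a0c93ec93b):

    lsblk -o NAME,PARTTYPE,FSTYPE,MOUNTPOINT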
After booting, then assembling and mounting the RAID partitions, you may attempt a recovery by running dracut --force to rebuild the initramfs with the modules your configuration needs. It may or may not work, but it is the only way I know to attempt recovery when the boot-time initramfs does not support your configuration.
I would recommend doing that by booting the live media, which should automatically activate the RAID arrays, and then using a chroot environment rather than the emergency recovery shell.
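As a sketch (device names are placeholders for your layout), the recovery from the live environment could look like:

    # assemble the arrays and mount the installed system
    mdadm --assemble --scan
    mount /dev/md127 /mnt
    mount /dev/sda1 /mnt/boot/efi
    for fs in proc sys dev run; do mount --rbind "/$fs" "/mnt/$fs"; done
    # rebuild every installed initramfs from inside the chroot
    chroot /mnt dracut --force --regenerate-all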
Are there ways to include the mdadm modules in the initramfs before they are needed? I create the rootfs with HashiCorp Packer and then extract it onto a ready-to-use root mountpoint.
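One idea I am considering (untested; the drop-in file name is arbitrary) is baking a dracut configuration into the rootfs so that every rebuild includes RAID support regardless of the build host:

    # /etc/dracut.conf.d/90-mdraid.conf -- shipped inside the Packer rootfs
    add_dracutmodules+=" mdraid "   # always include the mdraid module
    mdadmconf="yes"                 # copy /etc/mdadm.conf into the initramfs
    hostonly="no"                   # build a generic, not host-only, image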