RAID 5 array missing 2 drives after reboot

I made a software RAID 5 array with 4 drives. After rebooting the PC, the array only sees 2 of the drives and /proc/mdstat reports it as inactive.

$ cat /proc/mdstat 
Personalities : [raid0] 
md1 : active raid0 sde[0] sdf[1]
      11720780800 blocks super 1.2 512k chunks
      
md0 : inactive sdb[1](S) sda[0](S)
      7813772976 blocks super 1.2

Note that md0 is supposed to be the RAID 5 array; it was working fine before the reboot.

Here is what I get when listing all drives. Notice that sdc and sdd are supposed to belong to the RAID 5 array, yet they no longer show up as linux_raid_member.

$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME          SIZE FSTYPE            TYPE  MOUNTPOINT
sda           3.6T linux_raid_member disk  
└─md0           0B                   md    
sdb           3.6T linux_raid_member disk  
└─md0           0B                   md    
sdc           3.6T                   disk  
sdd           3.6T                   disk  
sde           5.5T linux_raid_member disk  
└─md1        10.9T ext4              raid0 /mnt/md1
sdf           5.5T linux_raid_member disk  
└─md1        10.9T ext4              raid0 /mnt/md1
.......

I created the array twice so far with the same result.

$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

and then I format it

$ sudo mkfs.ext4 -F /dev/md0

I always wait until the array is fully built before I reboot. Can anyone tell me what I am doing wrong here?
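
If it helps, I can also post what the RAID metadata on each disk looks like. As far as I understand, something like this should show whether sdc and sdd still carry an mdadm superblock:

$ sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd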

The device /dev/md0 is the raw array device. Although it is possible to put the file system onto the raw device as you do (sudo mkfs.ext4 -F /dev/md0), I see two issues there.

  1. I always create a partition within the raw device and put the file system inside that partition (see the sketch just after this list). Done that way, I have never in 30 years with Linux seen a need to use the -F option.
  2. With a single hard disk, creating a file system directly on the raw device is not really an issue, but with a RAID array using multiple disks it may be, since the array is identified by the beginning sectors of each physical device. A partition table keeps the file-system identification separate from the array identification, whereas a file system written straight onto the raw array device may not be separated that way.
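
As a sketch of what I mean, using your /dev/md0 as an example (double check device names before running anything like this):

$ sudo parted /dev/md0 mklabel gpt
$ sudo parted -a optimal /dev/md0 mkpart primary ext4 0% 100%
$ sudo mkfs.ext4 /dev/md0p1

The partition on an md device shows up with a p suffix (md0p1 here), and that is what gets formatted and mounted rather than /dev/md0 itself.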

In fact, on my home system I have an SSD for the OS and my /home is on a RAID 5 array of 3 drives.
This is what sudo fdisk -l shows for those drives.

$  sudo fdisk -l

Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VN008-2DR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdd: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VN008-2DR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sde: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000VN008-2DR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md127: 7.28 TiB, 8001302822912 bytes, 15627544576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

Disk /dev/mapper/fedora_raid1-home: 6 TiB, 6597069766656 bytes, 12884901888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

df shows this

# df
Filesystem                     1K-blocks       Used  Available Use% Mounted on
devtmpfs                            4096          0       4096   0% /dev
tmpfs                           16344700     144968   16199732   1% /dev/shm
tmpfs                            6537880       2316    6535564   1% /run
/dev/mapper/fedora-root        153708344   63946688   82384320  44% /
tmpfs                           16344700      32996   16311704   1% /tmp
/dev/sda2                         981197     193285     732616  21% /boot
/dev/sda1                         255724      14392     241332   6% /boot/efi
/dev/mapper/fedora_raid1-home 6390519720 5141456396  926924396  85% /home
tmpfs                            3268940        272    3268668   1% /run/user/1000

(/dev/sdc is an 8TB backup disk that is currently not mounted)
/dev/mapper/fedora_raid1-home is an LVM LV that encompasses most of the /dev/md127 raid array (the raid array is the PV for that entire VG)
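
For reference, that LVM layer on top of the array was created with the usual steps, roughly along these lines (the VG and LV names are specific to my system):

$ sudo pvcreate /dev/md127
$ sudo vgcreate fedora_raid1 /dev/md127
$ sudo lvcreate -L 6T -n home fedora_raid1

with the file system then created on the resulting LV.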

It is my understanding that you should have a partition on the RAID array (of whatever type you choose) to hold the Linux file system.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] 
md127 : active raid5 sdd[1] sde[3] sdb[0]
      7813772288 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 4/30 pages [16KB], 65536KB chunk

unused devices: <none>

Thanks a lot for your response.
So if I understood correctly, the problem is with the way I format the drive, and I need to create a partition on the RAID array? I ask because I have another RAID 0 array and did not need to create a partition for it to work.

Fedora's default file system is Btrfs, which incorporates its own volume-management and RAID features, so nothing else is needed. For speed RAID 0 is preferable, and RAID 10 is easier than RAID 5 for a mirrored configuration. Disk striping and parity techniques need storage devices of similar performance; RAID 0 wants identical devices, otherwise capacity is wasted.
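
For example, Btrfs can build a striped and mirrored pool directly at creation time, something along these lines (device names here are only placeholders):

$ sudo mkfs.btrfs -m raid1 -d raid10 /dev/sdw /dev/sdx /dev/sdy /dev/sdz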

Thank you. However, you are answering a question I did not ask. I am already using Btrfs on my main drive, and I need RAID 5, not any other type of RAID.

I think the problem is now solved thanks to @computersavvy's reply. I created a new GPT partition table on each drive and added a primary partition, so /dev/sda1 under /dev/sda, and similarly for the other 3 drives.

Then, when creating the RAID array, I used these new partitions instead of the whole drives. I don't understand why that is required for RAID 5 and not for RAID 0, but it works fine now. I wanted to leave this here in case someone runs into the same problem.
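
Roughly, the steps I followed were along these lines (a sketch from memory; the device names are from my machine, so adjust them for yours):

$ sudo parted /dev/sda mklabel gpt
$ sudo parted -a optimal /dev/sda mkpart primary 0% 100%
(repeated for sdb, sdc and sdd)
$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
$ sudo mkfs.ext4 /dev/md0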

The comment by @marko23 about RAID 0 above leaves out the fact that RAID 0 gives multiple failure points with no redundancy. RAID 5, which is what you asked about, does have the redundancy.

I am not sure what you describe above.
I am always able to take a new drive and immediately add it as a member of an array. If using a previously used drive, I run dd if=/dev/zero of=/dev/sdX bs=1M count=1 to wipe out any partition table data or RAID array data that may be in the first 1M bytes of the device, then add it to the array.

It seems residual data in that area can interfere with properly configuring the drive as a RAID member.
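
If you prefer something more targeted than dd, commands along these lines should also clear leftover signatures (be absolutely certain you have the right device first):

$ sudo mdadm --zero-superblock /dev/sdX
$ sudo wipefs -a /dev/sdX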

What I read in your description is that you partitioned the drives, then created the RAID array on those partitions. That certainly does work, but it was not what I intended: I meant creating the array on the bare devices, then making a partition on the array device and formatting the file system within that partition.

I neglected to mention the possible need to wipe out the previous array data before creating the new array, and for that I apologize.

I have experimented with RAID configurations for fun. RAID 0 is really fast, since reads and writes are striped across all the drives in parallel, but there is no redundancy and a single failure blows up everything. Disk striping and parity configurations need an array of similar storage devices; mixing a slow hard drive with an SSD can end up with the array rejecting drives while keeping the data. It is a sophisticated method to use with care.