The device /dev/md0 is the raw array device. Although it is possible to put the file system onto the raw device as you do (sudo mkfs.ext4 -F /dev/md0), I see two issues with that.
I always create a partition on the raw device and put the file system within that partition. As such, I have never in 30 years with Linux seen a need for the -F option.
With a single hard disk, creating a file system that way is not really an issue, but with a RAID array using multiple disks it can be a problem, since the array is identified by metadata in the beginning sectors of each physical device. A partition table keeps the file system identification separate from the array identification, but a file system written directly to the raw array device may not keep them separate that way.
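For example, something along these lines (using /dev/md0 from your command; md0p1 is the partition name the kernel creates, and ext4 just matches what you were already using):

sudo parted -s /dev/md0 mklabel gpt                    # put a GPT partition table on the array device
sudo parted -s /dev/md0 mkpart primary ext4 0% 100%    # one partition spanning the whole array
sudo partprobe /dev/md0                                # make sure the kernel sees /dev/md0p1
sudo mkfs.ext4 /dev/md0p1                              # format the partition; no -F needed here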
In fact, on my home system I have an SSD for the OS and my /home is on a raid 5 array of 3 drives.
This is what sudo fdisk -l shows for those drives.
(/dev/sdc is an 8TB backup disk that is currently not mounted)
/dev/mapper/fedora_raid1-home is an LVM LV that encompasses most of the /dev/md127 raid array (the raid array is the PV for that entire VG)
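If anyone wants to reproduce that kind of layout, the rough sequence is something like the following (the VG and LV names match mine; the LV size is only an example, pick whatever fits your array):

sudo pvcreate /dev/md127                        # the whole array becomes the LVM physical volume
sudo vgcreate fedora_raid1 /dev/md127           # one volume group on that PV
sudo lvcreate -l 80%FREE -n home fedora_raid1   # a logical volume for /home using most of the space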
It is my understanding that you should have a partition on the raid array (of whatever type you choose) for the Linux file system.
Thanks a lot for your response.
So if I understood correctly, the problem is with the way I format the drive, and I need to create a partition on the RAID array? Because I have another RAID 0 array and didn't need to create a partition for it to work.
Fedora's default file system is Btrfs, which has volume-management and RAID capabilities built in, so nothing else is needed. For speed, RAID 0 is preferable; for a mirrored configuration, RAID 10 is easier to manage than RAID 5. Disk striping and parity techniques need storage devices with similar performance, and RAID 0 really wants identical devices, otherwise part of the larger or faster device is wasted.
I think the problem is now solved thanks to @computersavvy's reply. I created a new GPT partition table on each drive and added a primary partition, so /dev/sda1 under /dev/sda, and the same for the other 3 drives…
Then, when I create the RAID array, I use these new partitions instead of the whole drives. I don't understand why that's required for RAID 5 and not RAID 0, but it works fine now. I wanted to leave this here in case someone has the same problem.
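In case it helps, the same steps written out as commands look roughly like this (sda is one of my drives; I am writing the other three as sdb–sdd, so adjust the names to your system):

for d in sda sdb sdc sdd; do
    sudo parted -s /dev/$d mklabel gpt                 # new GPT partition table on each drive
    sudo parted -s /dev/$d mkpart primary 0% 100%      # one partition covering the drive
    sudo parted -s /dev/$d set 1 raid on               # optional: mark it as a Linux RAID partition
done
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
sudo mkfs.ext4 /dev/md0                                # the file system then goes on the array (add -F only if mkfs warns)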
The comment by @marko23 about RAID 0 above leaves out the fact that RAID 0 gives you multiple failure points with no redundancy. RAID 5, which you asked about, does have redundancy.
I am not sure what you describe above.
I am always able to take a new drive and immediately add it as a member of an array. If using a previously used drive, I run dd if=/dev/zero of=/dev/sdX bs=1M count=1 to wipe out any partition table data or raid array data that may be in the first 1M bytes of the device, then add it to the array.
It seems residual data in that area may interfere with properly configuring it as a raid member.
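In other words, something like this on each previously used drive before adding it (sdX is a placeholder; triple-check the device name, these commands are destructive):

sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1     # zero the first 1 MiB (partition table and most metadata)
sudo wipefs -a /dev/sdX                            # or: remove the known filesystem/RAID signatures wipefs can find
sudo mdadm --zero-superblock /dev/sdX              # or: clear just an old md superblock, if one exists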
What I read in your description is that you partitioned the drives, then created the raid array on those partitions. That certainly does work, but it was not what I intended. I meant to create the array on the bare devices, then make a partition on that array device and format the file system within that partition.
I neglected to mention the possible need to wipe out any previous array data before creating the new array, and for that I apologize.
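Putting it together, what I had in mind was more like this (again with placeholder names for four bare drives):

sudo wipefs -a /dev/sda /dev/sdb /dev/sdc /dev/sdd     # clear any leftover signatures first
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# then partition /dev/md0 and format /dev/md0p1 as sketched earlier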
I have experimented with RAID configurations for fun … RAID 0 is really fast because reads and writes are spread across all the drives in parallel, but there is no redundancy; a single failure destroys everything. Disk striping and parity configurations need an array of similar storage devices; mixing a slow hard drive with an SSD can end up with the array kicking drives out even though the data is still there … a sophisticated method, to be used with care.