I've had good experiences with Samsung Evo SSDs, so I think it's pretty safe to run them in RAID 0.
I got a second Samsung Evo 970 Plus 1TB. Is it possible to install Fedora Silverblue onto them using RAID 0?
Do I have to reinstall, or can I do it from the desktop?
Are you attempting to RAID0 2 “new” drives, or do you have 1 new drive and your current drive to create a RAID0 config?
I currently have two Samsung EVO 860s (250GB x2) in RAID0 for /, and three HDDs for /home.
To create the RAID0 on the SSDs I had to first create the RAID0, then reinstall my OS. You can do this three ways:
You can do the RAID0 through your Motherboard BIOS *if supported
You can do this through the Fedora Installer
You can do this with a Fedora LiveUSB and create the RAID0 on the drives from the command line, roughly like the sketch below
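If you go the LiveUSB route, the command-line part looks something like this. It's only a minimal sketch; the device names /dev/nvme0n1 and /dev/nvme1n1 are placeholders for whatever your drives show up as, and XFS is just an example filesystem:

```bash
# Find the drives (names below are examples, check what lsblk reports)
lsblk -d -o NAME,MODEL,SIZE

# Stripe them into a RAID0 md array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Put a filesystem on the array and verify it
sudo mkfs.xfs /dev/md0
cat /proc/mdstat
```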
One 1TB Evo 970 Plus is mounted directly on my motherboard, the other one is on a PCIe M.2 card in my empty PCIe x16 slot. But I'm having issues.
I saved my important files and wiped the first 1TB SSD. I boot the installation USB stick, select Advanced Custom (Blivet-GUI), click Done, create a boot partition on one of the SSDs and then create a RAID 0. I'm getting an error. I thought maybe I can't boot from the RAID 0, or perhaps it's the PCIe card?
I then installed an additional Evo 860 250GB SATA SSD, created the boot partition there and the / partition on the RAID 0, and it still fails at boot.
Tomorrow my PCIe M.2 RAID card is arriving; I will try that one, maybe it's the PCIe card.
Something weird to report: I created the RAID 0 out of the two Evo 970 drives, but tried to install everything (Silverblue's boot, swap and /) onto the Evo 860 250GB drive, and it still gave me an error.
So I'm thinking it has nothing to do with the RAID itself. Perhaps just having a RAID array present is the problem, or the fact that I configured the drives manually.
I just ordered the ASUS Hyper M.2 X16 and two more 1TB SSDs, and I will report back if it works. I wish I had hardware RAID, though.
I just tried to install by booting the installation USB from the non-UEFI entry in the boot menu, which lets me install the boot loader as BIOS Boot instead of EFI. Then I get this error.
Before you can create the RAID, create a 2MB BIOS Boot partition, /boot, and /boot/efi on one drive.
Then create the RAID0 as you did in the picture (not sure about that much swap, but it's your call). My build is slightly different, but I do have a RAID0, although both drives are SATA, not PCIe. When my drives had problems booting, creating the 2MB BIOS Boot partition on the first drive solved it. (It does not show up in the picture.) Roughly, the layout looks like the sketch below.
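To spell it out, here is roughly what that layout looks like from the command line. This is only a sketch with placeholder device names and typical sizes; Blivet-GUI can create the same thing graphically:

```bash
# First drive: boot partitions plus a RAID member (device names and sizes are examples)
sudo sgdisk -n 1:0:+2M   -t 1:ef02 /dev/nvme0n1   # 2MB BIOS Boot partition
sudo sgdisk -n 2:0:+512M -t 2:ef00 /dev/nvme0n1   # EFI system partition -> /boot/efi
sudo sgdisk -n 3:0:+1G   -t 3:8300 /dev/nvme0n1   # /boot
sudo sgdisk -n 4:0:0     -t 4:fd00 /dev/nvme0n1   # rest of the drive as a RAID member

# Second drive: one big RAID member
sudo sgdisk -n 1:0:0     -t 1:fd00 /dev/nvme1n1

# Stripe the two RAID members and use the result for /
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1p4 /dev/nvme1n1p1
```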
What file system are you using? What size files are you reading/writing? You're now in rarefied air, my friend. File systems will handle that speed differently. XFS is great for this type of performance because of how it handles reads/writes; it's highly parallel.
Also, take into consideration that these drives will perform differently once they run past their write buffer. When the drive's cache fills up, performance drops until the cached data has been flushed. So the ideal situation is to have identical drives so performance hopefully scales, but with mismatched drives expect performance to be sporadic. You're only as fast as your slowest component.
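If you want to see where that falloff happens, a sustained sequential write bigger than the cache will show it. A sketch with fio; the file path and the 64G size are just examples, adjust them for your setup:

```bash
# Sequential write large enough to blow past the drive's write cache
fio --name=sustained-write --filename=/mnt/test/fio.tmp --size=64G \
    --rw=write --bs=1M --ioengine=libaio --iodepth=32 --direct=1

# Sequential read of the same file for comparison
fio --name=sustained-read --filename=/mnt/test/fio.tmp --size=64G \
    --rw=read --bs=1M --ioengine=libaio --iodepth=32 --direct=1
```

The write throughput fio reports should start high and then settle at the drive's post-cache rate.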
AMD 8-core at 3.2GHz, 64GB DDR4 3200, 2x Evo 970 Plus 1TB, RTX 2080 S. Using XFS and RAID 0. What I will do is destroy the RAID 0 array and then test each SSD separately.
I will plug one SSD directly into my motherboard and one into the ASUS Hyper card, in case it's something with the PCIe lane distribution. It shouldn't be, though, because the ASUS M.2 PCIe card is PCIe 3.0 x16 and should have enough bandwidth.
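One way to check the lane question directly is to look at the link each NVMe controller actually negotiated. The PCI address below is a placeholder; take the real one from the first command:

```bash
# Find the PCI addresses of the NVMe controllers
lspci | grep -i nvme

# Check the negotiated link for one of them (address is an example)
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
# LnkSta should report "Speed 8GT/s, Width x4" for a Gen3 x4 drive;
# a lower width or speed here would point at the slot or bifurcation settings.
```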
Edit:
Just deleted my RAID and benchmarked the drives separately. Each Evo 970 Plus is writing at only 780MB/s, while read speeds are still at 3.4GB/s.
I figured it out. I had to rearrange the Evo 970 Plus SSDs on the RAID card; their position seems to make a difference. After that, all the SSDs showed up in the BIOS.
I had to change a bunch of settings and switch my second PCIe x16 slot to 4x4 instead of 1x16;
I think that's called quad mode / PCIe bifurcation.
Then, after saving and rebooting, a BIOS option showed up to enable RAID. After enabling that and rebooting, a new option appeared called RAIDXpert2. I enabled it and created a RAID 0, but that caused my installation to fail too. The documentation says that after a reboot it will give you the option to press Ctrl+R to enter the RAID creation menu, but that's not necessary, and it didn't show up for me either.
I then deleted my array, left RAID enabled, and created the RAID 0 array in the Fedora installer, and now it all works.
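For anyone following along, the array the Fedora installer creates should be plain Linux software RAID (md), so once the system is up you can sanity-check it with the standard tools:

```bash
# The installer-created RAID0 shows up as a normal md array
cat /proc/mdstat

# Confirm both NVMe drives and the array are visible
lsblk -o NAME,MODEL,SIZE,TYPE,MOUNTPOINT

# Detailed view of the array (use the md device name that /proc/mdstat shows)
sudo mdadm --detail /dev/md127
```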
Also, I read up on using 4x 1TB Evo 970 Plus drives per ASUS Hyper PCIe card.
It doesn't look good for my CPU. Looks like I'll have to upgrade to a new 64-core Threadripper system at the end of the month to get enough PCIe lanes to run 4 SSDs per PCIe slot.
So your bandwidth is capped. It's cool though, you just need more lanes. If I'm not mistaken, Threadrippers have more than enough lanes. “P” series EPYC also has all the lanes you need at an affordable price, and slots into the same socket as 3rd gen Threadripper.
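For a rough sense of the lane math (these numbers are approximate; check your exact CPU and board manual):

```bash
# A fully populated Hyper M.2 card wants 4 drives x 4 lanes = 16 lanes,
# and the slot has to be bifurcated x4/x4/x4/x4 for all four drives to show up.
# A mainstream AM4 CPU exposes roughly 20-24 usable lanes, and the GPU already
# takes 16 of them, so there is no second fully-bifurcated x16 slot left over.
# 3rd gen Threadripper exposes around 64 usable CPU lanes, which is why
# four drives per card is realistic there.

# List the physical slots and the lane widths your board reports for them
sudo dmidecode -t slot
```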