I have a 4-drive eSATA enclosure connected to a Fedora 31 server, with three 1.5 TB drives and one 2 TB drive. I created a RAID1 array following this excellent Tecmint tutorial, using --raid-devices=4
. Well, that doesn't automatically create mirrored two-drive pairings: the array shows only 1.4 TB available. From df -h
:
/dev/md0 1.4T 425G 880G 33% /esata
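For context, as I now understand it the 1.4 TB figure is exactly what RAID1 gives: a mirror's usable capacity is the size of its smallest member, regardless of how many members mirror it. A quick sanity check with the member sizes from the mdadm -E output further down:

```shell
#!/bin/sh
# Member sizes in GiB, per mdadm -E: three ~1397 GiB members, one ~1862 GiB.
small=1397
# RAID1 usable capacity = smallest member, no matter how many mirrors
raid1=$small
# A 4-member RAID10 over the same drives would give 2 x smallest instead
raid10=$((2 * small))
echo "raid1 usable: ${raid1} GiB, raid10 usable: ${raid10} GiB"
```

So the extra ~460 GiB on the 2 TB drive is simply unused (visible as the large "after=" value in its Unused Space line below).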
then:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.4T 0 disk
└─sda1 8:1 0 1.4T 0 part
└─md0 9:0 0 1.4T 0 raid1
sdb 8:16 0 1.4T 0 disk
└─sdb1 8:17 0 1.4T 0 part
└─md0 9:0 0 1.4T 0 raid1
sdd 8:48 0 1.4T 0 disk
└─sdd1 8:49 0 1.4T 0 part
└─md0 9:0 0 1.4T 0 raid1
sde 8:64 0 4.9T 0 disk
├─sde1 8:65 0 2M 0 part
├─sde2 8:66 0 476M 0 part /boot
└─sde3 8:67 0 3.3T 0 part
sdf 8:80 0 59.8G 0 disk
└─sdf1 8:81 0 59.8G 0 part
sdg 8:96 0 1.8T 0 disk
└─sdg1 8:97 0 1.8T 0 part
└─md0 9:0 0 1.4T 0 raid1
sr0 11:0 1 1024M 0 rom
And:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdg1[3] sdd1[2] sdb1[1]
1465005464 blocks super 1.2 [4/4] [UUUU]
bitmap: 0/11 pages [0KB], 65536KB chunk
unused devices: <none>
and:
mdadm -E /dev/sd[a-b]1 /dev/sdg1 /dev/sdd1
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 88b9fcb6:52d0f235:849bd9d6:c079cfc8
Name : ourserver:0 (local to host ourserver)
Creation Time : Fri Mar 13 16:46:35 2020
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 2930010928 (1397.14 GiB 1500.17 GB)
Array Size : 1465005440 (1397.14 GiB 1500.17 GB)
Used Dev Size : 2930010880 (1397.14 GiB 1500.17 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=48 sectors
State : clean
Device UUID : 7df3d233:060aaac3:04eb9f3a:65a9119e
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 14 08:32:32 2020
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : bbb40149 - correct
Events : 20558
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 88b9fcb6:52d0f235:849bd9d6:c079cfc8
Name : ourserver:0 (local to host ourserver)
Creation Time : Fri Mar 13 16:46:35 2020
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 2930010928 (1397.14 GiB 1500.17 GB)
Array Size : 1465005440 (1397.14 GiB 1500.17 GB)
Used Dev Size : 2930010880 (1397.14 GiB 1500.17 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=48 sectors
State : clean
Device UUID : 434684bb:d297cd17:f5391b7b:0d73e9d7
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 14 08:32:32 2020
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 11dbfa76 - correct
Events : 20558
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 88b9fcb6:52d0f235:849bd9d6:c079cfc8
Name : ourserver:0 (local to host ourserver)
Creation Time : Fri Mar 13 16:46:35 2020
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 3906762928 (1862.89 GiB 2000.26 GB)
Array Size : 1465005440 (1397.14 GiB 1500.17 GB)
Used Dev Size : 2930010880 (1397.14 GiB 1500.17 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=976752048 sectors
State : clean
Device UUID : 45a47922:251b01e7:a920b5ef:aec34c43
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 14 08:32:32 2020
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 623a20a2 - correct
Events : 20558
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 88b9fcb6:52d0f235:849bd9d6:c079cfc8
Name : ourserver:0 (local to host ourserver)
Creation Time : Fri Mar 13 16:46:35 2020
Raid Level : raid1
Raid Devices : 4
Avail Dev Size : 2930012909 (1397.14 GiB 1500.17 GB)
Array Size : 1465005440 (1397.14 GiB 1500.17 GB)
Used Dev Size : 2930010880 (1397.14 GiB 1500.17 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=2029 sectors
State : clean
Device UUID : 9f705e06:0b9a6d1a:fe4a0368:8a279a1a
Internal Bitmap : 8 sectors from superblock
Update Time : Sat Mar 14 08:32:32 2020
Bad Block Log : 512 entries available at offset 16 sectors
Checksum : 8eeef44d - correct
Events : 20558
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
I've seen a user on Server Fault, and another elsewhere on Stack Exchange, recommend running mdadm --assemble --update=devicesize /dev/md0
, which I did, followed by mdadm -G /dev/md0 -z max
, but the component size is still the same:
mdadm --assemble --update=devicesize /dev/md0 /dev/sd[a-b]1 /dev/sdg1 /dev/sdd1
mdadm: /dev/md0 has been started with 4 drives.
mdadm: component size of /dev/md0 unchanged at 1465005464K
How would I adapt this SF post on growing a RAID 1 to a RAID 10, or otherwise simply end up with a mirrored array that consists of 2 drives?
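If an in-place conversion isn't possible, I'm considering copying the 425 G of data elsewhere, tearing the array down, and rebuilding it as two separate 2-drive mirrors. A sketch of what I believe the steps would be (not yet run; with DRY_RUN=1, the default here, the commands are only printed, and the device names are the ones from my lsblk output above):

```shell
#!/bin/sh
# Rebuild as two 2-drive RAID1 arrays. This destroys the existing array,
# so everything on /esata must be backed up first.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run umount /esata
run mdadm --stop /dev/md0
# Clear the old RAID superblocks on the members (metadata only)
run mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sdg1
# Pair the equal-sized 1.5 TB partitions together; sdg1 (1.8 TiB) mirrors
# sdd1 and loses its extra capacity to the smaller partner
run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
run mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sdg1
```

Is that the right approach, or is there a safer path that avoids recreating the arrays?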