Mdadm can't reassemble array

I built a RAID 5 array from three disks. Everything went fine: after about 1.5 days the array was built, and after another 12 hours I had copied some backup data to it. Then I rebooted the machine, and mdadm couldn't reassemble the array. This was because when I added the drives to the array I used the kernel device names (/dev/sdX), and the drives came up in a different order after the reboot.

How do I go about fixing this, without having to rebuild the array from scratch again?

Any help appreciated.

TIA

ken

Try mdadm --assemble --scan to reassemble the array using the on-disk metadata.

If that fails, force it and feed it the sdX values: mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sda. Obviously, double-check that the order is correct for your devices.

Once that’s done, you can save the correct order into your mdadm.conf file: mdadm --detail --scan >> /etc/mdadm/mdadm.conf

You could also list the UUIDs of the disks and specify those; then it'll always be correct. (Eyeball ls -l /dev/disk/by-uuid/ and assemble using the UUIDs.)
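As a sketch of the UUID approach: for md arrays the most direct handle is the array UUID stored in each member's superblock, which survives any /dev/sdX reshuffling. The device name /dev/md0 and the UUID below are taken from the outputs later in this thread; substitute your own.

```shell
# Read the array UUID out of the member superblocks.
sudo mdadm --examine --scan
# e.g.  ARRAY /dev/md0 metadata=1.2 UUID=de09039c:f4a9880b:fbe2b46b:064f3c5d

# Assemble by array UUID instead of /dev/sdX names, which can change on reboot.
sudo mdadm --assemble --scan --uuid=de09039c:f4a9880b:fbe2b46b:064f3c5d
```

With --scan plus --uuid, mdadm searches all block devices for members of that one array, so the enumeration order no longer matters.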

Addendum: I should note it's been years since I last used any kind of RAID setup like this - it's been big EMC setups in my world for at least 15 years, so my memory may not be bang on... it should be close enough, though. As long as the metadata on the RAID members is OK, you shouldn't have to rebuild the array at all.


The order in which the drives are seen should not affect assembling the array. There is metadata on each member device that tells the system it is a member and gives the order for assembly.

The info from @anothermindbomb above seems reasonable to me, though for the first try I would not use the --force option with the assemble command. That should be reserved for when a regular assembly fails.

Use man mdadm to review all the options and commands usable with md RAID arrays.

I use raid5 for my /home and have never had it fail to assemble on boot. I also use raid6 on my media server with the same reliability.

The mdadm --detail --scan command does not provide much info.

$ sudo mdadm --detail --scan
ARRAY /dev/md/raptor:md1 metadata=1.2 UUID=80cb19cf:a80007cf:b6120c88:2b755211

FWIW: I have RTFM, and I also have a browser tab open to it. 🙂

[root@Foghorn ~]# mdadm -v --detail --scan
INACTIVE-ARRAY /dev/md0 num-devices=3 metadata=1.2 UUID=de09039c:f4a9880b:fbe2b46b:064f3c5d
devices=/dev/sdf1,/dev/sdg1,/dev/sdh1
[root@Foghorn ~]#

One other output that might be interesting:

[root@Foghorn ~]# mdadm -v --examine /dev/disk/by-partuuid/65bc5bbb-5570-4a91-98f5-5d65cf7b5f4d /dev/disk/by-partuuid/0171909d-c458-46b0-aec2-3c177e19d21d /dev/disk/by-partuuid/f57cc601-228c-4880-98ff-95f37caccb31 | less
[root@Foghorn ~]# mdadm -v --examine /dev/disk/by-partuuid/65bc5bbb-5570-4a91-98f5-5d65cf7b5f4d /dev/disk/by-partuuid/0171909d-c458-46b0-aec2-3c177e19d21d /dev/disk/by-partuuid/f57cc601-228c-4880-98ff-95f37caccb31
/dev/disk/by-partuuid/65bc5bbb-5570-4a91-98f5-5d65cf7b5f4d:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : de09039c:f4a9880b:fbe2b46b:064f3c5d
Name : Foghorn:0 (local to host Foghorn)
Creation Time : Sun Feb 8 15:22:16 2026
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 15627786240 sectors (7.28 TiB 8.00 TB)
Array Size : 15627786240 KiB (14.55 TiB 16.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : active
Device UUID : 61a06166:b001a0ab:7cf9c56e:3f2d3ed2

Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 11 08:57:02 2026
Bad Block Log : 512 entries available at offset 40 sectors
Checksum : 29a00de4 - correct
Events : 23113

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/disk/by-partuuid/0171909d-c458-46b0-aec2-3c177e19d21d:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : de09039c:f4a9880b:fbe2b46b:064f3c5d
Name : Foghorn:0 (local to host Foghorn)
Creation Time : Sun Feb 8 15:22:16 2026
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 15627786240 sectors (7.28 TiB 8.00 TB)
Array Size : 15627786240 KiB (14.55 TiB 16.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : clean
Device UUID : 741f6bbe:4d08b6e9:3fa52e4f:a98499d5

Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 11 09:01:37 2026
Bad Block Log : 512 entries available at offset 40 sectors
Checksum : a3819809 - correct
Events : 23162

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/disk/by-partuuid/f57cc601-228c-4880-98ff-95f37caccb31:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : de09039c:f4a9880b:fbe2b46b:064f3c5d
Name : Foghorn:0 (local to host Foghorn)
Creation Time : Sun Feb 8 15:22:16 2026
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 15627786240 sectors (7.28 TiB 8.00 TB)
Array Size : 15627786240 KiB (14.55 TiB 16.00 TB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=0 sectors
State : active
Device UUID : 363665dc:955cfb64:4d58fee8:1ce09339

Internal Bitmap : 8 sectors from superblock
Update Time : Wed Feb 11 09:01:08 2026
Bad Block Log : 512 entries available at offset 40 sectors
Checksum : 3a8a1171 - correct
Events : 23158

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 2
Array State : A.A ('A' == active, '.' == missing, 'R' == replacing)
[root@Foghorn ~]#

Note the Array State lines for each drive.

Finally:

[root@Foghorn ~]# mdadm -v --assemble /dev/md0
mdadm: looking for devices for /dev/md0
mdadm: /dev/disk/by-partuuid/65bc5bbb-5570-4a91-98f5-5d65cf7b5f4d is busy - skipping
mdadm: /dev/disk/by-partuuid/0171909d-c458-46b0-aec2-3c177e19d21d is busy - skipping
mdadm: /dev/disk/by-partuuid/f57cc601-228c-4880-98ff-95f37caccb31 is busy - skipping
[root@Foghorn ~]#

After I stopped the array, then tried to reassemble:

[root@Foghorn ~]# mdadm -v --assemble /dev/md0
mdadm: looking for devices for /dev/md0
mdadm: /dev/disk/by-partuuid/65bc5bbb-5570-4a91-98f5-5d65cf7b5f4d is identified as a member of /dev/md0, slot 1.
mdadm: /dev/disk/by-partuuid/0171909d-c458-46b0-aec2-3c177e19d21d is identified as a member of /dev/md0, slot 0.
mdadm: /dev/disk/by-partuuid/f57cc601-228c-4880-98ff-95f37caccb31 is identified as a member of /dev/md0, slot 2.
mdadm: added /dev/disk/by-partuuid/65bc5bbb-5570-4a91-98f5-5d65cf7b5f4d to /dev/md0 as 1 (possibly out of date)
mdadm: added /dev/disk/by-partuuid/f57cc601-228c-4880-98ff-95f37caccb31 to /dev/md0 as 2 (possibly out of date)
mdadm: added /dev/disk/by-partuuid/0171909d-c458-46b0-aec2-3c177e19d21d to /dev/md0 as 0
mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.
[root@Foghorn ~]#

I really don't know what more to try. I don't really care if I trash the array, since at present it only holds a backup. But the two-day rebuild plus the one-day data copy is kind of frustrating.

Thanks for the replies,

ken

That last line says device 0 is good. The other two show as (possibly out of date) and failed to be activated.

The quickest and safest way at this point is probably to delete the array completely and rebuild it.

Another option may be to assemble it incrementally, starting with device 0 and then adding one of the other drives to see if it will activate. If that gets the array running, wait for the rebuild/sync to complete before adding the third device.
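A sketch of that incremental approach, assuming the device names from the earlier output (check mdadm --examine yourself to see which member has the newest Events count; the names below are placeholders). Note a 3-disk RAID 5 needs at least two members to start, and --force is what lets mdadm accept a member whose event count is slightly behind:

```shell
# Stop the half-assembled array first.
sudo mdadm --stop /dev/md0

# Start degraded with the freshest member plus one stale member; --force
# accepts the "(possibly out of date)" event count, --run starts it degraded.
sudo mdadm --assemble --force --run /dev/md0 /dev/sdg1 /dev/sdh1

# Confirm the array is active and watch the resync progress.
cat /proc/mdstat

# Only after the sync completes, give back the third member.
sudo mdadm --manage /dev/md0 --add /dev/sdf1
```

Forcing in the member with the older event count can lose the last writes that were in flight, which is why the freshest device should anchor the assembly.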

I note that you created partitions on each device (devices=/dev/sdf1,/dev/sdg1,/dev/sdh1).
For RAID this is not required, as you can see here:
devices=/dev/sda,/dev/sdc,/dev/sdd
You can then partition the RAID device as a whole to create the file system.

 $ lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda                     8:0    0  3.6T  0 disk  
└─md127                 9:127  0  7.3T  0 raid5 
  └─fedora_raid1-home 252:1    0  7.3T  0 lvm   /home
sdc                     8:32   0  3.6T  0 disk  
└─md127                 9:127  0  7.3T  0 raid5 
  └─fedora_raid1-home 252:1    0  7.3T  0 lvm   /home
sdd                     8:48   0  3.6T  0 disk  
└─md127                 9:127  0  7.3T  0 raid5 
  └─fedora_raid1-home 252:1    0  7.3T  0 lvm   /home

When posting long outputs, please use the preformatted-text button (</>) instead of the block-quote button ("), so that what we see retains the on-screen formatting and does not wrap lines or collapse runs of whitespace to a single space. Scroll bars are added so long output can still be viewed.

Above is the result of the </> button and below is the block quoted equivalent

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk
└─md127 9:127 0 7.3T 0 raid5
└─fedora_raid1-home 252:1 0 7.3T 0 lvm /home
sdb 8:16 0 7.3T 0 disk
└─sdb1 8:17 0 7.3T 0 part
sdc 8:32 0 3.6T 0 disk
└─md127 9:127 0 7.3T 0 raid5
└─fedora_raid1-home 252:1 0 7.3T 0 lvm /home
sdd 8:48 0 3.6T 0 disk
└─md127 9:127 0 7.3T 0 raid5
└─fedora_raid1-home 252:1 0 7.3T 0 lvm /home

OK. I bit the bullet, and I am creating a new array. The question now is: how do I label the drives in the array so that I don't have the same naming problem, i.e. /dev/sdc, sdd, and sde, which can and probably will change on reboot?

TIA

ken

Don't.
As you saw from the --detail output, the array metadata already identifies the members and their assembly order, and adding your own redundant labels may cause problems.
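If what you want is simply a stable name for the assembled array at boot, the usual approach is to record the array (by its UUID, not by member names) in mdadm.conf. A sketch; the config path varies by distro, and the ARRAY line shown is the form produced from the UUID seen earlier in this thread:

```shell
# Append the current array definition; the file is /etc/mdadm.conf on
# Fedora/RHEL and /etc/mdadm/mdadm.conf on Debian-family systems.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

# The appended line pins the name to the array UUID, e.g.:
# ARRAY /dev/md0 metadata=1.2 UUID=de09039c:f4a9880b:fbe2b46b:064f3c5d

# On Fedora, regenerate the initramfs so the definition is seen at boot.
sudo dracut --force
```

Because the ARRAY line matches on UUID, the kernel's sdX enumeration order on any given boot is irrelevant.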

I suspect the actual problem was caused by a reboot/power cycle while the array was actively writing data (each block written spans all devices), so the members were out of sync at the moment the write was interrupted.