Fedora 41 NVMe enumeration is different from Fedora 40 NVMe enumeration

For a while, enumeration of multiple NVMe SSDs was random when using Fedora 40. Later, those devices were (and still are) enumerated in physical order.

At this writing, Fedora 41 appears to enumerate the SSD where Anaconda installed the OS as the first one.

Are my observations consistent with OS design? Screenshots of lsblk output for each case follow:

For me those always change on boot if I happen to add disks (hot-swap HDDs, USB drives, etc.). Even when I don’t, sometimes they get swapped around on boot. Usually my main drive is on nvme0n1 and my secondary dedicated game drive is at nvme1n1, but sometimes when booting up the machine the labels get swapped. My internal fixed HDD is usually on sda, but if I happen to plug in a USB drive or turn on my external drive bay it gets set to sdb. It triggers my OCD, but I think this is normal behavior of the kernel. I have observed this on multiple distros.

On some hardware it happens on every boot, even for multiple network adapters. It really doesn’t matter as long as you address the partition in fstab by UUID instead of device name.
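As a quick sketch (the UUID, mount point, and filesystem type below are made-up examples; substitute values from blkid on your own system), a UUID-keyed fstab entry looks like:

    # stable: the UUID identifies the filesystem no matter how the device enumerates
    UUID=56a83e29-1e98-44f6-a33e-8988906ef2a1  /data  ext4  defaults  1 2
    # fragile equivalent, breaks if the device is renamed on the next boot:
    # /dev/nvme0n1p8  /data  ext4  defaults  1 2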

Some time back this was noted as normal: devices are always named in the order they are seen and configured, and that order is not always the same.

UUIDs were developed to be written to the devices and partitions so that the same device is always identified the same way.

How a device is physically named no longer matters in fstab, since the system writes the UUID for each device/partition there.

It does matter when using some utilities such as fdisk, parted, gparted, etc., since those commonly use the name from /dev to identify the device being managed. Such low-level tools are not normally used by most users, and most of them can also work with the UUID.
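To see how the transient /dev names line up with the stable UUIDs before running such a tool, something like the following works (the device names here are illustrative):

    lsblk -o NAME,UUID,MOUNTPOINTS /dev/nvme0n1
    blkid /dev/nvme0n1p1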

I remember a long time ago the /dev/ device enumerations were indeed used by things such as fstab. Eventually it was changed. I remember raging a bit at it because I had to reconfigure my custom fstab fixes (fixes that were no longer needed due to the new UUIDs :laughing: ).

Random-order NVMe SSD enumeration no longer occurs. Fedora 40 and Fedora 41 enumerate consistently, but differently from each other. With dual NVMe SSDs, the enumeration in effect when “grub.cfg” is updated determines whether the partition containing “OS version X” is hard-coded to one SSD or the other, e.g. “/dev/nvme0n1p8” vs. “/dev/nvme1n1p8”.

This behavior causes boot failures when a boot selection is made while NVMe SSD enumeration differs from what it was when “grub.cfg” was updated. The boot process then hangs after some sort of graphics processing, which can be unnerving.
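For illustration only (the partition, kernel path, and UUID here are hypothetical, not taken from my machines), the fragile versus stable forms of the kernel line in grub.cfg would look like:

    # fragile: fails if the two SSDs enumerate in the opposite order
    linux ($root)/vmlinuz root=/dev/nvme0n1p8 ro
    # stable: the UUID follows the filesystem regardless of enumeration
    linux ($root)/vmlinuz root=UUID=56a83e29-1e98-44f6-a33e-8988906ef2a1 ro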

Fedora 40 and Fedora 41 appear to have addressed the formerly random NVMe SSD enumeration in different ways. I hope the Fedora 41 NVMe SSD enumeration scheme is designed in rather than an unexpected side effect; hence the question to confirm this.

I disagree.
I have 4 HDDs in a RAID array (on F41) and the sdX naming is not 100% consistent between boots. I can tell it changes by looking at the serial numbers of the drives in the output of fdisk.
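A quicker way to spot the reordering than reading fdisk output is to print the name-to-serial mapping directly (NAME, MODEL, and SERIAL are standard lsblk columns):

    lsblk -d -o NAME,MODEL,SERIAL

Run that after two boots and compare which serial lands on which sdX.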

I only have one NVMe (M.2) drive, so that one is always named the same.

In a default configuration, the various file systems are specified in the grub.cfg file by their file system UUID using the search command, like in

search --no-floppy --fs-uuid --set=root 56a83e29-1e98-44f6-a33e-8988906ef2a1

Nowhere are device names like /dev/nvme… mentioned.
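If you want to confirm which filesystem such a UUID refers to, one way (using standard util-linux tools; the UUID is the one from the snippet above) is:

    findmnt -no SOURCE,UUID /
    lsblk -o NAME,UUID | grep 56a83e29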

Snippets follow of grub.cfg files where the same OS version has a different partition reference due to NVMe SSD enumeration differences between Fedora 40 and Fedora 41:

I really do not understand the focus on the differences between 40 & 41 and perceived differences in the naming in /dev.

It does not affect GRUB, booting, or file system mounting. Those are controlled by the UUIDs of the file systems, and naming in /dev is a moot point, as it has been for many moons.

My curiosity was piqued by this behavior, so I reached out for clarification.

Hi Ernest,

Don’t get too hung up on /dev :slight_smile:
If you want to get an idea of whether or not the disk device mappings are consistent across boots, here is how things more or less fit together:
(I’ll focus on NVMe for this case)
At the command line:

  1. lspci | grep "Non-Volatile memory controller:"
    Pay attention to the PCI address at the far left, which identifies each device on the bus
    (mine looks like this)
    02:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
    03:00.0 Non-Volatile memory controller: Sandisk Corp WD PC SN540 / Green SN350 NVMe SSD 1 TB (DRAM-less)

  2. ls -l /dev/disk/by-path
    Pay attention to the part that says pci-nnnn:nn:nn.n-nvme-n-partn and where it is linked to
    (mine looks like this)
    lrwxrwxrwx. 1 root root 13 Dec 11 12:25 pci-0000:02:00.0-nvme-1 -> ../../nvme1n1
    drwxr-xr-x. 6 root root 120 Dec 11 05:25 pci-0000:02:00.0-nvme-1-part
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:02:00.0-nvme-1-part1 -> ../../nvme1n1p1
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:02:00.0-nvme-1-part2 -> ../../nvme1n1p2
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:02:00.0-nvme-1-part3 -> ../../nvme1n1p3
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:02:00.0-nvme-1-part4 -> ../../nvme1n1p4
    lrwxrwxrwx. 1 root root 13 Dec 11 12:25 pci-0000:03:00.0-nvme-1 -> ../../nvme0n1
    drwxr-xr-x. 7 root root 140 Dec 11 05:25 pci-0000:03:00.0-nvme-1-part
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:03:00.0-nvme-1-part1 -> ../../nvme0n1p1
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:03:00.0-nvme-1-part2 -> ../../nvme0n1p2
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:03:00.0-nvme-1-part3 -> ../../nvme0n1p3
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:03:00.0-nvme-1-part4 -> ../../nvme0n1p4
    lrwxrwxrwx. 1 root root 15 Dec 11 12:25 pci-0000:03:00.0-nvme-1-part5 -> ../../nvme0n1p5

  3. ls -lh /dev/nvme*
    This basically tells you what node was assigned to the device (udev pretty much does a mknod for each device it finds)
    (mine looks like this)
    crw-------. 1 root root 238, 0 Dec 11 12:25 nvme0
    brw-rw----. 1 root disk 259, 5 Dec 11 12:25 nvme0n1
    brw-rw----. 1 root disk 259, 6 Dec 11 12:25 nvme0n1p1
    brw-rw----. 1 root disk 259, 7 Dec 11 12:25 nvme0n1p2
    brw-rw----. 1 root disk 259, 8 Dec 11 12:25 nvme0n1p3
    brw-rw----. 1 root disk 259, 9 Dec 11 12:25 nvme0n1p4
    brw-rw----. 1 root disk 259, 10 Dec 11 12:25 nvme0n1p5
    crw-------. 1 root root 238, 1 Dec 11 12:25 nvme1
    brw-rw----. 1 root disk 259, 0 Dec 11 12:25 nvme1n1
    brw-rw----. 1 root disk 259, 1 Dec 11 12:25 nvme1n1p1
    brw-rw----. 1 root disk 259, 2 Dec 11 12:25 nvme1n1p2
    brw-rw----. 1 root disk 259, 3 Dec 11 12:25 nvme1n1p3
    brw-rw----. 1 root disk 259, 4 Dec 11 12:25 nvme1n1p4

The attributes tell you how the kernel should do I/O with the node: c is character (stream) mode, b is block mode. The n,n pair is (major, minor); the major identifies the device type (all devices of the same type have the same major), and the minor distinguishes the logical units (for block devices this is usually the LUN, Logical Unit Number). Then you have the name assigned, e.g. nvme1n1p1: device_type{which_one}{which_subunit/LUN}{which_partition}.
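A convenient way to see the same major/minor pairs without reading ls output is lsblk’s MAJ:MIN column (a standard lsblk column; the device name is illustrative):

    lsblk -o NAME,MAJ:MIN,TYPE /dev/nvme0n1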

The important part when it comes to non-removable disks is whether this entire mapping remains consistent across boots, particularly if you use the dev name in /etc/fstab to map disk partitions to file systems. Linux went a bit further (they complicated the simple, IMO) and put data on the storage device itself to handle device mappings that may not be consistent across boots per udev: partition labels and/or UUIDs are the result.
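Those on-disk identifiers get their own symlink directories alongside by-path, so you can inspect them the same way:

    ls -l /dev/disk/by-uuid    # filesystem UUIDs -> current /dev names
    ls -l /dev/disk/by-label   # filesystem labels, if any are set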