Help me optimize my Fstab and DNF setup

Hello all,

When I bought this PC it had F39; I upgraded to F40 and then F41. Tired of chasing errors/fixes (mostly Python and NVIDIA), I re-installed a fresh F41. Should have done that from the start.

I put back the previous fstab and dnf.conf, and would like your input on optimizing them.

–fstab–

**
system default
Sabrent Rocket 4 Plus nvme
**

UUID=1edd8dd9-d030-4153-af83-e57a4e038090 /boot          ext4    defaults   0 2
UUID=83e59980-de03-4d94-ae38-03d8572e4fbc /              btrfs   subvol=/@,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=83e59980-de03-4d94-ae38-03d8572e4fbc /home          btrfs   subvol=/@home,compress=zstd:1,x-systemd.device-timeout=0 0 0
tmpfs                                     /tmp           tmpfs   defaults,noatime,mode=1777 0 0

**
storage, backups
Toshiba X300 hd (replacing with 2 seagate ironwolf)
**
UUID=07ceff63-bd43-4f83-9740-2873e34b904c /vault/safe    ext4    defaults,noatime 0 0

**
studio one (web development)
Intel 670p nvme ssd
**
UUID=73733bf6-6c44-469f-b9fb-e8d769a26574 /vault/closet  btrfs   auto nosuid,nodev,nofail,x-gvfs-show 0 0

**
studio two (audio storage)
Crucial MX500 sdd
**
UUID=7de462c1-978a-4784-b219-df081fca9e2a /vault/bunker  btrfs   auto nosuid,nodev,nofail,x-gvfs-show 0 0

**
studio three (video storage)
Crucial MX500 sdd
**
UUID=b383660e-7b2e-4c3a-9dbc-d4c9d716ba98 /vault/cellar  btrfs   auto nosuid,nodev,nofail,x-gvfs-show 0 0

–dnf.conf–
gpgcheck=True
installonly_limit=3
clean_requirements_on_remove=True
best=False
skip_if_unavailable=True
max_parallel_downloads=20
zchunk=False

Yes. I set up the studios as btrfs file systems and will use subvolumes to categorize things, and adding more drives is easier (4 more incoming). Even after reading this btrfs post I’m going for it; if it works, fine, if not, move on.
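For reference, a minimal sketch of how I plan to create subvolumes for that categorizing, assuming the volume is already mounted as above (the subvolume names below are just placeholders):

# create subvolumes inside an already-mounted btrfs volume (names are examples)
sudo btrfs subvolume create /vault/closet/projects
sudo btrfs subvolume create /vault/closet/archives

# list what exists on that volume
sudo btrfs subvolume list /vault/closet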

Thanks in advance, zz

So you are on F41 and want to make sure your fstab and dnf.conf from F39 will still work properly?

I took the liberty of editing your post to add the preformatted text tags so the fstab entry appears as seen on screen.

I then have to assume that the parts bracketed with the ** are simply notes you added to the listing and that the actual fstab appears like this:


UUID=1edd8dd9-d030-4153-af83-e57a4e038090 /boot          ext4    defaults   0 2
UUID=83e59980-de03-4d94-ae38-03d8572e4fbc /              btrfs   subvol=/@,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=83e59980-de03-4d94-ae38-03d8572e4fbc /home          btrfs   subvol=/@home,compress=zstd:1,x-systemd.device-timeout=0 0 0
tmpfs                                     /tmp           tmpfs   defaults,noatime,mode=1777 0 0
UUID=07ceff63-bd43-4f83-9740-2873e34b904c /vault/safe    ext4    defaults,noatime 0 0
UUID=73733bf6-6c44-469f-b9fb-e8d769a26574 /vault/closet  btrfs   auto nosuid,nodev,nofail,x-gvfs-show 0 0
UUID=7de462c1-978a-4784-b219-df081fca9e2a /vault/bunker  btrfs   auto nosuid,nodev,nofail,x-gvfs-show 0 0
UUID=b383660e-7b2e-4c3a-9dbc-d4c9d716ba98 /vault/cellar  btrfs   auto nosuid,nodev,nofail,x-gvfs-show 0 0

The entry for /tmp is NOT optimal since the system already mounts a tmpfs on /tmp automatically at boot time (see the check below).
The entries for / and /home appear normal, except that the default Fedora naming for the subvolumes is ‘root’ for / (you show ‘/@’) and ‘home’ for /home (you show ‘/@home’). Those names are not necessarily invalid, but they are not the Fedora defaults.
The remaining btrfs file systems appear to be one btrfs volume per drive; mounting the main volume on the drive instead of a subvolume may or may not be an issue.
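A quick way to see that systemd already handles that /tmp mount (a sketch; output will differ per system):

# show the unit systemd uses to mount a tmpfs on /tmp
systemctl cat tmp.mount

# confirm it is active
systemctl status tmp.mount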

The options for the additional btrfs volumes have a definite issue:
auto nosuid,nodev,nofail,x-gvfs-show
The space between the auto and nosuid options creates an additional field. Every fstab line must have exactly six fields, and that space makes seven.

‘auto’ is not required since it is the default when booting or running mount -a.
‘noatime’ usually should be applied only to SSDs and not to HDDs. With newer SSDs it is normally not required, but it seems a good idea to me since it greatly reduces the total number of writes to the SSD. A corrected version of one of those lines is sketched below.
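For example, the closet line could be rewritten like this (same UUID and mount point as yours, with the stray space removed, ‘auto’ dropped, and ‘noatime’ added; adjust the remaining options to taste):

UUID=73733bf6-6c44-469f-b9fb-e8d769a26574 /vault/closet  btrfs   noatime,nosuid,nodev,nofail,x-gvfs-show 0 0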

My default /etc/fstab on F41 is this:

UUID=9251dc90-a0ef-4df1-836b-a8c0ff22056e /                       btrfs   subvol=root,compress=zstd:1 0 0
UUID=19297faa-c91a-4126-8e8e-5792599d4ad7 /boot                   ext4    defaults        1 2
UUID=9393-D982          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=9251dc90-a0ef-4df1-836b-a8c0ff22056e /home                   btrfs   subvol=home,compress=zstd:1 0 0

I notice that you do not have an entry for /boot/efi, so it would appear you are using a legacy (MBR) boot and not UEFI?
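A quick way to confirm which boot mode the running system used (a sketch):

# /sys/firmware/efi only exists when the system was booted via UEFI
ls /sys/firmware/efi && echo "UEFI boot" || echo "legacy BIOS/MBR boot"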


Thank you, I couldn’t find it in the menu; I know where it is now. Sorry for the delay, the night shift ended up being a double, then home to a house full of grandkids.

Fresh and final install; all went well, but one drive would not mount. Closet is on an NVMe.

z@fedora:~$ sudo mount /dev/nvme1n1p1 /vault/closet
mount: /vault/closet: fsconfig system call failed: /dev/nvme1n1p1: Can't lookup blockdev.
       dmesg(1) may have more information after failed mount system call.

dmesg
[ 8.589864] block nvme1n1: No UUID available providing old NGUID

Gave it a new UUID… nada.
Tried btrfs rescue super-recover -v… nada.
Opened Disks and ran its file system recovery… bingo… but now it is nvme0n1. Had the same issue a while back and never got a definitive answer. It only happens with multiple NVMe drives; the ID changes.
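When the node name moves around like that, a quick way to see which /dev/nvme* device currently holds which filesystem is to list the devices with their UUIDs (a sketch; output differs per machine):

# show every block device with its filesystem type and UUID
lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT

# or query a single device
sudo blkid /dev/nvme0n1p1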

New fstab

UUID=dd0ca2ea-837a-4526-9c89-3b1b8059aca6 /              btrfs   subvol=root,noatime,compress=zstd:1 0 0
UUID=c1fe2803-b870-4ded-b16f-11c4e55bec24 /boot          ext4    defaults        1 2
UUID=52A6-BA01                            /boot/efi      vfat    umask=0077,shortname=winnt 0 2
UUID=dd0ca2ea-837a-4526-9c89-3b1b8059aca6 /home          btrfs   subvol=home,compress=zstd:1 0 0
UUID=07ceff63-bd43-4f83-9740-2873e34b904c /vault/safe    ext4    defaults 0 0
UUID=7de462c1-978a-4784-b219-df081fca9e2a /vault/bunker  btrfs   noatime,nosuid,nodev,nofail,x-gvfs-show 0 0
UUID=b383660e-7b2e-4c3a-9dbc-d4c9d716ba98 /vault/cellar  btrfs   noatime,nosuid,nodev,nofail,x-gvfs-show 0 0
UUID=3c8e89ac-b289-4adc-a0dd-d3b5a14722e4 /vault/closet  btrfs   noatime,nosuid,nodev,nofail,x-gvfs-show 0 0

Thanks, zz

I had a problem with my NVMe drives changing ID.
I only have two NVMe drives, so I gave the second drive a fixed mount point with a full path (e.g. /home/user/media/nvme1n1/), and now it does not change either.

Maybe (just asking) it is possible to mount your secondary drives at longer paths to fix the drive assignment?

I don’t think that should matter here since they have UUID= in fstab, but the device order of drives changing between boots is exactly why you should use UUIDs in fstab. I’ve been burned by that more than once.


The device name assigned by udev can change depending upon the order in which the devices are detected. Thus, with 2 SSDs it is possible for nvme0n1 and nvme1n1 to swap order, and therefore to swap the names used in /dev/.
This is the reason Fedora (and most Linux distros) uses the file system UUID in fstab by default, and why the changes you made to fstab should eliminate issues with consistently mounting the same file system at the same location. A quick verification is sketched below.
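To double-check that every UUID in the edited fstab resolves to a real filesystem, util-linux includes a verify mode for fstab (a sketch; run it after editing):

# parse /etc/fstab and report entries whose source, type, or options look wrong
sudo findmnt --verify

# add --verbose for a per-entry report
sudo findmnt --verify --verbose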


Years ago, when working in OpenStack, I found that targeting specific drives by WWN or serial number was helpful. The storage device can be moved to a different server and its reference stays the same.

lsscsi --wwn
systool -c nvme -v | grep -E 'Class Device|serial'

The rules for deriving names from the serial number or WWN get too complicated for me, so the device to use in fstab can be found with:

find /dev/disk | grep -E "${WWN}|${SERIAL}"
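Those persistent links under /dev/disk/by-id can also be used directly in fstab in place of a UUID; a sketch (the by-id name below is made up, substitute whatever the find command reports for your drive):

# example only: the nvme-eui value here is hypothetical
/dev/disk/by-id/nvme-eui.0123456789abcdef-part1  /vault/closet  btrfs  noatime,nosuid,nodev,nofail,x-gvfs-show 0 0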