SSD write speed decreased significantly

I purchased a Samsung 990 Pro 2TB on 2023-05-13 and benchmarked it with KDiskMark on an empty ext4 logical volume of about 50 GB. I used KDiskMark’s pre-built profiles (Default, Peak Performance, and Real World Performance) on 2023-05-13 (in each image the header indicates the profile name):

Measurement on 2023-05-13 (almost empty)

[KDiskMark screenshots: Default, Peak Performance, and Real World Performance profiles]

Today’s measurement

Today I repeated the above process on another ext4 logical volume of about 100 GB, of which 18 GB is used.

[KDiskMark screenshots: Default, Peak Performance, and Real World Performance profiles]

  • The first measurement was done when the 2TB SSD was almost empty.
  • Today’s measurement was done when at least 560 GiB was still unallocated in the volume group, and every logical volume itself had plenty of free space, as shown below:
  $ sudo vgs
    VG       #PV #LV #SN Attr   VSize  VFree   
    vgubuntu   1  10   0 wz--n- <1.82t <560.64g
$ lsblk
zram0                                  8G                              [SWAP]                        /dev/zram0
nvme0n1                              1.8T                                                            /dev/nvme0n1
├─nvme0n1p1             ext4         768M                                                            /dev/nvme0n1p1
├─nvme0n1p2             ext4         768M 738.4M   257M  427.7M    35% /boot          FedoraBoot     /dev/nvme0n1p2
├─nvme0n1p3             ext4         768M                                             UnassignedBoot /dev/nvme0n1p3
├─nvme0n1p4             vfat         128M 127.7M  22.4M  105.3M    18% /boot/efi      EFI-SP         /dev/nvme0n1p4
└─nvme0n1p5             LVM2_member  1.8T                                                            /dev/nvme0n1p5
  ├─vgubuntu-FedoraRoot ext4         100G  98.1G  46.5G   46.6G    47% /              FedoraRoot     /dev/mapper/vgubuntu-FedoraRoot
  ├─vgubuntu-FedoraSwap swap          32G                              [SWAP]                        /dev/mapper/vgubuntu-FedoraSwap
  ├─vgubuntu-UbuntuSwap swap          32G                                                            /dev/mapper/vgubuntu-UbuntuSwap
  ├─vgubuntu-UbuntuRoot ext4          50G                                                            /dev/mapper/vgubuntu-UbuntuRoot
  ├─vgubuntu-UbuntuHome ext4          18G                                                            /dev/mapper/vgubuntu-UbuntuHome
  ├─vgubuntu-FedoraHome ext4          18G  17.5G   4.8G   11.8G    28% /home          FedoraHome     /dev/mapper/vgubuntu-FedoraHome
  ├─vgubuntu-Data       ext4         150G 147.3G 108.2G   32.6G    73% /mnt/Data      Data           /dev/mapper/vgubuntu-Data
  ├─vgubuntu-Documents  ext4         300G 294.2G 210.1G   69.6G    71% /mnt/Documents Documents      /dev/mapper/vgubuntu-Documents
  ├─vgubuntu-Media      ext4         500G 491.1G 318.9G  147.2G    65% /mnt/Media     Media          /dev/mapper/vgubuntu-Media
  └─vgubuntu-Backup     ext4         100G  97.9G  17.7G   75.2G    18% /mnt/Backup    Backup         /dev/mapper/vgubuntu-Backup
  • SMART data from today:
$ sudo smartctl -a /dev/nvme0 
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.4-200.fc39.x86_64] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke,

Model Number:                       Samsung SSD 990 PRO 2TB
Serial Number:                      S6Z2NJ0W312785Y
Firmware Version:                   1B2QJXD7
PCI Vendor/Subsystem ID:            0x144d
IEEE OUI Identifier:                0x002538
Total NVM Capacity:                 2,000,398,934,016 [2.00 TB]
Unallocated NVM Capacity:           0
Controller ID:                      1
NVMe Version:                       2.0
Number of Namespaces:               1
Namespace 1 Size/Capacity:          2,000,398,934,016 [2.00 TB]
Namespace 1 Utilization:            869,734,813,696 [869 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            002538 4331415506
Local Time is:                      Wed Feb 14 22:23:24 2024 CET
Firmware Updates (0x16):            3 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x0055):     Comp DS_Mngmt Sav/Sel_Feat Timestmp
Log Page Attributes (0x2f):         S/H_per_NS Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg Log0_FISE_MI
Maximum Data Transfer Size:         512 Pages
Warning  Comp. Temp. Threshold:     82 Celsius
Critical Comp. Temp. Threshold:     85 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     9.39W       -        -    0  0  0  0        0       0
 1 +     9.39W       -        -    1  1  1  1        0     200
 2 +     9.39W       -        -    2  2  2  2        0    1000
 3 -   0.0400W       -        -    3  3  3  3     2000    1200
 4 -   0.0050W       -        -    4  4  4  4      500    9500

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        44 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    0%
Data Units Read:                    7,176,949 [3.67 TB]
Data Units Written:                 12,791,029 [6.54 TB]
Host Read Commands:                 202,267,968
Host Write Commands:                280,092,405
Controller Busy Time:               3,197
Power Cycles:                       188
Power On Hours:                     469
Unsafe Shutdowns:                   42
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               44 Celsius
Temperature Sensor 2:               47 Celsius


We can observe that the write performance of the SSD has gone down significantly on every metric, across all three profiles, when today’s results are compared with the earlier measurements. Any idea what could be the reason?

Also, do we need to leave empty space in each logical volume for optimal performance and lifespan of the SSD? Or should we leave space unallocated in the volume group instead? If we allocate all of the unallocated space to logical volumes, but keep some percentage of each logical volume empty, will that be sufficient?

I would guess that it is just the overhead of finding an available block to write to. When the SSD is new or (nearly) empty, that is a quick operation. When many blocks are in use, it might take longer to find one that is available. I don’t know. That is just my guess.

I doubt it matters. The circuitry within the SSD that is responsible for finding available blocks and ordering them by how many times they’ve been written to (preferring blocks that have received fewer writes, i.e., wear leveling) is completely unaware of those higher-level concepts.

The SSD reserves blocks that are hidden from you to make wear leveling work well.
But it is true that the more disk space that is free, the better it is for wear leveling.
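One caveat: the controller only learns that filesystem space is free when it is reported via TRIM/discard (on Fedora this is typically handled periodically by the fstrim.timer systemd unit). As a rough sketch, using the figures from the smartctl output above, here is how much of the drive the controller could treat as extra spare area if all freed space has been trimmed:

```python
# Sketch using the smartctl figures quoted above: the fraction of the
# drive the controller could use as extra spare area, assuming all
# freed space has been reported to it via TRIM/discard.

total_bytes = 2_000_398_934_016   # smartctl "Total NVM Capacity"
used_bytes  = 869_734_813_696     # smartctl "Namespace 1 Utilization"

spare_fraction = (total_bytes - used_bytes) / total_bytes
print(f"Potential extra spare area: {spare_fraction:.1%}")  # → 56.5%
```

So on this drive, well over half the flash is currently available to the controller for wear leveling, on top of its hidden factory over-provisioning.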

What you can do is compare this smartctl line with the spec of the SSD’s endurance:

Data Units Written:                 12,791,029 [6.54 TB]

I have a dual-boot setup, so I can compare Fedora to Windows.

On my Samsung SSDs, the smartctl data matches the figure shown in the Windows Samsung Magician tool.

The data sheet for your drive states in the “Warranty” section that the 2TB drive has an endurance of 1,200TB written.

This means that you have used approximately 6.54 TB / 1,200 TB of the endurance, or about 0.55%.
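That figure can be verified from the raw counter: per the NVMe specification, one "data unit" is 1,000 sectors of 512 bytes, i.e. 512,000 bytes. A quick sketch:

```python
# Convert the raw smartctl counter to bytes written and compare it
# against the data-sheet endurance rating. Per the NVMe spec, one
# "data unit" is 1000 x 512-byte sectors = 512,000 bytes.

data_units_written = 12_791_029   # smartctl "Data Units Written"
tbw_rating_tb      = 1200         # 2TB 990 PRO endurance (data sheet)

tb_written = data_units_written * 512_000 / 1e12
print(f"Written: {tb_written:.3f} TB")                       # → 6.549 TB
print(f"Endurance used: {tb_written / tbw_rating_tb:.2%}")   # → 0.55%
```

At this rate of use, the endurance rating is nowhere near a concern.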

My smartctl report on a SATA SSD even gives a percent-life-remaining attribute (via a full report, I think).