Test speed of external SSD and USB ports

Can someone here recommend a utility to perform benchmark tests on usb ports and external drives? I found a few searching the web, but I imagine that there is one or more that work well or best with Fedora 38/GNOME 44.5.


PS - I found this code:
$ dd if=tempfile of=/dev/null bs=1M count=1024

which works great on the internal drive, but when I try to use an external drive:
$ dd if=tempfile of=/dev/sde2 bs=1M count=1024

I get this error:
dd: failed to open ‘/dev/sde2’: Permission denied

How do I get this command to offer me the password request for permission to use the external SSD?


Put sudo in front of the dd command. Keep in mind that what you’re doing here will likely blow away /dev/sde2 as you’re overwriting that partition with the contents of “tempfile”!!
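If you only want a speed number, a safer option is a read-only benchmark: read from the raw partition and discard the data, so nothing on the device is overwritten. A sketch (using the same /dev/sde2 from the question, so substitute your own device; iflag=direct bypasses the page cache so you measure the device rather than RAM):

```shell
# Read-only benchmark: pull 1 GiB off the partition and discard it.
# Nothing is written to the device, so no data is at risk.
sudo dd if=/dev/sde2 of=/dev/null bs=1M count=1024 iflag=direct
```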

bonnie++ and iozone are useful tools for benchmarking storage. We use them in my dayjob for benchmarking Ceph.


bonnie++ is in the Fedora repos. iozone is available from rpmfusion-nonfree.
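A quick-start sketch for bonnie++ (the mount point /mnt and the size are assumptions; bonnie++ wants a test size of roughly twice your RAM so the page cache can't absorb the whole run, and it refuses to run as root unless you pass -u):

```shell
# Install from the Fedora repos (iozone would need rpmfusion-nonfree enabled).
sudo dnf install bonnie++

# Benchmark the filesystem mounted at /mnt.
#   -d  directory to test in
#   -s  test file size (about 2x RAM is the usual advice)
#   -u  user to run as
bonnie++ -d /mnt -s 16G -u $USER
```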


YIKES! Thanks for that warning!

Fools rush in where angels fear to tread!



Thanks! I’ll look for Bonnie++.



This is getting old. This is the second time in two threads that a COMPLETELY innocuous post has been “flagged” by “the Bot”.


This is an automated message from Fedora Discussion to let you know that your post was hidden.


Your post was flagged as inappropriate: the community feels it is offensive, abusive, to be hateful conduct or a violation of our community guidelines.

This post was hidden due to flags from the community, so please consider how you might revise your post to reflect their feedback. You can edit your post after 5 minutes, and it will be automatically unhidden.

However, if the post is hidden by the community a second time, it will remain hidden until handled by staff.

For additional guidance, please refer to our community guidelines.

Yeah, bots aren’t perfect … @moderators

Yeah, I’m working on it. Last time, I raised the threshold, but that clearly didn’t stick. If it keeps happening, I’ll turn off the (experimental) feature entirely until it works better.


If you wish to test the speed of reading/writing to a device, you should mount the file system on the device, then test the speed of reading and writing to that file system, not to the raw device. /dev/sde2 is the raw device and, as noted, would require sudo to write to it (and writing to it would destroy the data on the partition, and likely the partition itself as well).

Mounting /dev/sde2 at /mnt then reading/writing to /mnt would perform that test.
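A sketch of that approach, assuming /dev/sde2 already holds a filesystem and /mnt is free (conv=fdatasync makes dd flush the data to the device before reporting a speed, so the number reflects the device rather than the page cache):

```shell
sudo mount /dev/sde2 /mnt

# Write test: 1 GiB to a scratch file; fdatasync flushes before dd reports.
sudo dd if=/dev/zero of=/mnt/ddtest.img bs=1M count=1024 conv=fdatasync

# Read test: drop caches first so the file is read from the device, not RAM.
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
sudo dd if=/mnt/ddtest.img of=/dev/null bs=1M count=1024

# Clean up.
sudo rm /mnt/ddtest.img
sudo umount /mnt
```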

Just be aware that no SSD can operate at full speed when it is limited by the data path, and USB paths (although improving) are still much slower than standard PCIe/SATA paths.

Beware that, apart from destroying the file system, this will not actually test the speed of writing to the device.
What happens is that the kernel copies the buffers of data into memory and slowly writes them out to the disk.
The dd command will return long before the last buffer is committed to the disk.
In your case it's only 1 GiB of data, which will usually fit in memory these days.

It is useful in the sense of a real-world scenario, but it's possible to bypass the cache with iflag=direct (and, on the write side, oflag=direct or conv=fdatasync).
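The caching effect is easy to demonstrate with the tempfile from the original command (a sketch; direct I/O requires a filesystem that supports O_DIRECT, which tmpfs does not):

```shell
# First read pulls the file into the page cache.
dd if=tempfile of=/dev/null bs=1M count=1024

# Second read is served from cache and reports near-RAM speed.
dd if=tempfile of=/dev/null bs=1M count=1024

# Direct read bypasses the cache and measures the device itself.
dd if=tempfile of=/dev/null bs=1M count=1024 iflag=direct
```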

Re: destroying the filesystem: dd will write to anything, including a file, so mounting /dev/sde2 at, say, /mnt/sde2 and then dd of=/mnt/sde2/ddtest.img would likely be a better test, because it writes through a filesystem and doesn't wreck the partition data in the process.

However, there are legitimate reasons for writing directly to a device or partition. For example, you can make an LVM PV directly from a block device, or you may want fewer variables when testing our Ceph RBDs by excluding the partition/filesystem. dd to a block device is also a good way to intentionally and quickly wipe a partition table, which is done fairly regularly in my dayjob. It can also be used to restore a backup raw disk image, which can be especially useful for migrating virtual machine volumes in a pinch, or for writing a live installation image to a USB stick.

It’s still good to (politely) warn someone in a situation like this that they could be potentially losing their data in using dd this way, in case that isn’t their intent.

Thanks! Those are very helpful posts.

To elaborate on my intentions:
I’m not overly concerned with individual component performance details. I’m more interested in ‘system’ or path performance. For example: “How long does it take to read and write files from ‘here’ to ‘there’ using the ‘system’ of components: busses, connectors, cables and hardware.” Furthermore, that interest is mostly just curiosity plus checking to see if a new-to-me computer’s specs were correctly represented. (For all the reasons mentioned above, “close” will be close enough wrt the specs.) Plus, I want to learn more about Linux. I’m not going to be changing components - other than cables - because some piece of the system is ‘too slow’.

I appreciate you all taking the time to explain this to me.


Ultimately, it's your decision since it's your system. dd is a viable way to benchmark this, but it's also a powerful tool that will definitely break things if you ask it to. dd is also only a block data transfer at a specified rate, and it doesn't do any error checking: it reads and writes the data, but it doesn't check that the data was actually read or written correctly. To oversimplify things, it's sort of the UDP method of data transfers. To that end, tools like bonnie++ are more useful in that you get data for different scenarios, like different kinds of read and write operations, which gives a more holistic picture overall. Like I said, in my dayjob we use dd, bonnie++, and iozone for profiling IOPS.

If I'm trying to benchmark transfer between two different storage mediums, say a local disk and an NFS mount (or USB disk, Ceph RBD, etc.), then I'll more often use rsync -av --stats and include a diverse fileset, with folders that hold a whole bunch of tiny files plus some larger files like ISOs and disk images. This method has a notable drawback in that rsync itself isn't the fastest transfer method overall, especially if using remote SSH, so it's not so useful for telling you what your peak capabilities are, but it can be useful for comparison and for identifying potential bottlenecks. For example, it's useful if I wanted to know how different performance might be writing to an internal SSD vs a USB-C attached SSD.
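A sketch of that kind of comparison (the paths here are hypothetical placeholders; --stats prints file counts, byte totals, and an overall transfer rate at the end of each run):

```shell
# Copy the same mixed fileset to each target and compare the stats output.

# Target 1: a USB-attached SSD mounted at /mnt/usb-ssd (assumed path).
rsync -av --stats ~/benchdata/ /mnt/usb-ssd/benchtest/

# Target 2: an NFS mount at /mnt/nfs (assumed path).
rsync -av --stats ~/benchdata/ /mnt/nfs/benchtest/
```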

Excellent information and explanation! Thanks. You’ve assessed precisely where my interests lie.

