Disabling plymouth reduces boot time by 50 percent


Disabling Plymouth on a fresh Fedora 31 installation reduces the boot time by about 50 percent on all tested systems: a desktop PC (with an Nvidia GPU), a ThinkPad T495, and a ThinkPad T440p.

The test involves:
1. Running systemd-analyze time (optional: systemd-analyze blame; systemd-analyze plot > boot.svg)
2. Adding rd.plymouth=0 plymouth.enable=0 to the kernel boot parameters
3. Re-generating the GRUB configuration
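Concretely, the steps above can be sketched as follows. This is a hypothetical helper script, not the exact commands used in the test: it assumes Fedora with grubby available and a BIOS-boot grub.cfg path, and the function names (measure, disable_plymouth) are made up for illustration.

```shell
# Sketch of the test procedure (assumes Fedora with grubby; BIOS grub.cfg path).
# Guarded so nothing runs unless this is a live systemd system with grubby.

measure() {
    systemd-analyze time                 # overall boot time
    systemd-analyze blame | head -n 20   # optional: slowest units first
    systemd-analyze plot > boot.svg      # optional: full boot timeline
}

disable_plymouth() {
    # Add the two parameters to every installed kernel entry
    sudo grubby --update-kernel=ALL --args="rd.plymouth=0 plymouth.enable=0"
    # Regenerate the GRUB config (on EFI the path is /boot/efi/EFI/fedora/grub.cfg)
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
}

if command -v grubby >/dev/null 2>&1 && [ -d /run/systemd/system ]; then
    measure
    disable_plymouth
    echo "Reboot, then run 'systemd-analyze time' again to compare."
fi
```

After rebooting, repeat the measurement and compare; grubby's --remove-args option reverses the kernel-parameter change.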

If you have a fairly fresh installation (to make it reproducible), please test this yourself. I’d love to hear whether this applies to all setups and is really related to Plymouth, or only applies to my particular hardware.

I’ve created a bug report, but it has not gained much traction yet.


“Ideally, the goal is to get rid of all flicker during startup.” – (:smirk_cat:“Spare no expenses!”)

Plymouth: link

There are actually three questions:

  1. Does plymouth slow down boot?
  2. Does plymouth-quit-wait.service slow down boot?
  3. Does systemd-analyze actually show the real time differential, e.g. between the moment GRUB initiates its boot command and the moment the GNOME login screen appears?

I tested 2, also using systemd-analyze, and came to a similar (though not identical) conclusion to yours. But then, per this bug report, I retested using a stopwatch instead. While the results show there is a time delay, it’s much smaller than what systemd-analyze reports, and they don’t reveal why the delay exists. It may not be the service unit itself causing the delay, but rather its effect on the parallelization of other units.

I suggest you ignore plymouth-quit-wait.service (keep it unmasked) and retest with and without rd.plymouth=0 plymouth.enable=0 using a stopwatch. I also suggest something like three back-to-back tests with each setting (six tests total); that way you’ve got a decently suggestive sample size.
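For averaging the three stopwatch readings per configuration, a tiny sketch (the avg helper and the sample readings are made up for illustration):

```shell
# Average a list of stopwatch readings given in seconds
avg() { printf '%s\n' "$@" | awk '{ s += $1 } END { printf "%.1f\n", s / NR }'; }

avg 34.0 34.3 34.3   # three runs, plymouth disabled -> 34.2
avg 36.8 37.1 37.1   # three runs, plymouth enabled  -> 37.0
```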

For sure, boot speeds are important. Speeding up the startup process benefits everyone, across all hardware makes/models and architectures, bare metal and VM alike. And for many users it compensates somewhat if they’re unable to get hibernation to work (too complicated, firmware bugs, kernel bugs, etc.).


You are indeed correct…

Running three back-to-back tests with Plymouth enabled and with it disabled shows that the time reported by systemd-analyze is incorrect.

Average results:

| | systemd-analyze time | manual stopwatch |
| --- | --- | --- |
| Plymouth disabled | 28.3s | 34.2s |
| Plymouth enabled | 51.2s | 37.0s |

It’s not incorrect per se; it just has a different perspective. Plymouth sticks around, active, for a while after the login window appears, so systemd is tracking its active time, but that doesn’t mean boot is delayed by the full amount. Your results do, however, indicate a roughly 3 s difference with and without Plymouth. That’s real, and I think there’s room for improvement there.
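One way to see that different perspective directly is to compare how long the Plymouth units were counted as active against the chain of units that actually gated startup. A guarded sketch (the show_plymouth_times helper name is made up; blame and critical-chain are real systemd-analyze verbs):

```shell
# Show plymouth's contribution to the systemd-analyze numbers
show_plymouth_times() {
    systemd-analyze blame | grep plymouth   # active time per plymouth unit
    systemd-analyze critical-chain          # what actually gated the default target
}

# Only meaningful on a running systemd system
if [ -d /run/systemd/system ] && command -v systemd-analyze >/dev/null 2>&1; then
    show_plymouth_times || true
fi
```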



Upstream Plymouth maintainer / Mr. Flicker-Free Boot here. Chris Murphy pointed me to this thread. As Chris already explained, the systemd-analyze time is not really reliable because it includes plymouth-quit-wait.service, which does not reflect when the login screen is ready for login.

With that said, there is some room for improvement. After Chris filed this bug, I started trying to reproduce this on my own system; and although I see no difference when I mask plymouth-quit-wait.service, I did notice two other interesting things:

  1. On my system plymouth-switch-root.service takes a bit over 2 seconds, while it should take more like 20 ms. The problem is that Plymouth works synchronously (using the classic Unix design of select/poll on a bunch of file descriptors). Just before the initrd finds my root filesystem, the i915 driver loads, and Plymouth then takes about 2 seconds to deal with all the udev events (which result in probing the monitor) and to load the initial theme. You may want to check your systemd-analyze plot output to see if you have something similar going on. I have a plan to fix this, but I’m not sure when I will get around to it.

  2. If you’ve done your installation from a live CD, you may very well have the dmraid and device-mapper-multipath packages installed. Both come with a service that is enabled by default and drags in the deprecated systemd-udev-settle.service, which takes a long time. You very likely do not need multipath (if you do, you will know), and unless you are using a RAID set managed by your disk controller or system firmware (rather than pure software mdraid), you do not need dmraid either. Even if you are using the RAID functionality of your motherboard, if it is Intel RAID / RST you still do not need dmraid. So typically you can safely run “sudo rpm -e dmraid dmraid-events device-mapper-multipath”, and this will shave a number of seconds off your boot time.

Note: do not remove dmraid if you are using a firmware-managed RAID set, or if you are not sure whether you are. Removing it when it is necessary will result in a non-booting system!
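A cautious pre-flight check before removing anything might look like this (a sketch; the preflight helper name is made up, but dmraid -r, multipath -ll, and lsblk are the standard tools these packages ship):

```shell
# Check whether dmraid / multipath are actually in use before removing them
preflight() {
    sudo dmraid -r        # "no raid disks" means no firmware RAID sets found
    sudo multipath -ll    # empty output means no multipath devices in use
    lsblk -o NAME,TYPE    # no "mpath" entries should appear in the device stack
}

# Only attempt this where dmraid is actually installed
if command -v dmraid >/dev/null 2>&1; then
    preflight || true
fi
```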


Thank you very much, Chris and Hans, for the valuable insight. I’ve been looking into the systemd-analyze documentation, and together with your information I now understand the calculation much better.

You brought up an interesting point regarding the live CD… I have a fresh notebook that I’d like to keep as “clean and lean” as possible. You mentioned that the live CD leaves anaconda, and other dependencies, in the final installation; I’ve noticed this too on my test machines. Does using the netinstall image solve these “issues”?

I believe that it should, yes. Specifically, I would expect anaconda to only add dmraid and device-mapper-multipath to the transaction if necessary. But it has been almost a decade since I last looked at the anaconda code for this.


Taking the current Workstation DVD image (live CD) and the netinstall image, I set up two test machines (both using BIOS boot), updated them to the latest packages, and compared the installed packages of the final installations.
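For reference, this kind of package-set comparison can be reproduced with something along these lines. The package lists below are tiny placeholders; on the real machines you would dump rpm -qa --qf '%{NAME}\n' | sort -u from each installation first:

```shell
# Hypothetical sketch: diff the package sets of two installations.
# Real input: rpm -qa --qf '%{NAME}\n' | sort -u > <host>-pkgs.txt on each machine.
printf '%s\n' anaconda bash dmraid kernel | sort > livecd-pkgs.txt
printf '%s\n' bash kernel | sort > netinstall-pkgs.txt

# Lines only in the second file are the livecd's additional packages
comm -13 netinstall-pkgs.txt livecd-pkgs.txt   # prints "anaconda" and "dmraid"
```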

The netinstall installation has no additional packages compared to the livecd installation.

The livecd installation contains the netinstall packages and in addition:

Additional packages

NetworkManager-team.x86_64 1:1.20.10-1.fc31
SDL.x86_64 1.2.15-42.fc31
anaconda.x86_64 31.22.6-2.fc31
anaconda-core.x86_64 31.22.6-2.fc31
anaconda-gui.x86_64 31.22.6-2.fc31
anaconda-install-env-deps.x86_64 31.22.6-2.fc31
anaconda-live.x86_64 31.22.6-2.fc31
anaconda-tui.x86_64 31.22.6-2.fc31
anaconda-user-help.noarch 26.1-10.fc31
anaconda-widgets.x86_64 31.22.6-2.fc31
authselect-compat.x86_64 1.1-2.fc31
bcache-tools.x86_64 1.0.8-16.fc31
blivet-data.noarch 1:3.1.6-1.fc31
blivet-gui-runtime.noarch 2.1.11-2.fc31
chkconfig.x86_64 1.11-5.fc31
createrepo_c.x86_64 0.15.5-1.fc31
createrepo_c-libs.x86_64 0.15.5-1.fc31
daxctl-libs.x86_64 67-1.fc31
dbus-daemon.x86_64 1:1.12.16-3.fc31
dbxtool.x86_64 8-10.fc31
device-mapper-multipath.x86_64 0.8.0-3.fc31
dmraid.x86_64 1.0.0.rc16-43.fc31
dmraid-events.x86_64 1.0.0.rc16-43.fc31
dracut-live.x86_64 049-27.git20181204.fc31.1
drpm.x86_64 0.4.1-1.fc31
efi-filesystem.noarch 4-3.fc31
efibootmgr.x86_64 16-6.fc31
fcoe-utils.x86_64 1.0.32-9.git9834b34.fc31
grub2-efi-ia32.x86_64 1:2.02-104.fc31
grub2-efi-ia32-cdboot.x86_64 1:2.02-104.fc31
grub2-efi-x64.x86_64 1:2.02-104.fc31
grub2-efi-x64-cdboot.x86_64 1:2.02-104.fc31
grub2-tools-efi.x86_64 1:2.02-104.fc31
hfsplus-tools.x86_64 540.1.linux3-18.fc31
isomd5sum.x86_64 1:1.2.3-6.fc31
kdump-anaconda-addon.noarch 005-5.20190103gitb16ea2c.fc31
kernel-modules-extra.x86_64 5.4.13-201.fc31
keybinder3.x86_64 0.3.2-7.fc31
libblockdev-btrfs.x86_64 2.23-1.fc31
libblockdev-dm.x86_64 2.23-1.fc31
libblockdev-kbd.x86_64 2.23-1.fc31
libblockdev-lvm.x86_64 2.23-1.fc31
libblockdev-mpath.x86_64 2.23-1.fc31
libblockdev-nvdimm.x86_64 2.23-1.fc31
libblockdev-plugins-all.x86_64 2.23-1.fc31
libblockdev-vdo.x86_64 2.23-1.fc31
libconfig.x86_64 1.7.2-4.fc31
libdnet.x86_64 1.12-31.fc31
libmodulemd.x86_64 2.8.3-1.fc31
libnl3-cli.x86_64 3.5.0-1.fc31
libreport-anaconda.x86_64 2.11.3-1.fc31
libteam.x86_64 1.29-2.fc31
lldpad.x86_64 1.0.1-16.git036e314.fc31
lxpolkit.x86_64 0.5.4-1.fc31.1
mactel-boot.x86_64 0.9-21.fc31
memtest86+.x86_64 5.01-27.fc31
mokutil.x86_64 1:0.3.0-14.fc31
ndctl.x86_64 67-1.fc31
ndctl-libs.x86_64 67-1.fc31
oddjob.x86_64 0.34.4-9.fc31
oddjob-mkhomedir.x86_64 0.34.4-9.fc31
python3-argh.noarch 0.26.1-13.fc31
python3-blivet.noarch 1:3.1.6-1.fc31
python3-blockdev.x86_64 2.23-1.fc31
python3-bytesize.x86_64 2.1-2.fc31
python3-kickstart.noarch 3.21-1.fc31
python3-langtable.noarch 0.0.50-1.fc31
python3-meh.noarch 0.48-1.fc31
python3-meh-gui.noarch 0.48-1.fc31
python3-ntplib.noarch 0.3.3-15.fc31
python3-ordered-set.noarch 3.1-2.fc31
python3-pid.noarch 2.2.3-3.fc31
python3-productmd.noarch 1.23-1.fc31
python3-pwquality.x86_64 1.4.2-1.fc31
python3-pydbus.noarch 0.6.0-9.fc31
python3-pyparted.x86_64 1:3.11.2-2.fc31
python3-pytz.noarch 2019.2-1.fc31
python3-pyudev.noarch 0.21.0-11.fc31
python3-requests-file.noarch 1.4.3-11.fc31
python3-requests-ftp.noarch 0.3.1-15.fc31
python3-simpleline.noarch 1.6-1.fc31
samba-libs.x86_64 2:4.11.4-0.fc31
shim-ia32.x86_64 15-8
shim-x64.x86_64 15-8
syslinux.x86_64 6.04-0.12.fc31
syslinux-extlinux.x86_64 6.04-0.12.fc31
syslinux-extlinux-nonlinux.noarch 6.04-0.12.fc31
syslinux-nonlinux.noarch 6.04-0.12.fc31
teamd.x86_64 1.29-2.fc31
tigervnc-license.noarch 1.10.1-1.fc31
tigervnc-server-minimal.x86_64 1.10.1-1.fc31
tmux.x86_64 2.9a-3.fc31
tpm2-abrmd.x86_64 2.2.0-4.fc31
tpm2-abrmd-selinux.noarch 2.1.0-3.fc31
tpm2-tools.x86_64 4.0.1-1.fc31
udisks2-iscsi.x86_64 2.8.4-3.fc31
unique.x86_64 1.1.6-23.fc31
usermode.x86_64 1.112-5.fc31

Additional services (which are visible in systemd-analyze blame)


EDIT: added services


Are all those additional packages really useless? I see createrepo_c* there, and those are indispensable for me. I am experiencing long boot times too. It would be nice to have one location/page that describes potentially redundant packages, i.e. after an installation.

It’s never that easy. I don’t use dmraid; I just run a simple F31 Workstation on a single SSD. But:

[root@hat ~]# rpm -e dmraid dmraid-events device-mapper-multipath
error: Failed dependencies:
        dmraid is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
        libdmraid.so.1()(64bit) is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
        libdmraid.so.1(Base)(64bit) is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
        device-mapper-multipath is needed by (installed) libblockdev-mpath-2.23-1.fc31.x86_64
        device-mapper-multipath is needed by (installed) fcoe-utils-1.0.32-9.git9834b34.fc31.x86_64


Thank you for running these tests; things are as I suspected them to be.

Removing the matching libblockdev plugins should be fine; libblockdev has a separate LVM plugin, and the -dm plugin really is only for dmraid. You are also likely not using Fibre Channel over Ethernet, so removing fcoe-utils should be fine too.

So I guess a better command would be “sudo dnf remove dmraid device-mapper-multipath”, which will take care of the dependencies itself. But please do take a good look at what ends up being removed, and answer ‘N’ when asked to proceed if you do not trust things.


I’ve filed a bug (against anaconda for now) to discuss how to make the dmraid and multipath services not run from a livecd install when they are not needed:
1796437 - RFE: remove or disable dmraid and device-mapper-multipath from livecd installs (if not needed)


I hope it will not spawn a heap of bugs for such a little improvement over a few additional lines in the FAQ / User Manual… But good luck!

When in doubt, adding --noautoremove is always a good idea.
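Putting the dnf suggestion and the --noautoremove tip together, a sketch (the remove_dmraid_multipath wrapper name is made up; dnf remove and --noautoremove are real):

```shell
# Remove dmraid and multipath while keeping other autoinstalled packages.
# dnf resolves the dependent libblockdev plugins and fcoe-utils itself.
remove_dmraid_multipath() {
    sudo dnf remove --noautoremove dmraid device-mapper-multipath
}
```

Run the wrapper interactively and check the package list dnf prints before answering ‘y’.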

Hi @jwrdegoede,

Did you get around to fixing #1?
I saw some commits that look like they touch the issue, but it’s hard to understand the status in general :slight_smile:


#1 is somewhat fixed: I’ve added some patches that halve the time, but there might still be some delay there.


Good to hear!

plymouth-switch-root.service typically takes 1–1.5 seconds (the latter probably when there are two displays). Would you say this is normal, and is there any feasible way to improve it?


1–1.5 seconds is normal (in some cases), and at the moment there are no plans to improve this.