Disabling plymouth on a fresh Fedora 31 installation reduces the boot time by about 50 percent across all tested systems: a desktop PC (with an Nvidia GPU), a T495, and a T440p.
The test involves:
1.) Running systemd-analyze plot > boot.svg
2.) Adding rd.plymouth=0 plymouth.enable=0 to the kernel boot parameters
3.) Re-generating GRUB
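For anyone wanting to reproduce this, the steps might look like the following on a BIOS-boot Fedora system (a sketch: /etc/default/grub and the grub2-mkconfig output path are the usual Fedora defaults, not something stated in this thread):

```shell
# Append the parameters to GRUB_CMDLINE_LINUX in /etc/default/grub:
sudo sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 rd.plymouth=0 plymouth.enable=0"/' /etc/default/grub
# Re-generate the GRUB configuration (BIOS-boot path shown):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# After a reboot, capture the timing chart:
systemd-analyze plot > boot.svg
```

On systems with grubby installed, `sudo grubby --update-kernel=ALL --args="rd.plymouth=0 plymouth.enable=0"` should achieve the same without editing the file by hand.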
If you have a fairly fresh installation (to make it reproducible), please test this yourself. I’d love to hear whether this applies to all setups and is related to plymouth, or only applies to my set of hardware.
I’ve created a bug report, but it has not been gaining traction yet.
“Ideally, the goal is to get rid of all flicker during startup.” (“Spare no expenses!”)
There are actually three questions:
- Does plymouth slow down boot?
- Does plymouth-quit-wait.service slow down boot?
- Does systemd-analyze actually show a true time differential, e.g. between the time GRUB initiates its boot command and the time the GNOME login appears?
I tested #2, also using systemd-analyze, and came to a similar (though not identical) conclusion to yours. But then, per this bug report, I retested using a stopwatch instead. And while the results show there is a time delay, it’s much less than what systemd-analyze reports; they don’t reveal why there is a delay. It may not be the service unit itself causing the delay, but rather its effect on the parallelization of other units.
I suggest you ignore plymouth-quit-wait.service (keep it unmasked) and retest with and without rd.plymouth=0 plymouth.enable=0 using a stopwatch. I also suggest something like 3 back-to-back tests with each setting (6 tests total); that way you’ve got a decently suggestive sample size.
For sure, boot speeds are important. It’s widely beneficial to speed up the startup process, for everyone, across all hardware makes/models and architectures, bare metal and VM. And for many, it compensates somewhat if they’re unable to get hibernation to work (too complicated, firmware bugs, kernel bugs, etc.).
You are indeed correct…
Running 3 back-to-back tests with plymouth enabled and with it disabled shows that the time reported by systemd-analyze is incorrect.
It’s not incorrect per se. It just has a different perspective. Plymouth sticks around, active, for a while after the login window appears, and therefore systemd is tracking its active time, but it’s not something that delays boot by the full amount. But your results do indicate a roughly 3 s difference with and without plymouth. That’s real. And I think there’s room for improvement there.
Upstream plymouth maintainer / Mr. flicker-free boot here. Chris Murphy pointed me to this thread. As Chris already explained, the systemd-analyze time is not really reliable because it includes plymouth-quit-wait.service, which does not reflect when the login screen is ready for login.
With that said, there is some room for improvement. After Chris filed this bug, I started trying to reproduce this on my own system; and although I see no difference when I mask plymouth-quit-wait.service, I did notice 2 other interesting things:
1.) On my system plymouth-switch-root.service takes 2 and a bit seconds, while this should be more like 20 ms. The problem is that plymouth works synchronously (using the classic Unix select/poll on a bunch of file descriptors design), and just before the initrd finds my root filesystem the i915 driver loads; plymouth then takes about 2 seconds to deal with all the udev events (which result in probing the monitor) and to load the initial theme. You may want to check your systemd-analyze plot output to see if you have something similar going on. I have a plan to fix this, but I’m not sure when I will get around to it.
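A quick way to cross-check this without opening the SVG is to grep the blame output for plymouth units (standard systemd-analyze subcommand; the timings are obviously machine-specific):

```shell
# List units sorted by startup time and keep only the plymouth ones;
# a multi-second plymouth-switch-root.service matches the issue above.
systemd-analyze blame | grep -i plymouth
```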
2.) If you’ve done your installation from a livecd, then you may very well have the dmraid and device-mapper-multipath packages installed. These both come with a service that is enabled by default, which drags in the deprecated systemd-udev-settle.service, and that takes a long time. You very likely do not need multipath (if you do, you will know), and unless you are using a RAID set which is managed by your disk controller or system firmware (rather than pure software mdraid) you also do not need dmraid. Even if you are using the RAID functionality of your motherboard, if that is Intel RAID / RST then you still do not need dmraid. So typically you can safely do: “sudo rpm -e dmraid dmraid-events device-mapper-multipath”, and this will shave a number of seconds off your boot time.
Note: do not remove dmraid if you are using a firmware-managed RAID set, or if you are not sure whether you are; removing it when it is necessary will result in a non-booting system!
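A possible sanity check before removing anything (a sketch: I believe dmraid -r prints “no raid disks” when it finds no sets, but treat that exact string as an assumption):

```shell
# If dmraid finds no firmware/BIOS RAID sets, the package is not in use
# on this machine and should be safe to remove.
if sudo dmraid -r 2>&1 | grep -qi "no raid disks"; then
    echo "no firmware RAID sets found; dmraid looks safe to remove"
fi
```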
Thank you very much Chris and Hans for the valuable insight. I’ve been looking into the systemd-analyze documentation, and with your information I now understand the calculation much better.
You brought up an interesting point regarding the livecd… I have a fresh notebook that I’d like to keep as “clean and lean” as possible. You mentioned that the livecd keeps anaconda, and other dependencies, in the final installation - I’ve noticed this too on my test machines. Does using the netinstall image solve these “issues”?
I believe that it should, yes. Specifically, I would expect anaconda to only add dmraid and device-mapper-multipath to the transaction if necessary. But it has been almost a decade since I last looked at the anaconda code for this.
Taking the current Workstation DVD image (livecd) and the netinstall image, I set up two test machines (both using BIOS boot), updated them to the latest packages, and compared the installed packages of the final installations.
The netinstall installation has no additional packages compared to the livecd installation.
The livecd installation contains the netinstall packages and in addition:
Additional services (which are visible in systemd-analyze blame)
EDIT: added services
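For anyone repeating the comparison, it can be done with sorted package lists from both machines (the file names here are my own, not from the thread):

```shell
# On each machine, dump the installed package names, sorted:
rpm -qa --qf '%{NAME}\n' | sort > pkgs.txt
# After copying the two lists to one machine (as livecd-pkgs.txt and
# netinst-pkgs.txt), print the packages only present in the livecd install:
comm -23 livecd-pkgs.txt netinst-pkgs.txt
```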
Are all those additional packages useless… really? I see createrepo_c* there, and those are indispensable for me. I am experiencing long boot times too. It would be nice to have one location/page that describes potentially redundant packages after an installation.
It’s never that easy. I do not use dmraid, I just use a simple F31 Workstation on a single SSD. But:
[root@hat ~]# rpm -e dmraid dmraid-events device-mapper-multipath
error: Failed dependencies:
dmraid is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
libdmraid.so.1()(64bit) is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
libdmraid.so.1(Base)(64bit) is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
device-mapper-multipath is needed by (installed) libblockdev-mpath-2.23-1.fc31.x86_64
device-mapper-multipath is needed by (installed) fcoe-utils-1.0.32-9.git9834b34.fc31.x86_64
Thank you for running these tests; things are as I suspected them to be.
Removing the matching libblockdev plugins should be fine; libblockdev has a separate lvm plugin, and the -dm plugin really is only for dmraid. You are likely also not using Fibre Channel over Ethernet, so removing fcoe-utils should be fine too.
So I guess a better command would be “sudo dnf remove dmraid device-mapper-multipath”, which will take care of the deps itself. But please do take a good look at what ends up being removed, and say ‘N’ when asked to proceed if you do not trust things.
I’ve filed a bug (against anaconda for now) to discuss how to make the dmraid and multipath services not run from a livecd install when they are not needed:
1796437 - RFE: remove or disable dmraid and device-mapper-multipath from livecd installs (if not needed)
I hope it will not spawn a heap of bugs for such a little improvement over a few additional lines in the FAQ / User Manual… But good luck!
When in doubt, adding --noautoremove is always a good idea.
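Applied to the command from above, that would be (a sketch; --noautoremove is a standard dnf option):

```shell
# --noautoremove stops dnf from also sweeping out packages that were only
# pulled in as dependencies and are now unused; hard dependents of the
# named packages are still removed.
sudo dnf remove --noautoremove dmraid device-mapper-multipath
```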
Did you get around to fixing #1?
I saw some commits that look like they touch the issue, but it’s hard to understand the overall status.
#1 is somewhat fixed. I’ve added some patches halving the time, but there might still be some delay there.
Good to hear!
plymouth-switch-root.service casually takes 1-1.5 seconds (the latter is probably when there are two displays), would you say this is normal and is there any feasible way to improve?
1 - 1.5 seconds is normal (in some cases) and atm there are no plans to improve this.