Disabling plymouth on a fresh Fedora 31 installation reduces the boot time by about 50 percent across all tested systems: a desktop PC (with an Nvidia GPU), a ThinkPad T495, and a ThinkPad T440p.
The test involves:
1.) systemd-analyze time
1.1) Optional: systemd-analyze blame; systemd-analyze plot > boot.svg
2.) Adding rd.plymouth=0 plymouth.enable=0 to the kernel boot parameters
3.) Re-generating the GRUB configuration (a command sketch follows this list)
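A minimal sketch of the commands involved, assuming a Fedora 31 BIOS install (the EFI grub.cfg lives elsewhere) and the grubby tool that Fedora ships:

```
# 1.) Measure the boot time after each reboot
systemd-analyze time

# 1.1) Optional detail: per-unit times and a timeline chart
systemd-analyze blame
systemd-analyze plot > boot.svg

# 2.) Add the parameters to every installed kernel's command line
sudo grubby --update-kernel=ALL --args="rd.plymouth=0 plymouth.enable=0"

# 3.) Alternatively, add them to GRUB_CMDLINE_LINUX in /etc/default/grub
#     and regenerate the GRUB configuration (BIOS path shown)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```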
If you have a fairly fresh installation (to make it reproducible), please test this yourself. I’d love to hear whether this applies to all setups and is related to plymouth, or only applies to my set of hardware.
I’ve created a bug report, but it hasn’t gained much traction yet.
Does systemd-analyze actually show a meaningful time differential, e.g. between the time GRUB hands off to the kernel and the time the GNOME login screen appears?
I tested (2) as well, also using systemd-analyze, and came to a similar (though not identical) conclusion to yours. But then, per this bug report, I retested using a stopwatch instead. While the results show there is a time delay, it’s much less than what systemd-analyze reports, and it doesn’t reveal why the delay exists. It may not be the service unit itself causing the delay; it may instead be affecting the parallelization of other units.
I suggest you ignore plymouth-quit-wait.service (keep it unmasked) and retest with and without rd.plymouth=0 plymouth.enable=0 using a stopwatch. I also suggest something like three back-to-back tests with each setting (six tests total), so you’ve got a decently suggestive sample size.
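One way to separate the headline number from what actually gates the login screen is to look at the critical chain; a minimal sketch, using only standard systemd-analyze subcommands:

```
# Headline figure; this includes plymouth-quit-wait.service
systemd-analyze time

# Units sitting on the critical path to the default target
systemd-analyze critical-chain

# Wall-clock time of the plymouth units on the last boot
systemd-analyze blame | grep -i plymouth
```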
Boot speed definitely matters. Speeding up the startup process benefits everyone, across all hardware makes and models, all architectures, bare metal and VMs. And for many, it compensates somewhat when they’re unable to get hibernation to work (too complicated, firmware bugs, kernel bugs, etc.).
It’s not incorrect per se; it just has a different perspective. Plymouth sticks around, active, for a while after the login window appears, so systemd tracks its active time, but that isn’t something that delays boot by the full amount. Still, your results do indicate a roughly 3-second difference with and without plymouth. That’s real, and I think there’s room for improvement there.
Upstream plymouth maintainer / Mr. flicker-free boot here. Chris Murphy pointed me to this thread. As Chris already explained, the systemd-analyze time is not really reliable here, because it includes plymouth-quit-wait.service, which does not reflect when the login screen is ready for login.
With that said, there is some room for improvement. After Chris filed this bug, I started trying to reproduce this on my own system; and although I see no difference when I mask plymouth-quit-wait.service, I did notice two other interesting things:
On my system plymouth-switch-root.service takes a bit over 2 seconds, while this should be more like 20 ms. The problem is that plymouth operates synchronously (the classic Unix design of select/poll over a set of file descriptors), and just before the initrd finds my root filesystem, the i915 driver loads; plymouth then takes about 2 seconds to deal with all the udev events (which result in probing the monitor) and to load the initial theme. You may want to check your systemd-analyze plot output to see if you have something similar going on (a quick check is sketched below). I have a plan to fix this, but I’m not sure when I will get around to it.
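For a quick look without opening the SVG, the journal timestamps for that unit should show the same thing; a minimal sketch:

```
# Timeline chart; look for a long plymouth-switch-root.service bar
systemd-analyze plot > boot.svg

# Monotonic timestamps for the unit on the current boot
journalctl -b -u plymouth-switch-root.service -o short-monotonic
```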
If you’ve done your installation from a livecd, then you may very well have the dmraid and device-mapper-multipath packages installed. These both come with a service that is enabled by default, which drags in the deprecated systemd-udev-settle.service, and that takes a long time. You very likely do not need multipath (if you do, you will know), and unless you are using a RAID set managed by your disk controller or system firmware (rather than pure software mdraid), you also do not need dmraid. Even if you are using the RAID functionality of your motherboard, if that is Intel RAID / RST, then you still do not need dmraid. So typically you can safely run “sudo rpm -e dmraid dmraid-events device-mapper-multipath”, and this will shave a number of seconds off your boot time.
Note: do not remove dmraid if you are using a firmware-managed RAID set, or if you are not sure whether you are; removing it when it is necessary will result in a non-booting system!
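To check whether these services are present on your system and whether they drag in the settle service, something like this should do (a sketch):

```
# Which installed units pull in the deprecated settle service
systemctl list-dependencies --reverse systemd-udev-settle.service

# How long it took on the last boot, if it ran at all
systemd-analyze blame | grep -i udev-settle
```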
Thank you very much, Chris and Hans, for the valuable insight. Having gone through the systemd-analyze documentation plus your information, I now understand the calculation much better.
You brought up an interesting point regarding the livecd… I have a fresh notebook that I’d like to keep as “clean and lean” as possible. You mentioned that the livecd keeps anaconda and other dependencies in the final installation; I’ve noticed this too on my test machines. Does using the netinstall image solve these “issues”?
I believe that it should, yes. Specifically, I would expect anaconda to only add dmraid and device-mapper-multipath to the transaction if necessary. But it has been almost a decade since I last looked at the anaconda code for this.
Taking the current Workstation DVD image (livecd) and the netinstall image, I set up two test machines (both using BIOS boot), updated them to the latest packages, and compared the installed packages of the final installations.
The netinstall installation has no additional packages compared to the livecd installation.
The livecd installation contains the netinstall packages and in addition:
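For anyone who wants to reproduce the comparison, a minimal sketch (the output file names are made up for illustration):

```
# On each test machine, after updating and rebooting:
rpm -qa --qf '%{NAME}\n' | sort -u > packages-livecd.txt
# (packages-netinstall.txt on the other machine)

# Packages present only in the livecd installation
comm -13 packages-netinstall.txt packages-livecd.txt
```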
Are all those additional packages really useless? I see createrepo_c* there, and those are indispensable for me. I am experiencing long boot times too. It would be nice to have one location/page that describes potentially redundant packages, i.e. packages that can be removed after an installation.
It’s never that easy. I do not use dmraid; I just run a plain F31 Workstation on a single SSD. But:
[root@hat ~]# rpm -e dmraid dmraid-events device-mapper-multipath
error: Failed dependencies:
dmraid is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
libdmraid.so.1()(64bit) is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
libdmraid.so.1(Base)(64bit) is needed by (installed) libblockdev-dm-2.23-1.fc31.x86_64
device-mapper-multipath is needed by (installed) libblockdev-mpath-2.23-1.fc31.x86_64
device-mapper-multipath is needed by (installed) fcoe-utils-1.0.32-9.git9834b34.fc31.x86_64
Removing the matching libblockdev plugins should be fine; libblockdev has a separate lvm plugin, and the -dm plugin really is only for dmraid. You likely are not using Fibre Channel over Ethernet either, so removing fcoe-utils should be fine too.
So I guess a better command would be “sudo dnf remove dmraid device-mapper-multipath”, which will take care of the dependencies itself. But please do take a good look at what ends up being removed, and answer ‘N’ when asked to proceed if you do not trust things.
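A sketch of that double-check, assuming only the package names already discussed in this thread:

```
# See what requires these packages before removing anything
rpm -q --whatrequires dmraid device-mapper-multipath

# Let dnf resolve the dependent packages; review the transaction
# summary carefully before confirming
sudo dnf remove dmraid device-mapper-multipath
```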