I recently completed the survey on Fedora Magazine regarding Fedora Server, and I figured I would stop by and share some of my thoughts.
My current use case for Fedora Server is testing the newest versions of Samba and playing around with Samba-powered Active Directory. The install is virtualized on my laptop alongside a Windows VM and a Debian VM. I really like what Samba is doing, and I like to tinker with it occasionally.
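For anyone who wants to try the same kind of throwaway setup, provisioning a test Samba AD domain controller is a single command. The realm, domain, and password below are placeholders, not values from my setup:

```shell
# Provision a throwaway Samba Active Directory domain controller.
# Realm, domain, and admin password are placeholders; change them.
sudo samba-tool domain provision \
    --realm=AD.EXAMPLE.COM \
    --domain=EXAMPLE \
    --server-role=dc \
    --dns-backend=SAMBA_INTERNAL \
    --adminpass='Passw0rd!'
```

After that, `samba-tool user list` and joining the Windows VM to `AD.EXAMPLE.COM` are good smoke tests.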
With that being said, I would personally never use Fedora Server in any use case where reliability is a concern. I see that there is work going into Ansible automation, but if I am being totally honest I don’t really understand the use case. In my opinion, Fedora Server is not really suited for anything that requires lots of installs. I have used Fedora Server in the past for various things, but it has always been to test or play with newer packages. I really like the Cockpit GUI, and when I do use Fedora Server I almost never touch SSH since the GUI is pretty powerful.
That said, I think there is a lot of opportunity for Fedora Server. It could be used as a test bed for new ways of doing things. The docs are a little sparse, but once you figure Fedora Server out it works pretty well. I would like to see Fedora Server become more of a skeleton that people can build interesting things on. Currently it is a bit heavy to be all that useful, but if you could strip it down and pair it with a quick installer I would use it more. Fedora IoT and CoreOS are closer to what I had in mind, but they are both a pain to set up.
One thing I find fascinating is the idea of immutable Linux. ostree and similar tech have a lot of promise, and I think it would be cool if I could roll changes forward and back. Ideally there should also be some sort of change control for /etc, so that I can fully undo changes to the system (perhaps via git or filesystem snapshots). Workloads for such an immutable system could ship via containers, so that each workload has its own /etc to work with. The idea is that I could test and deploy different configurations rapidly without having to worry about leftover configuration files, lingering services, or resource conflicts. I can do this already with Podman, but Podman feels a bit overkill for what I am going for. What I am thinking is that a sandboxing system like bubblewrap could be used to pass through the underlying system, so that a dedicated container image is not necessary. Part of this idea is done in my sysvol replication work using bubblewrap: darin755/samba_sysvol_repl - Codeberg.org
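As a rough sketch of the bubblewrap idea (all the paths here are made up for illustration), something like this gives a workload the host’s /usr read-only while handing it a private, disposable /etc:

```shell
# Sketch only: reuse the host's binaries read-only, but give the
# workload its own throwaway /etc. All paths are illustrative.
bwrap \
    --ro-bind /usr /usr \
    --symlink usr/bin /bin \
    --symlink usr/lib64 /lib64 \
    --bind ./workload-etc /etc \
    --proc /proc \
    --dev /dev \
    --tmpfs /tmp \
    --tmpfs /var \
    --unshare-pid \
    /usr/bin/bash
```

Delete `./workload-etc` and the workload’s configuration is gone; the host never saw it.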
You don’t have to use ostree to do snapshots and rollbacks. You should be able to select Btrfs as the filesystem for your root partition. IMO snapshots and rollbacks should be done at the filesystem level anyway, not the package manager level.
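For anyone who wants to try this, a snapshot on a default Fedora Btrfs layout can look like the following. The device name and subvolume names are assumptions; check yours before running anything:

```shell
# Mount the top-level Btrfs volume (subvolid=5) next to the running system.
# /dev/vda3 and the "root" subvolume are assumptions from a default Fedora
# layout; verify with `btrfs subvolume list /`.
sudo mount -o subvolid=5 /dev/vda3 /mnt
sudo btrfs subvolume snapshot -r /mnt/root "/mnt/root-$(date +%F)"
# To roll back, boot rescue media, move the broken root aside, and make a
# writable snapshot of the good one in its place:
#   mv /mnt/root /mnt/root-broken
#   btrfs subvolume snapshot /mnt/root-2024-01-01 /mnt/root
```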
I have Fedora Server running my email gathering and IMAP server, file serving, source control repo server, prometheus monitoring, grafana web UI, supporting backups of Windows, macOS and Fedora systems.
And Fedora Server is my home router protecting my home from the big bad internet.
I have run this way for more than a decade with barely a blip in uptime.
I have 24x365 uptime - it’s very reliable for me.
And I update all systems once a week. I have very rarely seen an update break any of my services.
I’ve done rolling unattended daily updates and reboots for years with Server since around F23 and never had an issue. Can’t say the same about Ubuntu and openSUSE.
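For what it’s worth, the unattended part on Fedora is just dnf-automatic; the timer schedule is the package default:

```shell
# Enable unattended updates via dnf-automatic.
sudo dnf install dnf-automatic
# In /etc/dnf/automatic.conf, set:
#   apply_updates = yes
sudo systemctl enable --now dnf-automatic.timer
```

Pairing that with a scheduled reboot (a simple systemd timer works) gets you the rolling daily updates described above.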
Fedora Server is the best Linux server distro from what I’ve seen, and what I’d use right now. I feel it’s light enough (for a webserver/NAS I didn’t need to check or uncheck anything from the defaults for a basic headless install), and I have zero interest in anything immutable for Server.
The problem for me is that it requires a lot of updates. I think that is by design, but for the majority of my stuff I don’t need newer packages. Debian stable works great and only needs the occasional update once a month. I have also seen Debian systems run for years with unattended updates; that can actually become a serious problem, as systems are sometimes forgotten about for years. In my experience Fedora is generally pretty stable in the sense that it is mostly bug free. From a change-control standpoint, however, it changes once a week, which is much more of a hassle than I want to deal with.
At the end of the day it is really what fits your needs I suppose.
I use Fedora Server on Hetzner and am very happy with it. I manually update it for now.
I agree with your point on image-based updates. The only things that could be improved for me are getting the same image-based rollbacks (for peace of mind) and auto-updates (so I don’t need to run them manually) that we get through Silverblue/uBlue. Btrfs is not quite as fool-proof as image-based OSs, in this fool’s experience.
I’m aware of CoreOS, but most providers don’t support it out of the box, and I’ve never taken the time to learn the Ignition system, Butane files, and Terraform (!) required to get it set up on Hetzner and other providers.
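For what it’s worth, the Butane side can be very small. A minimal config like this (the SSH key is a placeholder) compiles to the Ignition JSON that Fedora CoreOS consumes at first boot:

```shell
# Minimal Butane config for Fedora CoreOS; the SSH key is a placeholder.
cat > minimal.bu <<'EOF'
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3Nza... you@example.com
EOF
# Compile Butane YAML to Ignition JSON:
butane minimal.bu > minimal.ign
```

The Terraform part is only needed if the provider has no way to hand the `.ign` file to the VM as user data.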
Something in between would be great for my purposes. I suggested this in the survey too, but expanded on it here.
I think Fedora Server should look into taking some inspiration from the embedded Linux space. With embedded systems you typically want the system to be as minimal as possible, with strong change control and quick, reliable updates. Fedora could build a small, minimal OS image like an embedded system, and have anything extra be installed some other way, out of band from the main system. The idea is that the core system always needs to work, while workloads can be changed and potentially recovered at runtime. Ideally the core system should always boot no matter what, because a booted system is much easier to fix.
CoreOS does this somewhat, but the biggest problem for me is that it lacks a lot of flexibility. I want simplicity more than I want automation, and CoreOS is designed to be fully automated, which isn’t practical for only a few installs. What would be nice is a system that is easy to install and configure. To do this, the OS needs some sort of traditional Linux-like layer where I can install and configure extra packages. My idea is to use a sandboxing environment like Bubblewrap or chroot to create an overlay system where workloads can live. The core would be solid and reliable, but you would still get the flexibility of a traditional install.
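As a sketch of what I mean (all paths here are made up), overlayfs can already fake this today: a writable layer over a read-only base that can be thrown away wholesale:

```shell
# Sketch: mount a writable overlay on top of the base system so a
# workload can "install" things without touching the real root.
# All paths are illustrative.
mkdir -p /work/upper /work/workdir /work/merged
sudo mount -t overlay overlay \
    -o lowerdir=/,upperdir=/work/upper,workdir=/work/workdir \
    /work/merged
# Enter the overlay; every change lands in /work/upper and can be
# discarded by unmounting and deleting that directory.
sudo chroot /work/merged /bin/bash
```

Note that `upperdir` and `workdir` must live on the same (writable) filesystem.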
This is kind of what SkiffOS does, but I personally think that running everything in Docker is needlessly heavy. Instead, the base OS should be reused so that you don’t need to worry about container images.
The problem with btrfs snapshots is that they require manual intervention when something goes wrong. I want something that will attempt a change and then undo it on a failed boot or health check.
Btrfs snapshots could still be used, but there needs to be some sort of middleman. Even ostree is vulnerable to a bad kernel, since its recovery feature still needs a kernel that can boot.
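One low-tech middleman that already exists is GRUB’s one-shot default (Fedora IoT’s greenboot builds a fancier health-check-and-rollback version of the same idea). The entry title below is an assumption:

```shell
# Boot the new entry exactly once; if it never comes up, the next power
# cycle falls back to the old default. The entry title is an assumption;
# list the real ones with `grubby --info=ALL`.
sudo grub2-reboot "Fedora Linux (6.9.0-test)"
sudo systemctl reboot
# Once the new kernel passes your health checks, make it permanent:
sudo grub2-set-default "Fedora Linux (6.9.0-test)"
```

This covers the bad-kernel case at the bootloader level, without the package manager or ostree needing to be involved.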
I’m not a big fan of a lot of automation. But maybe it depends on your use-case.
For example, I’m told that GRUB has some automation where it is supposed to automatically show the boot menu whenever there is a problem with the software. Presumably the user is never supposed to have to figure out how to summon the boot menu. However, I’m constantly having to explain to users on this forum how to get the GRUB boot menu to show when there is some problem or another with the kernel or a driver so they can select and use the previous one that worked.
So far, I’m not impressed with the computer’s ability to self-determine that it is doing what the user wants it to do. I cannot imagine what would happen if it were left to itself to make determinations about package updates and reconfiguration whenever it feels like it. As a sysadmin, I’d much rather make the changes myself and review them myself on my schedule.
There is some (reasonable) avoidance of atomic editions, but many of the mentioned shortcomings can be avoided with atomic systems[1].
Presuming the system worked initially, one is never left without a kernel (i.e., an OSTree deployment) to boot from, given that the oldest deployments are only deleted after successfully booting into the newly deployed image.
Atomic systems always present the GRUB menu.
[1] I can only speak about atomic desktops though, not servers.
I’m still not entirely sure how to intentionally summon GRUB on UEFI (holding Shift never worked), but I’ve been lucky that automation made it appear when I wanted it (I think multiple quick reboots, or rebooting outside of systemctl, trigger it).
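On Fedora specifically, the hiding is controlled by a grubenv flag, so you can make the menu show every time instead of relying on the boot-failure heuristics:

```shell
# Fedora auto-hides the GRUB menu via the menu_auto_hide grubenv flag.
# Unsetting it makes the menu appear on every boot:
sudo grub2-editenv - unset menu_auto_hide
# Inspect the current grubenv state:
sudo grub2-editenv list
```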