Can we build the easiest home server ever?

TL;DR: I’ve been thinking a lot lately about an operating system for a home server that uses existing technologies such as Fedora CoreOS, Cockpit, Podman, etc. to enable even inexperienced users to set up a home server without much effort and whose operation is as simple as that of a Fedora workstation. Preconfigured Podman containers should be installed from a kind of app store (Cockpit extension) with just one click, and no interaction should be necessary for further operation. What do you think of this idea?

The whole story
I have been using Fedora on my notebook and server for many years now. I recently showed a friend my home server and what I use it for (Nextcloud, Gitea, Home Assistant, OctoPrint and Pi-hole). He found it so cool that he also wants to set up a server. Unlike me, he has no IT background and is therefore having a hard time setting up his server with the instructions I gave him.

This got me thinking how cool it would be to have a server operating system that is as easy to use as Fedora Workstation on my notebook.

The user story
As a maker, privacy advocate, non-profit organization, small business, start-up and so on, I would like to set up a server, without much prior knowledge, on which I can run containerized workloads like a website, a Nextcloud installation, or a Home Assistant setup without having to worry about maintenance or backups.

Ideally, I download an installation medium, flash it onto an SD card for my Raspberry Pi or onto a USB stick for an (old) computer that should serve as a server, and boot from it. On my notebook, I then start a web browser and open the setup page (something like homeserver.local) that was started on my server from the live USB stick. As with a Fedora Workstation installation, I choose a username for the server, create partitions and configure my server. After the installation, the page reloads and I see a login screen (Cockpit) where I can log in.

After logging in, I get a tab called Apps. There I find a list of community-maintained apps (containers) that I can install with one click. Before the installation starts, I am asked under which (sub)domain the app/service should be available. In the background, one or more containers are automatically set up for me, which make the app/service available. There is a reverse proxy and a container that creates backups of the container volumes and saves them in a location that can be set by the user. Where possible, the apps/services are preconfigured so that they can be operated securely on a home server.

After installation, the user no longer has to worry about anything. The server and the containers update themselves automatically or at the user's request.

The technology
For the implementation, I am thinking of something like a bare-metal installation of Fedora CoreOS with Cockpit. The App Store could be realized via a Cockpit extension. The applications are preconfigured Podman containers that can be loaded from a repository and started with a single click. Cockpit already offers many options for the rest of the system configuration, like e.g. user management.

The question
What do you think of this idea? Do you think something like this would be useful? If so, do you think that Fedora CoreOS, Cockpit and Podman are the right tools for this or would it be better to choose a different basis? Does something like this perhaps already exist? Would anyone like to work on this together with me?

I am looking forward to your answers.


My friends and I have been thinking about this too, and we came up with some ideas. Let me share some of the solutions and problems we encountered when trying to design this.

Making it user friendly
Probably the most difficult part is making it user friendly. Creating a page where the user can install software shouldn't be too difficult, but maintaining that software is.

The problem is mostly updates. Updating software like Gitea or Miniflux is mostly fine, but it breaks down when there is a breaking change in the config format. And when something breaks, rollback is different for every service: you can't downgrade something that has just run database migrations.

Then there is also software such as postgres, which is a pain to upgrade to a new major release. The system would need to do an automatic pg_dump and pg_restore. Not impossible but it makes it a bit harder. If anything goes wrong the user probably doesn’t know how to fix it.
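That dump/restore dance can at least be scripted per service. A hedged sketch of what such a script might plan (the container names, unit names, and paths here are hypothetical; to stay reviewable, this version only writes the intended commands to a file instead of executing them):

```shell
#!/bin/sh
# Sketch of a major-version Postgres upgrade via dump/restore.
# Nothing is executed: the planned steps are written to a file so a
# user (or a UI) could review them before running anything for real.
set -eu
PLAN="${TMPDIR:-/tmp}/pg-upgrade-plan.txt"
OLD=postgres15   # hypothetical existing container / quadlet unit
NEW=postgres16   # hypothetical new major-version container

: > "$PLAN"
plan() { echo "$*" >> "$PLAN"; }

plan "podman exec $OLD pg_dumpall -U postgres -f /var/lib/postgresql/dump.sql"
plan "systemctl --user stop $OLD.service"
plan "systemctl --user start $NEW.service"
plan "podman exec $NEW psql -U postgres -f /var/lib/postgresql/dump.sql"

cat "$PLAN"
```

Even then, as you say, if the restore fails halfway through, a user without a database background is stuck, so the real work is in making the failure path recoverable.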

We came to the conclusion that it is impossible to use a single container as the “application” / “service” that is installed by the user. This is because a service:

  • Consists of multiple containers that depend on each other.
  • Has a different backup / restore method. You can't just copy a running Postgres volume and call it a backup. This could be fixed by stopping the services, but this is not a trade-off we wanted to make.
  • Has config for the reverse proxy.
  • Has certain secrets / environment variables.

What we came up with and are using now is a git repo for each service. A service contains quadlet files, configuration files, backup / restore scripts and reverse proxy config.
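For readers who haven't seen quadlets: one file in such a service repo might look roughly like this. This is a sketch, not one of our actual files; the image tag, the companion database unit, and the secrets file are assumptions for illustration:

```ini
# ~/.config/containers/systemd/miniflux/miniflux.container
[Unit]
Description=Miniflux feed reader
Requires=miniflux-db.service
After=miniflux-db.service

[Container]
Image=docker.io/miniflux/miniflux:latest
EnvironmentFile=%h/.config/miniflux/secrets.env
Network=miniflux.network
AutoUpdate=registry

[Service]
Restart=on-failure

[Install]
WantedBy=default.target
```

Quadlet turns this into a regular systemd user service, so the usual `systemctl --user` tooling applies, and the repo can version the file like any other config.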

The backup and reverse proxy (Caddy) are also services, but they scan all installed services for their config. Services are installed by simply git cloning them into ~/.config/containers/systemd/. By doing this we achieved a simple install method (without a user interface), but maintenance would still be a problem.
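The "proxy scans installed services" part can be as simple as concatenating per-service fragments. A sketch, assuming a hypothetical layout where each service repo ships a `Caddyfile.fragment` next to its quadlet files (the directory, hostnames, and fragment name are all illustrative, not our exact convention):

```shell
#!/bin/sh
# Assemble one Caddyfile from the fragments of all installed services.
set -eu
root="${TMPDIR:-/tmp}/quadlet-demo"   # stands in for ~/.config/containers/systemd
mkdir -p "$root/demo-app"

# A fragment as a hypothetical service repo might ship it.
cat > "$root/demo-app/Caddyfile.fragment" <<'EOF'
app.home.example {
    reverse_proxy demo-app:8080
}
EOF

# Concatenate every fragment into the file the proxy container mounts.
find "$root" -name 'Caddyfile.fragment' -exec cat {} + > "$root/Caddyfile"
grep -c 'reverse_proxy' "$root/Caddyfile"
```

Installing or removing a service then automatically adds or drops its vhost the next time the proxy config is rebuilt.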

Making local backups is not enough when there is important data stored on the server, so the backup functionality should back up to another server or an S3 bucket. We ended up using restic together with Backblaze B2.
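The transport side of that can be a plain systemd user timer. A sketch (the bucket name, paths, and the secrets file holding B2_ACCOUNT_ID, B2_ACCOUNT_KEY, and RESTIC_PASSWORD are assumptions; per-service dump scripts would still run first for databases, since copying a live volume is not a backup):

```ini
# ~/.config/systemd/user/volume-backup.service
[Unit]
Description=Back up container volumes with restic

[Service]
Type=oneshot
Environment=RESTIC_REPOSITORY=b2:my-bucket:homeserver
EnvironmentFile=%h/.config/restic/secrets.env
ExecStart=/usr/bin/restic backup %h/.local/share/containers/storage/volumes
ExecStart=/usr/bin/restic forget --keep-daily 7 --keep-weekly 4 --prune

# ~/.config/systemd/user/volume-backup.timer
[Unit]
Description=Nightly restic backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

restic's deduplication keeps the nightly uploads small, which matters on a home connection.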


I think this would be useful, and Fedora CoreOS and Podman are probably a great option for creating something like this. The idea you are describing already exists in the NAS world, from what I understand. I never used it myself but have seen others using it. Take a look at systems such as TrueNAS, Synology, and ClearOS.

Making Fedora more user-friendly for the home lab use case sounds great to me! Is there a reason why CoreOS, IoT, Cloud, or Server don’t currently serve that purpose well? I would recommend reaching out to one of them to see if these changes can be implemented.

Hey, thanks for sharing your thoughts on my post and the information about what you already achieved. Is this work publicly available somewhere, or do you plan to publish it? I would be really interested to see what you did there, since it sounds a lot like what I had in mind when I wrote the text above.

I had some time over the last few days and did some research, looked at my current setup, and tinkered around a bit. I basically came to the same conclusion as what you wrote in your post: for every service there must be a way to define a multi-container setup, and dedicated config files and backup and restore scripts are needed.

Oh, Fedora CoreOS is already very user-friendly, for people who know what they are doing. The idea in the text above is more about building something new for people who don't know much, or even anything, about containers or Fedora CoreOS. It is about giving them an easy-to-use UI to set up specific services, which is normally done by writing multiple config files and scripts.

Even though I have used Fedora for many years now, I am relatively new to this community. Where and how is the best way to reach out to them? I thought this might already be the best place :smiley:

If you’re on Matrix you can talk in their team chat here. Otherwise try posting in the Project Discussion category with the coreos-wg tag to reach them. The Water Cooler category is more of a topic for general chat, which is fine for how this thread started. :slight_smile:

I've heard what you're saying from another person as well. I don't really have a server or know much about it, but I did hear about there being this expectation of starting with Ignition files or some kind of config to manage CoreOS. If that's the case, then it would be nice to have something simpler. I've heard that Fedora IoT, funnily enough, has satisfied this use case while also still using rpm-ostree. That may work in the short term, but it would be nice if people knew this worked as an alternative.

Most of our service repositories are private because they contain sensitive information in the git history. I made a couple public without history, they are available here:

I think these should give a good idea of what we are doing: mostly just quadlet files, but some services (Caddy, backups) can collect configuration from other services. The main advantage of this compared to doing everything with Ignition is that services can easily be moved between servers. Installing these services consists of git cloning them into .config/containers/systemd and restoring the required volumes from backup (if needed).

Config updates are also a simple git clone instead of redeploying the server, which is much more workable for us because we have many services on a single server. We do use ignition, but only for a basic server setup. This includes setting up ssh, a firewall, sysctl values, enabling lingering etc…
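A minimal Butane config for that kind of base-only setup might look roughly like this. This is a sketch, not our actual config: the user name, the key placeholder, and the sysctl value are assumptions, and the firewall part is omitted:

```yaml
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... admin@laptop   # placeholder key
storage:
  files:
    - path: /etc/sysctl.d/90-server.conf
      contents:
        inline: |
          net.ipv4.ip_unprivileged_port_start=80
systemd:
  units:
    - name: enable-linger-core.service
      enabled: true
      contents: |
        [Unit]
        Description=Enable lingering so user quadlets run at boot
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/loginctl enable-linger core
        [Install]
        WantedBy=multi-user.target
```

Keeping Ignition this small is what makes the per-service git repos portable: the host provisioning and the workloads never overlap.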

I don’t think this is a full solution to making a user friendly server, but it might be a step in the right direction. We mostly optimized it for our use-case, which is a simple setup that requires minimal maintenance.

Would be interested in what you think of what we are doing, is it similar to how you are using CoreOS? Any suggestions to improve what we are doing would be greatly appreciated ;P. Also if something is still not clear I would be more than happy to clarify.

I don’t believe beginners should be using Podman or any containers (unless you really want to learn those), or anything other than bare-metal. Anything else could hide details that you want to be aware of early imo.

I don't encourage one-clicks, repacks, or anything that abstracts away the basics. I started out with random WoW server repacks, XAMPP, and eventually just running the stuff from upstream source with MariaDB, Git, and learned how to compile it. I claim to have had the most popular WoW private server for WotLK for a bit (with 2FA even), and was doing what ChromieCraft now does before they even existed.

One-click easy solutions might be OK for starting out if you're interested in how it works, but relying on them from any kind of security standpoint is not ideal, particularly if you're not aware of the core details of the container or whatever you're using. You want a foundation, and a strong foundation is knowing how stuff works on bare metal without abstraction.

And if it’s a time or effort ordeal, you 100% surely don’t want to be running your own servers because good luck with the continuous security monitoring of it. Running insecure stuff is like candy for botnets, and I don’t want more people becoming part of those out of laziness :stuck_out_tongue:

I also don't like the idea of immutable distros, or whatever CoreOS is, as that also feels like abstraction. Servers and files don't just blow up out of nowhere; if you have to run that in fear of the distro pushing a bad update, why would you trust the distro at all? I've self-hosted for about 9 years now and have done openSUSE Tumbleweed and Fedora Server with unattended, auto-accepted, daily updates with zero issues the entire time, including today.

If you have a hard interest in only using container(s) or immutable though, go for it! I guess there’s some appeal for them, and I assume there’s other people like me that won’t go near em so you’ll have an edge with that knowledge.

I started my wiki for my own notes, but I want it to be a good source for people wanting to run servers as that’s mostly my ordeal currently; self-hosting stuff and not relying on cloud or big companies! I like knowing how stuff works, and I like making it work better :stuck_out_tongue:

Here’s some notes that can be copy/pasted into Fedora Server (what I run primarily) to basically copy the DokuWiki instance I run:

If my server blew up right now, I could restore it entirely from those notes alone (and a copy of my actual pages).

Those are the basics, on bare metal, no abstraction. nginx is set up by hand. PHP-FPM and the pools are also by hand. Hooking them together with FastCGI, also by hand. And it's all there in text :stuck_out_tongue:
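For anyone curious what that wiring looks like, a hedged sketch of the nginx side (the server name, web root, and pool socket path here are illustrative, not my actual config, and TLS directives are left out):

```nginx
server {
    listen 443 ssl;
    server_name wiki.example.org;
    root /var/www/dokuwiki;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Socket defined by the matching PHP-FPM pool config.
        fastcgi_pass unix:/run/php-fpm/dokuwiki.sock;
    }
}
```

The matching pool file on the PHP-FPM side just declares the same socket and the user it runs as; once you've written this pair once, cloning it for another site is a find-and-replace.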

And the best part with that style is I can copy it and change a few things for other websites too, like WordPress and Joomla. I like consistency!

Here’s all the servers I’ve ran with set-up notes.


I generally agree with this: if someone is new, they should start out by learning the basics. Then, after they have a basic understanding, they can move on to containers. But if a user is already familiar with containers from using them on Windows or macOS, I don't think it would be bad to continue using them.

I think that creating an "easy home server" as Steffen proposed is great for getting users interested. I don't see how creating a good one-click install solution would cause security concerns. It would probably be the opposite: a well-configured service by someone who knows what they are doing is more secure than something set up by a new user. That the user is not aware of how everything works should not be a problem; my browser is not insecure because I don't completely understand how it works.

This is a good point: a new user will probably not monitor the logs for intruders. But how many people running a home server do that? My guess is not that many. Or are you trying to point out that running home servers is a bad idea in general? Making the logs easily accessible through a web UI might partially solve this problem.

There are also many services a user might want to run that don't need to be accessible from the internet. If a user only wants to run services like this, I think security would also be less of a concern.

It is not about fear of a bad update or not having trust in Fedora. I will still create backups of my disk even if it is from a respectable brand and has been running fine for 10 years. Yes, I trust the disk, but it is great to have a fallback should it fail.

What is also great is that CoreOS offers different streams: stable, testing, and next. By running a server on the next stream, it is possible to catch bugs before they are deployed to the important servers. A couple of months ago, Podman changed how it interprets the ReadOnly quadlet config option. This broke some of my services, and I was happy to catch that before it reached more important servers.

Also I think that choosing containers over installing things on the host has several advantages:

  • When doing updates it is easier to detect which part of the system is having problems because containers can be updated one by one.
  • It is easier to roll back a single service when an automatic update fails. When running everything on the host, it could be difficult to roll back a single service because it might depend on older versions of libraries.
  • Combined with SELinux containers give improved security when configured correctly (resource limits, limited file access, readonly filesystem etc…). Yes it is possible to achieve the same thing without containers but it will be more difficult.
  • Because every service is isolated and has its own volume / config directory it would be easier to create a one-click install system / backups.

And there are probably more which I am forgetting.
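On the per-service rollback point: quadlets can lean on Podman's own auto-update mechanism for part of this. A sketch (the image and service are illustrative); with `AutoUpdate=registry` set, `podman auto-update` pulls newer images and restarts the unit, and as far as I know it can also roll back to the previous image if the restarted unit fails to come up:

```ini
# A quadlet .container file opted into auto-updates.
[Container]
Image=docker.io/miniflux/miniflux:latest
AutoUpdate=registry

[Service]
Restart=on-failure
```

That only covers "the new image doesn't start", though; failed database migrations still need the per-service restore scripts discussed earlier.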

I think your server setup is great, really nice to see so much documentation! Interesting to see how others are running their servers. Might be great to have a topic where everyone shares how they are running servers using fedora :slight_smile:

This is quite an awesome solution. I really like how loosely coupled the services are with the rest of the system. I tinkered around a bit with OctoPrint and WordPress, which I am currently running on my server (Fedora Server, not CoreOS), and this is so much simpler than what I was doing there. I am already thinking about moving all my services to this approach.

But I also have to admit that I am not an expert in this field. My everyday job is to write embedded software for ECUs in agricultural machinery, which is relatively far away from all this server, Podman and network stuff. It is just my hobby. With that being said, I fully agree with this:

If experts had seen some of my configurations, especially those from the early days when I just wanted to get things working, it might have caused them headaches :smiley:

I also fully agree with the other points you replied to.

Happy to hear that you like our approach. If you have any questions or suggestions feel free to ask or to send me a message on matrix (available in my profile). Would be glad to receive some feedback so we can improve it.

Can definitely say that I did the same! Some mistakes are also very easy to make. One of the mistakes I remember making when I configured my first server was setting up a crontab for the root user that executed a script that was writable by an unprivileged user. Something like that you only notice once you have a good understanding of how stuff works.
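That particular mistake is even scriptable to catch. A sketch of such a check (the demo directory and file are created here purely for illustration): anything a root crontab runs should not be writable by group or other, and `find -perm /022` flags exactly that.

```shell
#!/bin/sh
# Flag scripts that non-owners could edit before root's crontab runs them.
set -eu
dir="${TMPDIR:-/tmp}/cron-audit-demo"
mkdir -p "$dir"

# Recreate the mistake for the demo: a world-writable script.
printf '#!/bin/sh\necho maintenance\n' > "$dir/job.sh"
chmod 0666 "$dir/job.sh"

# Any path printed here is code root would execute but that another
# user could rewrite first.
find "$dir" -type f -perm /022 -name '*.sh'
```

Running something like this over the paths referenced from root's crontab would have caught that early-days config immediately.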