podman auto-update rolls back automatically if systemd doesn’t signal that the container started successfully. It’s really slick. And now we’re getting a bit into the weeds… the related point is that if we had basic pre-created Fedora containers that automatically received bugfix and security updates in the registry, they could be far more “configure and forget” for many use-cases, which seems genuinely valuable to me.
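For illustration, here is a minimal sketch of how that looks with a Quadlet unit (the file name, image, and service details are just placeholders, not an actual Fedora offering): systemd runs the container from the unit, and `podman auto-update` pulls newer images, restarts the service, and rolls back if the restarted unit fails.

```ini
# ~/.config/containers/systemd/myapp.container — hypothetical example unit
[Unit]
Description=Example auto-updating container

[Container]
# A fully-qualified image reference is required for auto-update to work
Image=registry.fedoraproject.org/fedora-minimal:latest
# Opt this container into registry-based auto-updates
AutoUpdate=registry
# Exec=... your app's entry point would go here

[Service]
# Restart on failure so a broken update is noticed (and rolled back)
Restart=on-failure

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload` and starting the generated `myapp.service`, a run of `podman-auto-update.timer` (or a manual `podman auto-update`) pulls any newer image; if the restarted unit doesn’t come up successfully, podman reverts to the previous image by default (`--rollback` defaults to true).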
[note: this comment is somewhat similar to one I just posted in another discussion, about the programming language stack objective: Objective Review: We integrate programming language stack ecosystems - #3 by thl ]
How well does our release model, with its roots in the 90s (when the internet was young and yum/dnf didn’t even exist), fit what users these days expect from containers and Flatpaks? A lot of people, afaics, want the “latest and greatest upstream version” these days, especially when it comes to containers and Flatpaks, and that will sometimes collide with Fedora’s release cycle and update policies.
That’s among the reasons why I recently proposed a different release model, with releases every few weeks and two distro streams (e.g. an approach similar to that of Firefox and Firefox ESR): knurd: How would you change Fedora, if you became its supreme leader. It would move Fedora somewhat closer to what rolling release distros do, but not make it a real rolling release distro: there would still be a “the distro stabilizes things and makes sure the different parts work well together before a new distro version is released” phase (which would then be the base for containers and Flatpaks).
Flatpaks and containers aren’t necessarily linked to the base OS release model. In fact, if we shift more of our applications to that paradigm, the base OS release model becomes less important. It means we could deliver like you suggest at the application level.
Your proposal relies on an assumption that application delivery and OS delivery are tightly coupled. That doesn’t have to be the case (and probably shouldn’t).
I’m fine with that as well. I agree that I was unclear previously.
I think that the question is more: What makes a container image an “official Fedora container image”?
I think that we need to extend what we consider official Fedora images to include images built in GitLab CI in the Fedora namespace and hosted on Quay.io. Making a new Fedora-based container image would thus become much easier.
We’re not going to attract contributors by using the current infrastructure to build Fedora-based containers, and I think we should stop using it for anything other than the base images.
One of the key value drivers of Flatpaks, as I see it, is that users can consume the “product” from the developers directly. This enables a healthier relationship between the user and the developer, and makes both sides more confident working on the Linux platform.
When Fedora creates a Flatpak based on upstream it “competes” with the developer’s product, leading to confusion for users (and threatening that relationship). As such, I think there needs to be a strong rationale - on an individual package level - to build a competing product.
A great example of this is Firefox - it is unambiguously a better experience for most users to use the Flathub, Mozilla-packaged Firefox Flatpak. This includes key support for formats that Fedora will not package (but many consider essential on the modern web) and it ensures users get updates as soon as Mozilla release them (instead of waiting for Fedora). This leads to an initial user experience on Silverblue where users:
- Try the packaged Firefox RPM, realise it’s “missing” extensions
- Ask for help, and get told to install the Flatpak
- Install the Flatpak, realise it’s still “missing” the extensions
- Ask for help, and get told “no, the other Flatpak”
(Fedora 38 improves this experience a little bit but Fedora Flatpaks are still the default, where they exist)
I understand that Fedora is in a bit of a tight spot here - you can’t ship e.g. a Firefox Flatpak that includes proprietary and/or patented software - but I wonder if the solution is in working with upstream (Flathub) to create a subset of packages that Fedora can consume, and then a set of add-ons that are labelled as more restricted. This would lead to two things:
- Fedora no longer being the only OS without out-of-the-box “official” Flatpaks
- The wider Linux ecosystem maintaining a strong split between FLOSS and non-FLOSS packages
The nightmare scenario is that developers just start telling people not to use Fedora because it breaks their software (which is not plausible now, but may become plausible if every other OS just consumes directly from Flathub).
This is a double-edged sword, though. App developers each have their own ideas about how to treat security and privacy. Does the developer follow good practices and build binaries on a secure buildsystem, or are they made on a laptop in a coffee shop somewhere? Firefox is almost a special case because there is a large open-source organization behind it.
I don’t want to work against app developers at all — but if they’re willing to work with our packagers, we can together provide a reliable, secure, and consistent experience for users.
Fedora provides first-line user support, quality assurance, and more — not just bug reports, but bug reports from people who understand the technology, and which often come with fixes! We find and fix CVEs in not just the applications, but their underlying open source components. We build with the newest compilers, and help fix applications which have real bugs revealed by those newer tools. We make software work across different architectures — something that everyone kind of takes for granted now, but “everything is just basically seamless on ARM” wouldn’t have happened without distros doing heavy lifting.
If an app container differs from the base container only by a `dnf install`, there is no point in choosing the app container over the generic one. You still need to build your own layer.
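To make the comparison concrete: if the “app image” is nothing more than the base image plus a package, anyone can reproduce it with a two-line Containerfile (the package here is just an example), so a pre-built app image of that kind adds little value.

```dockerfile
# Hypothetical "nginx app image" that is nothing but base + dnf install
FROM registry.fedoraproject.org/fedora:latest
RUN dnf install -y nginx && dnf clean all
```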
There is a class of app containers, though, that are not as simple as that. They go the extra mile so that you don’t just get the correct content inside; you also get an API, in the form of environment variables, mount points, and other container metadata, tailored for running that specific app in a cloud environment.
Then you don’t add your configs into the image at all; all your changes live in ConfigMaps and mounted volumes.
I used the postgresql image (Red Hat Ecosystem Catalog) to deploy a test instance of Postgres on Kubernetes this way, and it’s quite nice to be able to skip the container-building step entirely.
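A rough sketch of that pattern, assuming the environment-variable conventions of the Red Hat/sclorg postgresql images (`POSTGRESQL_USER`, `POSTGRESQL_DATABASE`, `POSTGRESQL_PASSWORD`); the resource names like `pg-test` are placeholders, and a real deployment would use a PersistentVolumeClaim rather than `emptyDir`:

```yaml
# Configuration lives outside the image: the image is consumed as-is,
# and all customization happens via a ConfigMap, a Secret, and volumes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: pg-test-config
data:
  POSTGRESQL_DATABASE: testdb
  POSTGRESQL_USER: tester
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pg-test
spec:
  replicas: 1
  selector:
    matchLabels: {app: pg-test}
  template:
    metadata:
      labels: {app: pg-test}
    spec:
      containers:
      - name: postgresql
        image: registry.redhat.io/rhel9/postgresql-15
        envFrom:
        - configMapRef: {name: pg-test-config}   # non-secret settings
        env:
        - name: POSTGRESQL_PASSWORD
          valueFrom:
            secretKeyRef: {name: pg-test-secret, key: password}
        volumeMounts:
        - name: data
          mountPath: /var/lib/pgsql/data
      volumes:
      - name: data
        emptyDir: {}   # throwaway storage, fine for a test instance
```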