Objective Review: Fedora is a popular source for containers and Flatpaks

Can we help improve the Flatpak default configs so that they’re more secure and actually take advantage of the sandboxing? I’m not sure if there is a way to encourage this, but I know a common complaint about Flatpak is that it doesn’t go far enough in its expectations of sandboxing.
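For example (just a sketch, and the app ID below is a placeholder), users can already inspect and tighten an individual app’s permissions today, which is the kind of thing better defaults could make unnecessary:

    # Show the permissions an app was shipped with (org.example.App is hypothetical)
    flatpak info --show-permissions org.example.App

    # Tighten them per-user, e.g. drop home directory access
    flatpak override --user --nofilesystem=home org.example.App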

The options presented here could be good alternatives so that we keep containers and Flatpaks in focus, just with a change to the objectives.


This is mostly FUD spread by websites such as flatkill. There might indeed be some apps with “too much access” on Flathub, but most of the time the access is there either to let the app work as intended or to work around design limitations that would require significant work to lift, such as the introduction of a new desktop portal.

I’m not saying Flatpaks are perfectly sandboxed (they are not), but the point is that they are not installed as root on the system and don’t run random scripts as root during updates. This already makes them hundreds of times more secure and better sandboxed than regular packages that you would download from a random repository.

In some cases in Kinoite, for example, we’re keeping apps in the image because it’s too hard to make them run sandboxed (Dolphin (file manager), Filelight (disk usage viewer)).

If Fedora Flatpaks have stricter configs than the Flathub ones, then it’s likely that some things won’t work in Fedora Flatpaks, and that will give people yet another reason to find them less compelling than the Flathub ones.

Work to improve how apps work with Flatpak is, however, a good idea and will benefit everyone.

I’d say there is value in having well-made images (not running as root, using a configuration that works by default in a container, etc.) for popular applications such as nginx, PostgreSQL, etc.

The main difference with the rest of the artifacts that Fedora produces is that their life cycle is not synced to Fedora releases. You want a PostgreSQL 13, 14 or 15 image, not an image with “the version of PostgreSQL included in Fedora 38”.

That’s why I think it makes sense to have it as a separate / autonomous thing that is not tied to our release cycles, which is more or less the state it’s in right now.


This alone feels like it would be a huge win for both this Objective and possibly other strategy themes (and a load off of the Fedora Infra folks).

Adopting a more well-known and documented way of building container images via GitLab CI would open up the ability for more of the community to help curate and support the “official” Fedora container images.

I disagree — I think we should ship server applications in containers by default, which means we need some way to produce them for ourselves.

I agree more with this, but the release boundaries are also where we change compilers, introduce security hardening features and policy changes, etc., and so it’s actually useful to have both things. (I’d also like to see us produce EPEL containers on UBI.)

This goes right back to what I was saying in the first post in this topic — I hope we can bring some of what we’d wanted to get from Modularity to our users, so they can get the version of PostgreSQL they need, on whatever base they like.

What about a compromise where the base images are built in the existing pipeline, but we make it easier for contributors to maintain and build the server application containers via GitLab? Thankfully there is already a Fedora org in place on GitLab, so it seems possible to create a Containers project under there and let folks contribute.

Kevin agrees that the existing container build pipeline is not ideal and limiting the support for that pipeline to just base images would reduce the burden on the Fedora Infra team.

The existing pipeline needs to be replaced. GitLab CI might be the answer.

I’m quite possibly ‘old man yelling at containers’ here, but I (like lots and lots of other SRE/admin types I interact with) don’t understand why you would seek out and use an ‘apache container’ or an ‘nginx container’ instead of using a base image and layering on the packages/application you need plus all the config you want. I mean, I can see a case for complex-to-set-up applications shipping with everything already set up (say, a matrix-synapse container makes sense to me), but do people really choose specific app containers like this over layering?

And yes, I would be happy to replace our current pipeline, but I don’t think we should focus on the technical replacement here…

I can think of three reasons quickly:

  1. It’s nice to have it all (no pun intended) contained. More lightweight (and energy-efficient) than starting a VM for each service. (And you can manage the configuration on the main host and mount it into the container.)
  2. Addresses “too fast + too slow” by decoupling the base OS from the applications and the applications from each other.
  3. If you get used to working this way, it’s easy to move things to Kubernetes / OpenShift if you want to do that.

Evidence suggests “yes”. :slight_smile:

I am not suggesting using VMs. I mean the difference between:

  1. pull fedora:38/nginx and then add your config to it (I guess by passing options or variables to the container?)
    vs
  2. pull the fedora:38 base container, run ‘dnf -y install nginx’ as the first part of your configuration, then add your config, then use that.

The second one is what I do and see others doing. It means you don’t have to learn how the person who set up the container expects you to configure or set things up; you can just install and configure like you always would.
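Concretely, the second option is only a few lines in a Containerfile (a rough sketch; the config file name and the CMD are just examples):

    # Start from the Fedora base image and layer on what you need
    FROM registry.fedoraproject.org/fedora:38
    RUN dnf -y install nginx && dnf clean all

    # Add your own configuration the same way you would on a regular host
    COPY nginx.conf /etc/nginx/nginx.conf
    CMD ["nginx", "-g", "daemon off;"]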

The ‘too fast + too slow’ issue I don’t see being solved by this either. We have to build the containers from packages, and if we have postgresql-10 and postgresql-11 packages, people could just install the one they want in the base container.

You can totally use a base container + installing what you need + config in OpenShift; this is what we do in Fedora. This also allows you to install N things if you need them instead of running separate containers for each.

But anyhow, I’ll shut up now… perhaps I just need to get used to learning how ‘specialized’ app containers are configured.

Oh, I see! Sorry for misunderstanding. Please disregard my entire reply above. :slight_smile:

Nginx is easily configured with one simple file, so I just do -volume=~/etc/nginx.conf:/etc/nginx/nginx.conf:Z.
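Spelled out as a full command, that’s roughly this (a sketch; the image and port are just examples):

    podman run -d --name nginx -p 8080:80 \
        --volume ~/etc/nginx.conf:/etc/nginx/nginx.conf:Z \
        docker.io/library/nginx:latest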

For the build-on-the-base-image approach, how do you deal with updates? Rebuild periodically? (Genuine question — I am, after all, only a dilettante sysadmin these days!) If, instead, there’s an Official Image, I just use podman auto-update.

Yeah, sorry if I wasn’t clear there at first.

Yeah, rebuild the image… Then if some update causes pain you can roll back to the old one, etc. (which I imagine podman auto-update also lets you do).

podman auto-update rolls back automatically if systemd doesn’t signal that the container started successfully. It’s really slick. And now we’re kind of in the weeds here… the related point is that if we had these basic pre-created Fedora containers that automatically got bugfix and security updates in the registry, they could be a lot more “configure and forget” for many use cases, which seems genuinely valuable to me.
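For anyone curious, the “configure and forget” flow looks roughly like this (a sketch; the image and paths are just examples, and newer podman versions prefer Quadlet over generate systemd):

    # Create the container once, opted in to auto-updates (image is just an example)
    podman create --name web \
        --label io.containers.autoupdate=registry \
        --volume ~/etc/nginx.conf:/etc/nginx/nginx.conf:Z \
        docker.io/library/nginx:latest

    # Generate a systemd unit for it and run it under systemd instead
    podman generate systemd --new --name web > ~/.config/systemd/user/container-web.service
    podman rm web
    systemctl --user daemon-reload
    systemctl --user enable --now container-web.service

    # Later: pull newer images and restart the units; failed updates are rolled back
    podman auto-update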

[note: this comment is somewhat similar to the one I just posted in the discussion about the programming language stack objective: Objective Review: We integrate programming language stack ecosystems - #3 by thl ]

How well does our release model, with its roots in the 90s (when the internet was young and yum/dnf didn’t even exist), fit what users these days expect from containers and Flatpaks? A lot of people, afaics, want the “latest and greatest upstream version”, especially when it comes to containers and Flatpaks, and that will sometimes collide with Fedora’s release cycle and update policies.

That’s among the reasons why I recently proposed a different release model, with releases every few weeks and two distro streams (e.g. an approach similar to that of Firefox and Firefox ESR): knurd: How would you change Fedora, if you became its supreme leader. It would move Fedora somewhat closer to what rolling release distros do, without making it a real rolling release distro: there would still be a “distro stabilizes things and makes sure the different parts work well together before a new distro version is released” phase (which would then be the base for containers and Flatpaks).

Flatpaks and containers aren’t necessarily linked to the base OS release model. In fact, if we shift more of our applications to that paradigm, the base OS release model becomes less important. It means we could deliver at the application level, as you suggest.

Your proposal relies on an assumption that application delivery and OS delivery are tightly coupled. That doesn’t have to be the case (and probably shouldn’t).


I’m fine with that as well. I agree that I was unclear previously.

I think that the question is more: What makes a container image an “official Fedora container image”?

If it’s “hosted in the registry.fedoraproject.org registry” or “built in Fedora infra”, then none of the images hosted on Quay.io right now are official; only the base & toolbox images are.

I think we need to extend what we consider official Fedora images to include images built in GitLab CI in the Fedora namespace and hosted on Quay.io. Making a new Fedora-based container image would then be much easier.
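The job itself wouldn’t need anything exotic; in GitLab CI it essentially boils down to running something like this against a Containerfile in the repo (a sketch; the Quay namespace, image name, and credential variables are all hypothetical):

    # Build and push from a CI job (names and variables are placeholders)
    podman build -t quay.io/fedora/example-app:latest .
    podman login -u "$QUAY_USER" -p "$QUAY_TOKEN" quay.io
    podman push quay.io/fedora/example-app:latest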

We’re not going to attract contributors by using the current infrastructure to build Fedora-based containers, and I think we should stop using it for anything other than the base images.


One of the key value drivers of Flatpaks, as I see it, is that users can consume the “product” from the developers directly. This enables a healthier relationship between the user and the developer, and makes both sides more confident working on the Linux platform.

When Fedora creates a Flatpak based on upstream it “competes” with the developer’s product, leading to confusion for users (and threatening that relationship). As such, I think there needs to be a strong rationale - on an individual package level - to build a competing product.

A great example of this is Firefox - it is unambiguously a better experience for most users to use the Mozilla-packaged Firefox Flatpak from Flathub. It includes key support for formats that Fedora will not package (but that many consider essential on the modern web), and it ensures users get updates as soon as Mozilla releases them (instead of waiting for Fedora). This leads to an initial user experience on Silverblue where users:

  1. Try the packaged Firefox RPM, realise it’s “missing” extensions
  2. Ask for help, and get told to install the Flatpak
  3. Install the Flatpak, realise it’s still “missing” the extensions
  4. Ask for help, and get told “no, the other Flatpak”

(Fedora 38 improves this experience a little bit but Fedora Flatpaks are still the default, where they exist)
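For reference, the workaround people end up being told about boils down to something like this (the remote URL and app ID are the public Flathub ones, as far as I know):

    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.mozilla.firefox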

I understand that Fedora is in a bit of a tight spot here - you can’t ship e.g. a Firefox Flatpak that includes proprietary and/or patented software - but I wonder if the solution is to work with upstream (Flathub) to create a subset of packages that Fedora can consume, plus a set of add-ons that are labelled as more restricted. This would lead to two things:

  • Fedora not being the sole OS without out-of-the-box “official” Flatpaks
  • The wider Linux ecosystem maintaining a strong split between FLOSS and non-FLOSS packages

The nightmare scenario is that developers just start telling people not to use Fedora because it breaks their software (which is not plausible now, but may become plausible if every other OS just consumes directly from Flathub).

This is a double-edged sword, though. App developers each have their own idea of how to treat security and privacy. Does the developer follow good practices and build binaries in a secure build system, or are they made on a laptop in a coffee shop somewhere? Firefox is almost a special case because there is a large open-source organization behind it.

I don’t want to work against app developers at all — but if they’re willing to work with our packagers, we can together provide a reliable, secure, and consistent experience for users.

Fedora provides first-line user support, quality assurance, and more — not just bug reports, but bug reports from people who understand the technology, and which often come with fixes! We find and fix CVEs in not just the applications, but their underlying open source components. We build with the newest compilers, and help fix applications which have real bugs revealed by those newer tools. We make software work across different architectures — something that everyone kind of takes for granted now, but “everything is just basically seamless on ARM” wouldn’t have happened without distros doing heavy lifting.

If an app container differs from the base container only by a dnf install, there is no point in choosing the app container over the generic one. You still need to build your own layer.

There is a class of app containers, though, that are not that simple. They go the extra mile so that you don’t just get the correct content inside: you also get an API in the form of environment variables, mount points, and other container metadata tailored for running that specific app in a cloud environment.

Then you don’t add your configs to the image at all; all your changes live in ConfigMaps and mounted volumes.

I used the postgresql image (Red Hat Ecosystem Catalog) to deploy a test instance of Postgres on Kubernetes this way, and it is quite nice to be able to skip the step of building a container entirely.
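Locally you can get the same experience without building anything, just by feeding the image its documented environment variables (a sketch from memory of that catalog image’s docs, so double-check the variable names and image path):

    podman run -d --name pg -p 5432:5432 \
        -e POSTGRESQL_USER=test \
        -e POSTGRESQL_PASSWORD=secret \
        -e POSTGRESQL_DATABASE=testdb \
        -v pgdata:/var/lib/pgsql/data \
        registry.redhat.io/rhel9/postgresql-15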

