What does Fedora CoreOS mean for Fedora and rkt?

Is it planned to spend more effort on improving the integration between rkt and the entire Fedora ecosystem (Workstation, Server, and CoreOS)?

I ask this in two contexts. First, I still have SELinux pains with systemd-nspawn/machinectl-managed containers, and the last time I tried rkt there were other SELinux problems (though I haven’t looked since F25). So the “spin up a container” promise still isn’t as easy as creating a VM.

Second, spending effort to make rkt a more seamless (see below), production-ready container solution woven into the fabric of the Fedora ecosystem would be very powerful. For example, it would make local desktop security practices more achievable for everyday users, and it would provide a smoother operational transition between local development (Workstation), small production deployments (Server), and massively scalable production environments (CoreOS). I would love to see Fedora be THE place to start hacking on a project, launch a product, and scale up, and I can see rkt playing a large part in that.

By “more seamless” I don’t mean a new GNOME UI like Boxes or anything. I mean fundamental support in SELinux, firewalld, Cockpit, etc., plus official guides and documentation. Essentially I’m suggesting official support across lower-level integrations and project/community fundamentals.

I didn’t see anything in the announcement or its FAQs. Please let me know if I missed it.


@lucab can you speak on this?

I am not aware of any such plan at the moment, no. In general, rkt is now a community project under the CNCF, so its overall development and maintenance is no longer strongly linked to CoreOS (the company, now part of Red Hat).

I think that any contributions regarding all the integration points you mentioned are welcome, assuming that they don’t result in tight coupling and don’t go against rkt design principles (e.g. introducing an overarching daemon).

As a side note, there is nowadays a blooming ecosystem of modern container runtimes (e.g. cri-o and podman). If you feel that rkt is not an exact fit for some of your use cases, my suggestion would be to have a look around and see if any of those suits you better.


hey @erickj - is there any chance you have evaluated podman, or could evaluate it, to see if it could fill the use case you are using rkt for today? If not, let’s get a list of the missing features and discuss whether we can fill those use cases. It’s possible we won’t be able to (architectural limitations), but I don’t see harm in trying.

I think the main nice thing about rkt over other container runtimes I’ve seen is the multi-stage approach: you take rkt (stage 0) and some container image (stage 2), and what you use as stage 1 is something you can decide based on how secure/isolated or lightweight the workload needs to be. Want a more regular container? Use systemd-nspawn. Want something with its own kernel? Use KVM. Doing that is (afaik) not something that you can do with podman right now.
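As a sketch of what that stage 1 choice looks like in practice (the stage1 image name below is the one rkt has shipped historically; whether it is available depends on how your rkt was built and configured):

```shell
# Default stage1 (systemd-nspawn based): an ordinary container
rkt run --insecure-options=image docker://alpine --exec /bin/echo -- hello

# KVM stage1: same image, but the pod gets its own kernel in a lightweight VM
rkt run --stage1-name=coreos.com/rkt/stage1-kvm \
    --insecure-options=image docker://alpine --exec /bin/echo -- hello
```

The application image and invocation are identical in both cases; only the isolation layer underneath changes.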

On the other hand, CoreOS is still listed in the README of rkt, so I can see why it might appear to people to be rather related to Red Hat and, with that, Fedora CoreOS.

CoreOS is currently the primary sponsor of rkt development

Also, with Fedora CoreOS being pitched as a replacement for Container Linux, coming with rkt seems not that far off, considering that Container Linux ships with rkt.

On a second look, depending on how the OCI runtime spec actually works (I don’t know enough about this), podman supports using whatever OCI runtime one wants, so this should actually be possible too, if there were runtimes outside of runc that used systemd-nspawn and KVM, like rkt’s stage1s do.
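For reference, the knob in question is podman’s --runtime flag. This is only a sketch: the nspawn-backed runtime and its path are hypothetical (no such OCI runtime ships today, as far as I know):

```shell
# Default: whatever OCI runtime is configured (typically runc)
podman run --rm docker.io/library/alpine echo hello

# Hypothetical: point podman at an alternative OCI runtime binary
# (/usr/local/bin/oci-nspawn is an invented name for illustration)
podman --runtime /usr/local/bin/oci-nspawn run --rm docker.io/library/alpine echo hello
```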

I’m happy to provide an alternate answer for some advantages I think rkt may have over podman right now:

Integration with other stuff

rkt integrates more nicely with other unixy tools and systemd. Running timeout --foreground 10 rkt run works and knocks over the pod after 10 seconds, but the same doesn’t work with podman.
They both inherit niceness and neither has a monolithic daemon, but rkt doesn’t daemonize, while podman daemonizes off conmon.
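The timeout behaviour itself is easy to check with any foreground command; rkt behaves like a plain foreground process here, which is why timeout can reap it (sleep stands in for rkt run below):

```shell
# timeout sends SIGTERM once the limit expires; exit status 124 means
# the command was killed by timeout rather than exiting on its own
timeout --foreground 1 sleep 5
echo "exit status: $?"
```

With podman the container’s processes end up under conmon, outside timeout’s reach, so the pod keeps running past the deadline.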

rkt also integrates with systemd in some ways, for example:

$ cat > /etc/systemd/system/memlimit-example.service <<EOF
[Service]
MemoryLimit=512M
ExecStart=/usr/local/bin/rkt run --debug --insecure-options=image docker://euank/gunpowder-memhog:latest -- 1G
EOF

$ systemctl start memlimit-example
# memlimit-example will be stopped by systemd due to the memory limit, and systemd
# will correctly clean up all processes, since they remain nested under the cgroup
# systemd made for this service
# journalctl -u memlimit-example will also work correctly

# with podman, MemoryLimit doesn't work, though the processes do somewhat nest
# under the cgroup, so 'systemctl status' shows them in the process tree at least

Another integration rkt has which podman lacks is the machinectl / journalctl integration, though I admit I don’t really use the machined integration at all.


rkt’s packaged for far more distros than podman. On Gentoo, it’s a simple emerge rkt away.
podman, on the other hand, is more complex to install without a package (it needs podman from libpod, conmon from cri-o, CNI plugins, and configuration files in /etc/containers/ before it runs at all).


rkt’s speed is bad for some things, mainly anything dealing with ACIs (image fetching, ACI rendering overhead while running an image).
However, on the flip side, podman ends up being much slower for e.g. podman ps vs rkt list, by about 10x. On my machine at least, it’s quite noticeable.

I had noticed that podman felt sluggish before, but I never dove into it deeply enough to know why. Perhaps I should do so and file an issue with any findings.


I think the main issues are packaging, especially since it requires pieces from 3 different projects, followed by it intentionally mimicking docker’s process/cgroup hierarchy closely (which is probably a plus for people coming from docker, and might be required for some runc/docker features, but is a minus when compared to rkt, I think).


Thanks for the feedback on Podman.

Could you open issues on https://github.com/projectatomic/libpod/issues for the things you would like to see changed in podman?
I would love to dive deeper into how you want it to integrate in systemd unit files. I’m not really sure I understand your first point.

As far as integration with machinectl goes, we tried some stuff here in both docker and podman with oci-register-machine, but machinectl is really concerned with container runtimes that run systemd as pid 1 inside of the container, and it never worked quite right. So we have dropped that support.

As far as packaging for other distros, we are trying to get others to help with this. We currently package it for Fedora/CentOS/RHEL and Ubuntu (in a PPA?) and have to rely on the packaging of CNI, runc, conmon, and content in containers-common as well, which, as you point out, can be difficult.

Our goal, though, is to have podman be part of a universe of different container tools like buildah, cri-o, and skopeo, which can all share the same tools, storage, and configuration. Forcing each one to carry its own copies would make this problematic.

We have worked to improve the performance of podman ps, and would love to have your feedback on that. The problem is that podman ps is probing/exec’ing runc many times to gather the information it is displaying. Perhaps we could build a quick method that retrieves less information but still returns the running containers.

Anyway, any thoughts on improving podman are appreciated.


Just for information, I’ve created a Gentoo ebuild of podman (and some dependencies) in my personal overlay [1].
Any feedback is appreciated :)

[1] https://gitlab.com/kenya888/kenya888-gentoo-repo


@euank lays out concrete cases above that nicely demonstrate my original question, specifically the deeper integration with systemd and machinectl.

AFAICT podman doesn’t have this yet (I’m just getting into looking at podman, so please excuse my ignorance there; as an aside, thanks for the pointer @dustymabe).
