The initial request for the SIG has been approved by the board, so we are now in the process of setting up tools and communication channels. Everyone is welcome to join the group and help shape how we collaborate.
I’ll start. We already deploy Stream in production, but there are pockets of the company with dependencies on third-party ISVs that are dragging their feet on supporting even the latest RHEL, let alone Stream.
Would love to have something to show them that demonstrates why integrating early and often is a win-win for everyone.
Is there a formal automated process to adopt an update in such systems?
Are there any strict criteria, like “a certain piece of third-party software can be installed on the system”, that we can work with, or is it based more on a general gut feeling, somewhat along the lines of “we won’t consider the system stable enough until it is two years old”?
There are also, of course, ISVs who don’t even qualify on EL at all (e.g. in the AOSP world) – they build and test on Ubuntu, and if we get a report of missing libraries we try to get them into EPEL…
On behalf of the Packit team, I am quite interested in the discussions around CentOS integrations.
I would say Packit itself can be called a CentOS integration – you can use our GitHub/GitLab integration to do CentOS Stream builds on your pull requests/commits/releases and run tests on those builds, and we are nearly finished with support for automatic release updates to CentOS Stream dist-git [1].
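For anyone curious what enabling that looks like, a minimal `.packit.yaml` along these lines should be close – the key names below are based on the upstream Packit documentation, but treat this as an illustrative sketch and verify against your Packit version:

```yaml
# Illustrative Packit config: build the package in Copr on each pull
# request for CentOS Stream, then run tests against those builds.
specfile_path: my-package.spec  # hypothetical spec file name

jobs:
  - job: copr_build
    trigger: pull_request
    targets:
      - centos-stream-9-x86_64
  - job: tests
    trigger: pull_request
    targets:
      - centos-stream-9-x86_64
```

The same `jobs` list can be extended with release-triggered jobs once the dist-git update support mentioned above lands.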
Packit itself does not do much alone but uses other services to do the hard work and “integrates” those together. I am not sure if we want to cover here “just” integration with the CentOS Stream dist-git or any CentOS-Stream-related automation/integration…
Topics I’m interested in and would like to see discussed:
Good practices, and shared tooling/approach for integrations.
Bot accounts and their permissions handling.
Particular issues as they arise… (just having a place/channel/group to ask for help).
From the Cloud SIG (OpenStack) side, our main interest at this point is being able to validate the content of new composes (not individual packages) before they are released to CentOS mirrors, by running custom OpenStack deployment jobs, so that we find issues in CentOS Stream or OpenStack/RDO as early as possible and avoid impact on users (operators or developers).
This may be done by using CentOS provided infra and tools or using some third party CI that we’d own (we use Zuul).
Since it is a very specific tag name with low risk of overlap with anything, I think we can make it available beyond the scope of the CentOS category. So if possible, make it available in the Project Discussion as well.
Yes, I think it is in scope. We have three different points where tests can be triggered: when a merge request is created, when a package is built, and when a compose is built. While these points serve different purposes, we can share the tooling and tests between them.
In the ideal case it should be a minimal change to re-target the test pipeline from the package level to the compose level and back, depending on the needs of a specific project.
It is not the case right now, but by collecting the knowledge under one umbrella I hope to get closer to it.
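To make the re-targeting idea concrete, here is a small sketch of one shared test entry point parameterized by trigger type. Every name and URL in it is made up for illustration; the point is only that the artifact under test changes per trigger while the test suite stays the same:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: one shared test suite re-targeted across the
three trigger points (merge request, package build, compose build).
All repo URLs and identifiers below are invented for illustration."""

from dataclasses import dataclass


@dataclass
class TestSubject:
    """What the shared test suite actually runs against."""
    repo_url: str      # dnf repo providing the bits under test
    description: str


def subject_for(trigger: str, artifact_id: str) -> TestSubject:
    # The only part that differs per trigger is where the packages
    # under test come from; the tests themselves are unchanged.
    if trigger == "merge-request":
        return TestSubject(f"https://example.invalid/scratch/{artifact_id}/",
                           f"scratch build for MR {artifact_id}")
    if trigger == "package-build":
        return TestSubject(f"https://example.invalid/builds/{artifact_id}/",
                           f"official build {artifact_id}")
    if trigger == "compose":
        return TestSubject(f"https://example.invalid/composes/{artifact_id}/",
                           f"compose {artifact_id}")
    raise ValueError(f"unknown trigger: {trigger}")


def run_suite(subject: TestSubject) -> bool:
    # Placeholder for the shared tests (install, smoke test, etc.).
    print(f"running shared tests against {subject.description}")
    return True


if __name__ == "__main__":
    run_suite(subject_for("compose", "CentOS-Stream-9-20240101.0"))
```

Collecting this kind of dispatch logic in one place is exactly the sort of shared tooling the umbrella could hold.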
I would be mostly interested in integration with respect to publicly available but out-of-tree kernel modules like Lustre and ZFS, whose build-ability, or even load-ability, I currently test manually when preparing to do updates.
That’s an important point for third-party CI, though:
as a third-party integrating with CentOS Stream you do not have to share the builds you create.
If you build the ZFS modules for yourself, without distributing them, you can still listen to events happening in CentOS Stream, for example new merge requests coming to the kernel rpm. You then set up your infrastructure to run something based on that event, for example rebuilding the ZFS module, and then you work with the outcome.
You can publish the outcome in some form and provide feedback to RHEL kernel maintainers. Maybe your testing would expose an issue in the kernel which should prevent shipping it. But you don’t have to share or distribute the entire setup.
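The private-listener pattern described above can be sketched in a few lines. The event shape and the rebuild command here are assumptions, not any real message-bus API; the point is that everything stays on your own infrastructure:

```python
"""Hypothetical sketch: react to "new merge request on the kernel
package" events and kick off an internal, non-distributed rebuild of an
out-of-tree module. The event dict layout is an assumption for
illustration, not a real CentOS Stream event schema."""

import subprocess


def handle_event(event: dict) -> bool:
    """Return True if the event triggered a rebuild."""
    # Only react to merge requests against the kernel dist-git repo.
    if event.get("type") != "merge_request" or event.get("package") != "kernel":
        return False
    ref = event.get("source_ref", "unknown")
    # The rebuild stays entirely on your own infrastructure; nothing is
    # shipped. Stand-in command instead of a real module build:
    subprocess.run(
        ["echo", f"rebuilding zfs module against kernel {ref}"],
        check=True,
    )
    return True
```

Whatever the rebuild produces, only the resulting feedback (e.g. “this kernel MR breaks our module”) needs to go back to the maintainers.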
And it is OK to run integration scenarios just for yourself, so that you know what is coming in CentOS Stream and RHEL and can prepare your custom stuff for it.
That’s entirely up to that third party – how public they want, or are able, to be.