During Flock, we discussed with a few CPE members (mainly @zlopez and @humaton) whether Packit could be migrated to a CPE-managed OpenShift cluster. We were asked to communicate this openly, so here it is…
But firstly, why?
Packit’s main focus is Fedora and it mainly integrates with Fedora services, so a CPE-managed cluster might be a better fit than Red Hat-managed OpenShift.
We don’t need access to Red Hat’s internal network.
We would move away from the current firewall setup, which requires every external host we want to connect to (e.g. to download a package archive) to be manually requested.
We hope that having Packit closer to other Fedora services will help it integrate better with the Fedora service family, making it easier for people to accept and understand.
This is one of the steps we are taking to bring Packit closer to the Fedora community. (We also plan and prioritise in public, and our main communication channel is on the Fedora Matrix server.) It is also a reason why we want to use Fedora Discussions more.
CPE could have access to Packit’s deployment in case of any issues.
The current state
Currently, Packit runs on a Red Hat-managed OpenShift cluster and we are not in a hurry to move away.
The Stage instance corresponds to the state of the main branch of all the projects. (It is deployed multiple times a day, automatically after each merged pull request; a sketch of how such an automatic rollout can be wired follows below.)
The Production instance corresponds to the state of the stable branch of all the projects. (The stable branch is moved manually every Monday.)
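For illustration, one common way to wire this kind of automatic rollout on OpenShift (not necessarily how our deployment is set up today; all the names below are made up) is an ImageChange trigger on a DeploymentConfig, so that pushing a fresh image after a merge rolls the pods over:

```yaml
# Illustrative sketch only: an OpenShift DeploymentConfig that redeploys
# automatically whenever a new image is pushed to the "stg" tag.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: packit-service      # hypothetical name
  namespace: packit-stg     # hypothetical namespace
spec:
  replicas: 1
  selector:
    app: packit-service
  template:
    metadata:
      labels:
        app: packit-service
    spec:
      containers:
        - name: packit-service
          image: " "        # resolved by the image change trigger below
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true     # roll out on every new image
        containerNames:
          - packit-service
        from:
          kind: ImageStreamTag
          name: packit-service:stg
```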
Requirements
Luckily, Packit offloads most of the work to other services (e.g. Koji, Testing Farm, Copr, …).
For each instance (production and stage):
current usage: 2 CPUs and 5 GB of memory
PVCs: an unused repository cache and a 4 GB PostgreSQL volume (we will need to enlarge it soon)
API endpoint for (weekly) deployments is needed
separate sandbox namespace (ideally with internet access only, no internal networks; also requires higher privileges to set up); pods are provisioned ad-hoc, and their resources are not included above (see the network-policy sketch right after this list)
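To make the sandbox item above more concrete, here is a minimal sketch of how the “internet only, no internal networks” restriction could be expressed with a standard Kubernetes NetworkPolicy, assuming the cluster’s network plugin enforces them (the namespace name and excluded CIDRs are illustrative):

```yaml
# Illustrative sketch: deny all ingress, allow egress only to public
# addresses (plus DNS), by excluding the private RFC 1918 ranges.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sandbox-internet-only
  namespace: packit-sandbox      # hypothetical namespace
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes:
    - Ingress                    # no ingress rules listed => all ingress denied
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:              # keep internal networks unreachable
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
    - ports:                     # still allow DNS resolution
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```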
Conclusion
Here are the things we are interested in:
Do you think in general it’s a good idea to have Packit hosted there?
If yes, how should we proceed?
Is there anything else you want to know about how Packit manages the setup?
During Flock, we discussed with a few CPE members (mainly @zlopez and @humaton) whether Packit could be migrated to a CPE-managed OpenShift cluster. We were asked to communicate this openly, so here it is…
Hello!
…snip…
Requirements
Luckily, Packit offloads most of the work to other services (e.g. Koji, Testing Farm, Copr, …).
For each instance (production and stage):
current usage: 2 CPUs and 5 GB of memory
PVCs: an unused repository cache and a 4 GB PostgreSQL volume (we will need to enlarge it soon)
API endpoint for (weekly) deployments is needed
separate sandbox namespace (ideally with internet access only, no internal networks; also requires higher privileges to set up); pods are provisioned ad-hoc, and their resources are not included above
We haven’t set up sandbox namespaces before, so that might be a bit of a learning curve on our side. It should be completely doable, I would think.
Conclusion
Here are the things we are interested in:
Do you think in general it’s a good idea to have Packit hosted there?
Sure! As you said, I think this would put Packit closer to the things it’s using and working with.
If yes, how should we proceed?
Is there anything else you want to know about how Packit manages the setup?
How do you currently store secrets/credentials?
Do you have any monitoring in place, either in OpenShift or externally?
I’m sure there will be more things to figure out, but I think it should be completely possible.
I hope that, from your point of view, this will be just a regular namespace we can manage. We use our own sandboxing mechanism in it (GitHub - packit/sandcastle: Run untrusted commands in a sandbox), but ideally we would like to migrate to something more generic, since the current approach does not scale well. (Either a generic OpenShift solution or a service that can provide this, e.g. Testing Farm or Copr workers.) For upstream Copr builds, we already use Copr itself to run user-defined actions instead of this “Sandcastle”.
We use Bitwarden, and updates are done via Ansible playbooks run locally. Here’s the documentation for the team if you want to dive deeper.
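The pattern looks roughly like the sketch below; this is a simplified illustration rather than our actual playbook (the item, key, and namespace names are made up), assuming the community.general Bitwarden lookup, which drives the bw CLI:

```yaml
# Illustrative sketch: pull a credential from Bitwarden and apply it
# as an OpenShift Secret; run locally against the target cluster.
- name: Deploy Packit secrets
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create or update the secret in the target namespace
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          metadata:
            name: packit-secrets      # hypothetical name
            namespace: packit-prod    # hypothetical namespace
          stringData:
            api-token: "{{ lookup('community.general.bitwarden', 'packit-api-token', field='password') | first }}"
```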
We use Prometheus (slightly covered here) and an internal shared Grafana.
We also use Sentry.
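If it helps with planning: on a cluster with the Prometheus Operator available, the scraping side could be a plain ServiceMonitor, roughly like this (all names here are illustrative):

```yaml
# Illustrative sketch: scrape Packit's metrics endpoint every 30 seconds.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: packit-service
  namespace: packit-prod     # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: packit-service    # hypothetical Service label
  endpoints:
    - port: metrics          # named port on the Service exposing /metrics
      interval: 30s
```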
I hope that, from your point of view, this will be just a regular namespace we can manage. We use our own sandboxing mechanism in it (GitHub - packit/sandcastle: Run untrusted commands in a sandbox), but ideally we would like to migrate to something more generic, since the current approach does not scale well. (Either a generic OpenShift solution or a service that can provide this, e.g. Testing Farm or Copr workers.) For upstream Copr builds, we already use Copr itself to run user-defined actions instead of this “Sandcastle”.
What’s the reason for the sandboxing? Do you need privileged containers? Or do you just want to make sure you are separated from any other namespaces as a precaution?
We use Bitwarden, and updates are done via Ansible playbooks run locally. Here’s the documentation for the team if you want to dive deeper.
ok, cool.
We use Prometheus (slightly covered here) and an internal shared Grafana.
We also use Sentry.
It’s to let Packit users run user-definable actions that can be used either as a hook or as a replacement for Packit’s default behaviour… And the thing is that when syncing a new release, we need to combine these user actions with calls that need to be authenticated (like git forge API calls, git commands, and various fedpkg calls). So, basically, we copy the project working directory to the sandbox pod, run the user command, and sync the state of the directory back.
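Purely as an illustration of that flow (this is not the actual sandcastle pod spec; the image, names, and limits below are made up), an ad-hoc sandbox pod could look roughly like this, with the working directory synced in before the user command runs and synced back afterwards:

```yaml
# Illustrative sketch: an unprivileged, resource-limited, throwaway pod
# with a writable volume for the copied-in project working directory.
apiVersion: v1
kind: Pod
metadata:
  generateName: sandcastle-       # one pod per user action
  namespace: packit-sandbox       # hypothetical namespace
spec:
  restartPolicy: Never
  containers:
    - name: sandbox
      image: quay.io/example/sandbox-image:latest  # hypothetical image
      command: ["sleep", "infinity"]  # kept alive; commands run via exec
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]             # no extra privileges requested here
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
      volumeMounts:
        - name: workdir
          mountPath: /workdir       # project directory synced in and back out
  volumes:
    - name: workdir
      emptyDir: {}
```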