Best way to install a software package in an air-gapped environment

I understand that installing software packages directly on the host is discouraged in favor of containers; however, I think I may be working within a unique situation. Please correct me if I’m wrong.

I have an installation of CoreOS in an air-gapped, bare-metal environment where I am attempting to install OpenShift. Installing OpenShift in an environment with no internet access requires creating a mirror registry, which is carried across the network boundary on physical media, for the OpenShift installation to pull images from.

The problem I’m having is this: the steps I took to create the registry on the network-facing machine required apache2-utils, specifically the htpasswd program, to set up the registry authorization. Now that I have transported the registry over to the new environment, it appears that I also need htpasswd installed on this machine in order to set up the new registry into which I will upload the mirrored images, along with its corresponding authorization.

If someone thinks there is a better solution, please let me know. Otherwise, my question is this: what is the best way to package up a tool like apache2-utils and its dependencies in order to transport and install it on a CoreOS machine after the CoreOS installation? Is it possible to do this with toolbox and dnf?

Any guidance or advice would be much appreciated.

Thanks very much.

Hi; presumably you’ve already discovered

What is the best way to package up a tool like apache2-utils and its dependencies, in order to transport and install it on a coreOS machine after the coreOS installation?

Why not make a simple container image for this?

$ cat Dockerfile
FROM fedora:latest
RUN yum -y install apache2-utils && yum clean all

Then mirror that container image the same way as the rest of the containers you need.

Hi; presumably you’ve already discovered

Yes, thanks, I have been following along with their documentation, but ran into some trouble with the authorization section. I wound up getting it to work using this tutorial. It’s certainly possible that my difficulties are due to unnecessary extra steps.

Why not make a simple container image for this?

I may be misunderstanding your process here, but wouldn’t the mirrored container image still need access to the internet to do the installation with the yum command? Could you possibly elaborate on your reasoning here? You’ll have to forgive me, as I’m pretty new to containerization concepts and am still learning.

The idea is to build the image on a connected machine and then replicate the resulting container image inside your disconnected environment.
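For example, a minimal sketch of that workflow with podman (the image and tarball names here are placeholders):

```shell
# On the connected machine: build the image, then export it to a tarball
# that can be carried across the network boundary on physical media.
podman build -t localhost/htpasswd-tools:latest .
podman save -o htpasswd-tools.tar localhost/htpasswd-tools:latest

# On the air-gapped machine: import the tarball back into local storage.
podman load -i htpasswd-tools.tar
```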

I see, so building the image on the connected machine would mean it contains all the content it needs to perform the installation.

I’ll give this a shot. Is the Dockerfile for something like that really as simple as described in the last reply?

$ cat Dockerfile
FROM fedora:latest
RUN yum -y install apache2-utils && yum clean all

Thanks very much to you both.

So I did try this solution, and I can’t say it works as I intended. I had to replace apache2-utils with httpd-tools, which is apparently the package that contains htpasswd on Fedora. After building and running the container, htpasswd is still not available to me from the command line outside of the container, which is the ultimate goal.

I’m not sure if I misunderstood the intended use of the container described here. Does anyone have any thoughts?

If you need to make it available on the host you may have to host your own RPM repo:

$ cat Dockerfile
FROM fedora:latest
RUN dnf install -y nginx dnf-utils createrepo_c && dnf clean all
WORKDIR /srv/repo
RUN yumdownloader httpd-tools arp arp-utils
ENTRYPOINT nginx -g 'daemon off;'

Now you’d need to build this image, deploy it, and expose it via a service / route. Once hosts can reach it, place a repo config pointing to this service in /etc/yum.repos.d, and your hosts can install RPMs via rpm-ostree install httpd-tools in the air-gapped environment.
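The repo config itself is a small file; a sketch, where the hostname mirror.example.internal and the repo id are placeholder values:

```shell
# Write a repo definition pointing at the nginx service hosting the RPMs.
# The hostname and repo id below are placeholder values.
cat > local-mirror.repo <<'EOF'
[local-mirror]
name=Local RPM mirror
baseurl=http://mirror.example.internal/repo
enabled=1
gpgcheck=0
EOF

# On each host, copy it into place and install:
#   sudo cp local-mirror.repo /etc/yum.repos.d/
#   sudo rpm-ostree install httpd-tools
```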

It will not be available directly but you can call it with some filesystem paths shared. For example:

podman run --rm --tty --interactive \
    --security-opt label=disable \
    --volume "${PWD}":/pwd --workdir /pwd \
    localhost/myrepo/myimage:latest \
    htpasswd <args>

This will run htpasswd from the container while sharing the current directory with the container.
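So, for instance, creating a registry credentials file this way might look like the following (the image name, file name, and credentials are example values; -B selects bcrypt, -b takes the password on the command line, -c creates the file):

```shell
# Generate a bcrypt-hashed htpasswd file in the shared current directory.
# The image name, user, and password are example values.
podman run --rm --tty --interactive \
    --security-opt label=disable \
    --volume "${PWD}":/pwd --workdir /pwd \
    localhost/myrepo/myimage:latest \
    htpasswd -Bbc registry.htpasswd myuser mypassword
```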

Alright, I’ll start working on this. Thanks very much. Two small questions:

  1. Might you briefly elaborate on what you mean by “expose it via service / route”? Are you talking about exposing ports when the image is run on the new machine?

  2. Just out of curiosity, do you think it’s necessary to build the image on a network-facing machine running CoreOS specifically, or is it possible to build it on a machine that’s running, say, Fedora 33?

Thanks a lot.

I think I see what’s happening here and how this would allow interaction with the directory I’m in. Using this method would allow me to, say, generate an htpasswd file to be placed at the path I specify? Am I correct in thinking that it still wouldn’t allow access by external programs? My understanding is that htpasswd has to be available in PATH in order to facilitate the authorization of the new registry when I specify the authorization method during podman run. Is my understanding incorrect here, as far as you know? Thanks very much for the input. I will absolutely play around with this.
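If htpasswd really does have to resolve from the host’s PATH, one possible workaround is a small wrapper script that forwards to the container; a sketch, assuming the image name from the earlier example (on Fedora CoreOS, /usr/local/bin sits on the writable /var filesystem):

```shell
# Create a wrapper script so that "htpasswd" on the host delegates to
# the container; the image name is an example value.
cat > htpasswd-wrapper <<'EOF'
#!/bin/bash
exec podman run --rm \
    --security-opt label=disable \
    --volume "${PWD}":/pwd --workdir /pwd \
    localhost/myrepo/myimage:latest \
    htpasswd "$@"
EOF
chmod +x htpasswd-wrapper

# Then move it into PATH:
#   sudo mv htpasswd-wrapper /usr/local/bin/htpasswd
```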

Another follow-up question: during the build process I was getting errors indicating that neither the arp nor the arp-utils package is available. I was able to get past the error for arp by replacing it with net-tools in the Dockerfile, which from what I can tell includes arp. I am, however, unable to find a source for arp-utils. Is it possible that this package has been renamed, or that this was a typo? The closest resource I can find is apr-util, which is an Apache package. Is this the package I need, and if so, is the previous package I need apr and not arp?

Thanks again.

Just a follow-up for anyone looking for this answer later: the dependencies in the Dockerfile are apr, not arp, and apr-util, not arp-utils.
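For completeness, the corrected Dockerfile would then read as follows (the fedora:latest base image is an assumption):

```shell
# Write out the corrected Dockerfile; the base image is an assumption.
cat > Dockerfile <<'EOF'
FROM fedora:latest
RUN dnf install -y nginx dnf-utils createrepo_c && dnf clean all
WORKDIR /srv/repo
RUN yumdownloader httpd-tools apr apr-util
ENTRYPOINT nginx -g 'daemon off;'
EOF
```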

It turns out I was able to solve my main problem before trying to set up the connection between the repository and the host, but I can confirm that the container did acquire all the necessary RPM files for an RPM installation after making those changes.