How do you manage your toolbox container?

Hi all,

I have been using Silverblue for a good month now and I end up using toolbox 90% of the time. That leads me to install a lot of packages in the toolbox. With F31 coming soon, I was wondering how I will move to the F31 toolbox container.

How are people managing their toolbox? Also, do you run dnf update inside your toolbox?

Thanks

I'm still working most of the time in the Silverblue command line. I layered a few packages to be able to work on the projects I'm working with on a daily basis, but when doing something new I always switch to toolbox, and the first command I run inside the toolbox is dnf upgrade.


I use Silverblue on my laptop and have my main dev environment in a toolbox container. Thankfully, it gives me everything I need to work on Cockpit, including running VSCode from inside the toolbox and even running test VMs inside the container too!

Another person on the team is also using an rpm-ostree system with his custom tree (bootstrapped from Silverblue) and we made a script together to set up our dev environments.

His script is a little different from mine, but much of it is the same.

Here's my script: Cockpit development container toolbox script · GitHub

More details about my experience using Silverblue to develop Cockpit: Is Cockpit supported? - #6 by garrett

Hopefully the little script I linked above can serve as inspiration. The gist of it is that we create a container with toolbox and tell it to install some packages.

You can have a simple script that you update as you go along. If you install a package inside the toolbox container, just add that to the list of packages your script would install. Then it should be reproducible. (You can test it by using another container name… or be brave and wipe out & reinstall your main container.)
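For instance, a minimal bootstrap script along these lines (the container name and package list are just placeholders) can live in a dotfiles repo and be re-run whenever the container needs recreating:

#!/usr/bin/bash -e

# Name of the toolbox container; "devbox" is just an example
name="devbox"

# Package list; append to this whenever you install something by hand
packages="git make gcc vim-enhanced"

echo "## Creating container $name..."
toolbox create -c "$name"

echo "## Installing packages..."
# $packages is intentionally unquoted so it splits into separate arguments
toolbox run -c "$name" sudo dnf install -y $packages

echo "## Done. Enter it with: toolbox enter -c $name"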


I've been messing around with Silverblue on my home machine. I'm getting new machines at work soon and I wanted everything to be identical, so I've been configuring toolbox containers with ansible. I've got a couple of bootstrap scripts to get things sorted with rpm-ostree, basically installing the Cinnamon desktop and ffmpeg so I can get video running in Firefox, then another script to initialize a couple of toolbox containers: one to act as the ansible control server, while the other is configured as a user environment. The playbook for the control toolbox uses 127.0.0.1 for the host and runs ansible over ssh, installing flatpaks on my laptop and spinning up a couple of pods from podman to run local copies of nexus and jenkins. The playbook for the user toolbox runs ansible over the local connection and installs packages with dnf in the user environment.
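As a rough illustration of that bootstrap flow (the package and container names here are illustrative, not the actual scripts), the host-side part might look something like:

#!/usr/bin/bash -e

# Layer the media bits on the host (assumes RPM Fusion is already enabled;
# rpm-ostree changes take effect after a reboot)
sudo rpm-ostree install ffmpeg

# Create the two containers: one as the ansible "control" node,
# one as the day-to-day user environment
toolbox create -c control
toolbox create -c "$USER"

# Install ansible in the control container so it can drive the rest
toolbox run -c control sudo dnf install -y ansible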

I just got the basic structure worked out this past weekend. It's still a little rough, so unfortunately I don't have anything to show at the moment, but if you're interested I can clean it up a bit and post a working copy. I should be able to get to it next weekend.

-f

I'd love to see some more details on this. I'm new to both Silverblue and Ansible, so I'm looking for more examples of how Ansible/toolboxes/hosts can interact, mostly for simple configuration management purposes.

@flannon Any updates re: the automation scripting you mentioned? I was following the discussion and just now returned to see if you had provided any updates. Greatly appreciate anything that you are willing to share!

Hey @heatmiser,

Thanks for prodding me a bit. I've been meaning to reply for a while but I keep going around in circles. This is still very much a work in progress. I had some stuff working in Fedora SB 30, and now on SB 31 some stuff works better and other stuff is broken. Anyway, have a look here:

https://github.com/flannon/dotfiles-fedora-sb

My initial idea was to build two toolbox containers, control and $USER, and use ansible to provision them. After a while I realized I really only need one container, so now everything runs from the $USER container. The control playbook makes a network connection to 127.0.0.1 and installs flatpaks on the host, while the user playbook makes an ansible local connection, installing packages with dnf in the $USER container.
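In shell terms, the difference between the two runs is roughly this (playbook names and inventory syntax here are illustrative, not the repo's exact layout):

# Control playbook: reaches the host over SSH on loopback and installs flatpaks there
ansible-playbook -i 127.0.0.1, -u "$USER" control.yml

# User playbook: stays inside the $USER container and installs packages with dnf
ansible-playbook -i localhost, -c local user.yml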

There are a few things that need to be done to get started. First there's setup.sh. On SB 30 there wasn't H.264 support for video, so I had to install rpmfusion and compat-ffmpeg28 with rpm-ostree. On SB 31 this is no longer necessary, and I've gotten things worked out so I don't need to install any packages on the base image. So now setup.sh pretty much just configures sshd for remote connectivity on 127.0.0.1. This part worked just fine on SB 30, but for some reason I haven't been able to get remote login working on SB 31. At this point, when I ssh to 127.0.0.1 I can't authenticate, either with an ssh key or with a password, so I still have to sort that out.
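The sshd part amounts to something like the following on the host (a rough sketch, not the exact contents of setup.sh):

# Enable the OpenSSH server on the Silverblue host so the control
# playbook can reach it on 127.0.0.1
sudo systemctl enable --now sshd

# Make sure the key used from inside the container is accepted
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Quick sanity check from inside the container
ssh "$USER"@127.0.0.1 true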

After setup.sh it's time to run errata.sh. I ran into a problem on SB 30 where I was getting a "failed to obtain transaction lock" error that's described in this issue: dnf transaction fails inside Fedora container and corrupts container · Issue #108 · containers/fuse-overlayfs · GitHub. I haven't had time to check whether this is still an issue on SB 31, so for now I'm still running errata.sh.

A couple more steps and we'll get to where things are "automatic". Next up is build.sh, which sets up the $USER container and does the first ansible run. There's also a wrapper for ansible so you can re-run the playbooks by doing ansible-run.sh $USER or ansible-run.sh control.
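The wrapper is just a thin shell around ansible-playbook; something like this sketch (not the exact script):

#!/usr/bin/bash -e

# Usage: ansible-run.sh <name>, e.g. "ansible-run.sh control" or "ansible-run.sh $USER"
name="$1"

# Re-run the matching playbook from inside the $USER container;
# the connection type (ssh vs. local) is assumed to be set in each playbook
toolbox run -c "$USER" ansible-playbook "ansible/${name}.yml"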

At this point I'm pretty much running all my desktop applications from the $USER container, but there are a few idiosyncrasies that come along with the system. I used $USER for the container name, rather than just using the default container, because I built this on my home computer, but now I'm also running it on my workstation and laptop at work, so I like the name difference. But it's a pain to always have to do toolbox enter -c $USER, so in my .bashrc I have alias enter="toolbox enter -c $USER". I've also installed VSCode in the $USER container, but sometimes I want to run it from a regular shell session. I've got it so code will now start anywhere with this alias: alias code='toolbox run -c terrance bash -c "code"'. The other thing I had problems with was vim. I like to alias vi to vim, but in this situation vim only lives in the container. It's a bit of a kludge, but I ended up aliasing vi to v and vim to vi, which works well enough that I'm happy.
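Collected in .bashrc, those two aliases look roughly like this ("terrance" is just the container name from my machine; substitute your own):

# Shorthand for entering the $USER-named container
alias enter="toolbox enter -c $USER"

# Launch VSCode inside the container from any shell
alias code='toolbox run -c terrance bash -c "code"'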

Rather than bloating the playbook I like to write roles and call the role from the playbook, even for silly things like this. So the user role is in dotfiles-fedora-sb/ansible/roles/local/fedora_sb_user. The templates directory has my bashrc and some package repo definitions, and the tasks directory is where most everything happens. When it runs it installs some packages, sets up vim and vundle, and installs vscode, keybase, chrome and some other stuff. It does a decent job for what it is, but I have to say I'm starting to have serious doubts about using toolbox at all. My quibble comes at the point where toolbox presents a completely mutable containerized environment; I'd rather it were variably mutable than completely mutable. With that in mind I've shifted focus and I'm working on using buildah to provision a container image from the ansible user playbook. It's not working yet, but I've got the building-a-container-in-a-container part mostly working now, so I'm hopeful it'll be running before too long.
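The rough shape of the buildah approach (image name and paths here are placeholders, since this part isn't working yet) is:

#!/usr/bin/bash -e

# Start from a plain Fedora image
ctr=$(buildah from registry.fedoraproject.org/fedora:31)

# Put ansible and the playbooks into the working container
buildah run "$ctr" -- dnf install -y ansible
buildah copy "$ctr" ansible/ /srv/ansible/

# Provision the image by running the user playbook locally inside it
buildah run "$ctr" -- ansible-playbook -c local /srv/ansible/user.yml

# Commit the result as a reusable image
buildah commit "$ctr" localhost/fedora-sb-user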

-f

I also use shell scripts to bootstrap my toolboxes. E.g. I have a general init script that installs updates, zsh, nano, etc. and sets my default shell to zsh. If I have a more involved setup for some project, I also write a script to set up my container, so I can generate many containers for the project without any manual work.
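A trimmed-down sketch of that kind of init script (run from inside a freshly created toolbox; exact packages will vary) looks like:

#!/usr/bin/bash -e

# Bring the container up to date
sudo dnf upgrade -y

# Everyday tools
sudo dnf install -y zsh nano

# Make zsh the login shell inside this container
# (chsh/usermod behaviour can vary inside toolbox; adjust if needed)
sudo usermod --shell /usr/bin/zsh "$USER"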

These scripts have saved me several times when toolbox didn't work with old containers and I had to recreate them.

I've shared a bit in this thread above, but wanted to add that since then, I've added some toolbox-based scripts to Cockpit's Jekyll-based website @ cockpit-project.github.io/_scripts at main · cockpit-project/cockpit-project.github.io · GitHub

The biggie is the _scripts/toolbox-create script, which is:

#!/usr/bin/bash -e

echo "## Creating website container..."
toolbox create -c cockpit-website

run="toolbox run -c cockpit-website"

echo "## Installing RPM dependencies inside container..."
$run sudo dnf install -y rubygem-bundler \
	ruby-devel libffi-devel make gcc gcc-c++ redhat-rpm-config zlib-devel \
	libxml2-devel libxslt-devel tar nodejs

echo "## Setting local gem path"
$run bundle config path .gem

echo "## Installing local gems"

$run bundle install

echo "## Done!"
echo "## Run _scripts/toolbox-run"

And then we run it with an even simpler _scripts/toolbox-run script, which also passes flags and other arguments through:

#!/bin/sh

toolbox run -c cockpit-website bundle exec jekyll server "$@"

With these two scripts, I've made sure that we can reliably run the Cockpit website locally for development & testing on all of our machines that have toolbox. (We basically use Fedora.)

What's really nifty is that those of us on Silverblue can run the toolbox-based scripts from within our toolbox development environments and it happens to just work. (In other words, we don't have to switch to the host shell or run any funky flatpak-spawn --host command.)

For what it's worth, we also run our testing VMs inside our toolbox dev environments too. It's amazing how many things work in user containers thanks to toolbox + podman. (Huge thanks, devs!)

I hope these examples help!


So, with your bash script: are you running the container from the host machine without actually entering the container itself? The reason I'm asking is that I deal with a site that I ported to Jekyll, and this might prove handy.

I've actually found that I can run the scripts from my generic developer toolbox and toolbox/podman figures out what to do. It works transparently.