How do you manage your toolbox container?

Hi all,

I’ve been using Silverblue for a good month now and I end up using toolbox 90% of the time. That has led me to install a lot of packages in the toolbox. With F31 coming soon, I was wondering how I will move to an F31 toolbox container.

How are people managing their toolboxes? Also, do you run dnf update inside your toolbox?

Thanks

I’m still working most of the time in the Silverblue command line. I layered a few packages to be able to work on the projects I’m working with on a daily basis, but when doing something new I always switch to toolbox, and the first command I run inside the toolbox is dnf upgrade.

I use Silverblue on my laptop and have my main dev environment in a toolbox container. Thankfully, it gives me everything I need to work on Cockpit — including running VSCode from inside the toolbox and even running test VMs inside the container too!

Another person on the team is also using an rpm-ostree system with his custom tree (bootstrapped from Silverblue) and we made a script together to set up our dev environments.

His script is a little different from mine, but much of it is the same.

Here’s my script: https://gist.github.com/garrett/ab2d09c9da55353dac63f56409b95369

More details about my experience using Silverblue to develop Cockpit: Is Cockpit supported?

Hopefully the little script I linked above might serve as inspiration. The gist of it is that we create a container with toolbox and tell it to install some packages.

You can have a simple script that you update as you go along. If you install a package inside the toolbox container, just add that to the list of packages your script would install. Then it should be reproducible. (You can test it by using another container name… or be brave and wipe out & reinstall your main container.)
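To make that concrete, a bootstrap script along those lines can be as small as this (the container name and package list here are placeholders, not my actual setup):

```shell
#!/usr/bin/bash -e
# Hypothetical bootstrap sketch -- container name and packages are examples.
toolbox create -c my-dev || true     # no-op if the container already exists
run="toolbox run -c my-dev"
# Keep this list in sync with whatever you dnf install by hand:
$run sudo dnf install -y git make gcc vim
```

Every time you install something by hand inside the container, add it to that dnf line, and recreating the container stays a one-command affair.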


I’ve been messing around with Silverblue on my home machine. I’m getting new machines at work soon and I wanted everything to be identical, so I’ve been configuring toolbox containers with ansible. I’ve got a couple of bootstrap scripts to get things sorted with rpm-ostree, basically installing the Cinnamon desktop and ffmpeg so I can get video running in Firefox, then another script to initialize a couple of toolbox containers: one acts as the ansible control server, while the other is configured as a user environment. The playbook for the control toolbox uses 127.0.0.1 for the host and runs ansible over ssh, installing flatpaks on my laptop and spinning up a couple of pods from podman to run local copies of Nexus and Jenkins. The playbook for the user toolbox runs ansible over the local connection and installs packages with dnf in the user environment.

I just got the basic structure worked out this past weekend. It’s still a little rough so unfortunately I don’t have anything to show at the moment, but if you’re interested I can clean it up a bit and post a working copy. I should be able to get to it next weekend.

-f

I’d love to see some more details on this. I’m new to both Silverblue and ansible, so I’m looking for more examples of how ansible/toolboxes/hosts can interact, mostly for simple configuration management purposes.

@flannon Any updates re: the automation scripting you mentioned? I was following the discussion and just now returned to see if you had provided any updates. Greatly appreciate anything that you are willing to share!

Hey @heatmiser,

Thanks for prodding me a bit. I’ve been meaning to reply for a while but I keep going around in circles. This is still very much a work in progress. I had some stuff working in Fedora SB 30, and now on SB 31 some stuff works better and other stuff is broken. Anyway, have a look here:

https://github.com/flannon/dotfiles-fedora-sb

My initial idea was to build two toolbox containers, control and $USER, and use ansible to provision them. After a while I realized I really only need one container, so now everything runs from the $USER container. The control playbook makes a network connection on 127.0.0.1 and installs flatpaks on the host, while the user playbook makes an ansible local connection, installing packages with dnf in the $USER container.

There are a few things that need to be done to get started. First, there’s setup.sh. On SB 30 there wasn’t H.264 support for video, so I had to install rpmfusion and compat-ffmpeg28 with rpm-ostree. On SB 31 this is no longer necessary, and I’ve gotten things worked out so I don’t need to install any packages on the base image. So now setup.sh pretty much just configures sshd for remote connectivity on 127.0.0.1. This part worked just fine on SB 30, but for some reason I haven’t been able to get remote login working on SB 31. At this point, when I ssh to 127.0.0.1 I can’t authenticate, either with an ssh key or with a password, so I still have to sort that out again.

After setup.sh it’s time to run errata.sh. I ran into a problem on SB 30 where I was getting a “failed to obtain transaction lock” error, which is described in this issue: https://github.com/containers/fuse-overlayfs/issues/108. I haven’t had time to check whether this is still an issue on SB 31, so for now I’m still running errata.sh.

A couple more steps and we’ll get to where things are “automatic”. Next up is build.sh, which sets up the $USER container and does the first ansible run. There’s also a wrapper for ansible so you can re-run the playbooks by doing ansible-run.sh $USER or ansible-run.sh control.

At this point I’m pretty much running all my desktop applications from the $USER container, but there are a few idiosyncrasies that come along with the system. I used $USER for the container name, rather than just using the default container, because I built this on my home computer, but now I’m also running it on my workstation and laptop at work, so I like the name difference. But it’s a pain to always have to do toolbox enter -c $USER, so in my .bashrc I have alias enter='toolbox enter -c $USER'. I’ve also installed VSCode in the $USER container, but sometimes I want to run it from a regular shell session. I’ve got it so code will now start anywhere with this alias: alias code='toolbox run -c terrance bash -c code'. The other thing I had problems with was vim. I like to alias vi to vim, but in this situation vim only lives in the container. It’s a bit of a kludge, but I ended up aliasing vi to v, and vim to vi, which works well enough that I’m happy.

Rather than bloating the playbook, I like to write roles and call the role from the playbook, even for silly things like this. So the user role is in dotfiles-fedora-sb/ansible/roles/local/fedora_sb_user. The templates directory has my bashrc and some package repo definitions, and the tasks directory is where most everything happens. When it runs it installs some packages, sets up vim and vundle, and installs vscode, keybase, chrome, and some other stuff. It does a decent job for what it is, but I have to say I’m starting to have serious doubts about using toolbox at all. My quibble comes at the point where toolbox presents a completely mutable containerized environment; I’d rather it were variably mutable than completely mutable. With that in mind I’ve shifted focus and I’m working on using buildah to provision a container image from the ansible user playbook. It’s not working yet, but I’ve got the building-a-container-in-a-container part mostly working now, so I’m hopeful it’ll be running before too long.

-f

I also use shell scripts to bootstrap my toolboxes. E.g., I have a general init script that installs updates, zsh, nano, etc. and sets my default shell to zsh. If I have a more involved setup for some project, I also write a script to set up my container, so I can generate many containers for the project without any manual work.

These scripts have saved me several times when toolbox didn’t work with old containers and I had to recreate them.
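A sketch of what such a general init script boils down to (the container name default and package list here are examples, not my exact script):

```shell
#!/usr/bin/bash -e
# General toolbox init sketch -- container name and packages are examples.
c="${1:-dev}"                                  # default container name
toolbox run -c "$c" sudo dnf upgrade -y        # install updates first
toolbox run -c "$c" sudo dnf install -y zsh nano
toolbox run -c "$c" sudo usermod -s /usr/bin/zsh "$USER"   # default shell: zsh
```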

I’ve shared a bit in this thread above, but wanted to add that I’ve since added some toolbox-based scripts to Cockpit’s Jekyll-based website @ https://github.com/cockpit-project/cockpit-project.github.io/tree/master/_scripts

The biggie is the _scripts/toolbox-create script, which is:

#!/usr/bin/bash -e

echo "## Creating website container..."
toolbox create -c cockpit-website

run="toolbox run -c cockpit-website"

echo "## Installing RPM dependencies inside container..."
$run sudo dnf install -y rubygem-bundler \
	ruby-devel libffi-devel make gcc gcc-c++ redhat-rpm-config zlib-devel \
	libxml2-devel libxslt-devel tar nodejs

echo "## Setting local gem path"
$run bundle config path .gem

echo "## Installing local gems"

$run bundle install

echo "## Done!"
echo "## Run _scripts/toolbox-run"

And then we run it with an even simpler _scripts/toolbox-run script, which also passes flags and other arguments through:

#!/bin/sh

toolbox run -c cockpit-website bundle exec jekyll server "$@"

With these two scripts, I’ve made sure that we can reliably run the Cockpit website locally for development & testing on all of our machines that have toolbox. (We basically use Fedora.)

What’s really nifty is that those of us on Silverblue can run the toolbox-based scripts from within our toolbox development environments and it happens to just work. (In other words, we don’t have to switch to the host shell or run any funky flatpak-spawn --host command.)

For what it’s worth, we also run our testing VMs inside our toolbox dev environments too. It’s amazing how many things work in user containers thanks to toolbox + podman. (Huge thanks, devs!)

I hope these examples help!


So, with your bash script: are you running the container from the host machine without actually entering the container itself? The reason I’m asking is that I maintain a site I ported to Jekyll, and this might prove handy.

I’ve actually found that I can run the scripts from my generic developer toolbox and toolbox/podman figures out what to do. It transparently works.