Cannot upgrade from 32.20201104.3.0 to newer release

I have a couple of instances with 32.20201104.3.0 and now would like to upgrade them to at least 33.20210217, or newer.

If I try to upgrade just with a simple “rpm-ostree upgrade”, it fails:

error: Checkout filesystem-3.14-3.fc33.x86_64: opendir(local): No such file or directory

Does anyone have an idea what might be wrong and how I could fix this?
Thanks!

Hi, see https://bodhi.fedoraproject.org/updates/FEDORA-2021-9091468793 and Cannot upgrade Silverblue - #31 by jakfrost

Thanks for your hint.
But these seem to be two totally different things. Which one exactly do you mean?

This is due to the rpmdb move to SQLite. You need to let Zincati perform the upgrade so that it goes through the 33.20201201.3.0 barrier release.

In general, we need to avoid using rpm-ostree upgrade directly for now and let Zincati do the upgrades (in fact in later releases, this requires an override switch). Barrier releases fix or migrate important bits of the OS.

We’re working on improving the UX there so that e.g. rpm-ostree talks to Zincati, but for now the only supported way to upgrade nodes is through Zincati (and the longer a node has gone without upgrading, the more likely it is to hit barrier-related issues like this).
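For context, you can check which version a node is currently booted into (and thus how many barriers it still has to cross) with rpm-ostree itself. These are standard commands, shown here only as a convenience; the jq variant assumes jq is available on the host:

```shell
# Show booted and staged deployments, including their version strings
rpm-ostree status

# Booted version only, extracted from the JSON output (assumes jq is installed)
rpm-ostree status --json | jq -r '.deployments[] | select(.booted) | .version'
```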

Thanks for the information.
We have specifically disabled Zincati auto-updates to avoid sudden reboots of the VMs.
Can I trigger an update manually somehow?
Could I also upgrade to 33.20201201.3.0 explicitly with rpm-ostree, and then continue from there?
I tried rpm-ostree -os= but it doesn’t accept this version string …

You can manually use rpm-ostree deploy $version to upgrade to a specific version.

More generally, it’s awkward but what you could do for now is simply to re-enable Zincati to let it run through the updates and once it’s done you can re-disable it. (Upstream tracker for this is RFE: Allow manual update checks and reboots · Issue #498 · coreos/zincati · GitHub. )
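As a sketch of that workaround, assuming Zincati was disabled via its systemd unit (zincati.service is the standard unit name on Fedora CoreOS; expect the node to reboot into each staged update):

```shell
# Temporarily re-enable Zincati so it can walk the node through
# the barrier releases on its own
sudo systemctl enable --now zincati.service

# Follow its progress; each staged deployment is logged here
journalctl -u zincati.service -f

# Once the node reports the latest release, disable it again
sudo systemctl disable --now zincati.service
```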


I can confirm that going through the barrier release with “rpm-ostree deploy” has worked fine with a test clone of one of my VMs!

E.g. with this sequence:

rpm-ostree deploy 33.20201201.3.0
<reboot>
rpm-ostree upgrade

I got the VM up to date without any problems. I’ll do the same with the production VMs later today or over the next couple of days.

Thanks for your help!

Upgrading the production VMs also worked fine, except that exactly half of the VMs were missing the /run/systemd/resolve/stub-resolv.conf symlink after each upgrade step. But that is easily handled.
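For reference, one way to handle that, assuming the standard systemd-resolved stub setup where /etc/resolv.conf links into /run/systemd/resolve (this is a sketch of a possible fix, not necessarily what was done here):

```shell
# Restarting systemd-resolved regenerates the stub resolver files under
# /run/systemd/resolve
sudo systemctl restart systemd-resolved.service

# If /etc/resolv.conf itself lost its link, point it back at the stub
sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
```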