Troubles with podman / toolbox containers after upgrading to fedora 31

Hi all!

I upgraded to Fedora Silverblue 31 from Fedora Silverblue 30. I’m having issues starting my old containers. The error I’m getting is:

DEBU[0000] Received: -1
ERRO[0000] oci runtime "runc" does not support CGroups V2: use system migrate to mitigate
DEBU[0000] Cleaning up container f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] unmounted container "f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203"
ERRO[0000] unable to start container "test_container": time="2019-10-30T17:18:02+01:00" level=error msg="this version of runc doesn't work on cgroups v2"
this version of runc doesn't work on cgroups v2: OCI runtime error

How do I “migrate” my container? And what does migrating a container mean?

Thanks in advance!!

Hi @florianlackner - can you check whether this common bugs entry applies to you? If so, try the workaround.

I have the same issue.
Running podman system migrate does nothing.

thank you for your answer!

The mentioned bug relates to docker. The proposed workaround is to use podman. I am already using podman (because toolbox uses podman under the hood), so unfortunately this known bug does not apply to my issue.

Sorry, I missed that in the title. cc @baude

Hi - Podman dev here.

We messed up here, and the error message does not include the full command necessary to fix this. It’s not just podman system migrate but podman system migrate --runtime=crun. Give that a try, and things should start working. We’ll see about getting the error message updated.
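To illustrate the situation, here is a minimal sketch (assuming a Linux host; podman itself is not invoked) that checks which cgroups hierarchy the host is running and prints the migrate command from above. The presence of /sys/fs/cgroup/cgroup.controllers is the conventional marker of a unified (v2) hierarchy:

```shell
#!/bin/sh
# Sketch only: detect the host's cgroups version and print the fix
# suggested above. No podman commands are actually executed here.

if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    cgv="v2"   # unified hierarchy: the runc shipped in F31 cannot drive this
else
    cgv="v1"
fi
echo "host is on cgroups $cgv"

# "system migrate" rewrites the stored configuration of existing
# containers; with --runtime it also switches their OCI runtime.
fix_cmd="podman system migrate --runtime=crun"
echo "suggested fix: $fix_cmd"
```

Running this on a freshly upgraded Fedora 31 host should report cgroups v2, which is exactly why the old runc-backed containers refuse to start.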


Thank you very much for your reply! I ran the above command (slightly modified: --new-runtime=crun instead of --runtime) and it fixed that error.

But I’m getting a new error:

DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -s -c f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 -u f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 -r /usr/bin/crun -b /var/home/me/.local/share/containers/storage/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata -p /tmp/1000/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata/pidfile -l k8s-file:/var/home/me/.local/share/containers/storage/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog --conmon-pidfile /tmp/1000/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/home/me/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1000 --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: -1
DEBU[0000] Cleaning up container f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] unmounted container "f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203"
ERRO[0000] unable to start container “test_container”: sd-bus call: Invalid argument: OCI runtime error

Any ideas?

What Podman and crun packages do you have installed (rpm -q podman and rpm -q crun)? This sounds similar to a crun issue we fixed a few weeks back.

% rpm -q podman
podman-1.6.2-2.fc31.x86_64
% rpm -q crun
crun-0.10.2-1.fc31.x86_64
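As an aside, a quick way to check whether an installed version predates a fix release is sort -V, which orders version strings numerically. A sketch follows; the "0.10.6" below is a placeholder for illustration, since the thread does not name the exact crun release carrying the fix:

```shell
#!/bin/sh
# Sketch: compare the installed crun version against a fix release.
# "0.10.6" is a hypothetical placeholder, not a confirmed fix version.
installed="0.10.2"
fixed="0.10.6"

# sort -V sorts version strings numerically; the older version sorts first.
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "crun $installed predates $fixed: an update may carry the fix"
fi
```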

Do you see any errors in journalctl --user?

@giuseppe: I only get one line:

Okt 31 17:36:24 localhost.localdomain podman[29744]: 2019-10-31 17:36:24.165454064 +0100 CET m=+0.084568718 container cleanup f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 (image=registry.fedoraproject.org/f30/fedora-t>

How did you create the user session? Did you log in as root and use sudo afterwards?

A few things to check:

Does /run/user/$UID/bus exist?

Could you post the full "podman --log-level debug start ..." output?

I’m not sure what you mean by that. I use toolbox to manage my containers. The containers in question were created on Fedora 30 with the Fedora 30 base image.

% ls /run/user/$UID/bus
/run/user/1000/bus
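For completeness, a small sketch of that check: the session bus lives at a path derived from your UID and should be a socket, not a regular file (this only inspects the path and assumes a systemd user session):

```shell
#!/bin/sh
# Sketch: locate and type-check the per-user session D-Bus socket.
bus="/run/user/$(id -u)/bus"

# -S tests for a socket; a plain -e/-f check would miss a stale file.
if [ -S "$bus" ]; then
    status="socket present"
else
    status="socket missing (no systemd user session?)"
fi
echo "$bus: $status"
```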

% podman --log-level debug start test_container
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/me/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/home/me/.local/share/containers/storage 
DEBU[0000] Using run root /tmp/1000                     
DEBU[0000] Using static dir /var/home/me/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/me/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
INFO[0000] running as rootless                          
DEBU[0000] using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/me/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/home/me/.local/share/containers/storage 
DEBU[0000] Using run root /tmp/1000                     
DEBU[0000] Using static dir /var/home/me/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/me/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] overlay: mount_data=lowerdir=/var/home/me/.local/share/containers/storage/overlay/l/LLJIZ7WBYGQ34O5RP7E7U4FIXD:/var/home/me/.local/share/containers/storage/overlay/l/H5MIMLH3E3F73PCPDIHMX6LITR,upperdir=/var/home/me/.local/share/containers/storage/overlay/e0d0048562fd7ade40d2e07f563e5529113dbf035ea4820cada38f513a28daa8/diff,workdir=/var/home/me/.local/share/containers/storage/overlay/e0d0048562fd7ade40d2e07f563e5529113dbf035ea4820cada38f513a28daa8/work,context="system_u:object_r:container_file_t:s0:c646,c891" 
DEBU[0000] mounted container "f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203" at "/var/home/me/.local/share/containers/storage/overlay/e0d0048562fd7ade40d2e07f563e5529113dbf035ea4820cada38f513a28daa8/merged" 
DEBU[0000] Created root filesystem for container f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 at /var/home/me/.local/share/containers/storage/overlay/e0d0048562fd7ade40d2e07f563e5529113dbf035ea4820cada38f513a28daa8/merged 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroups for container f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 to libpod_parent:libpod:f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 
DEBU[0000] set root propagation to "rslave"             
DEBU[0000] Created OCI spec for container f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 at /var/home/me/.local/share/containers/storage/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -s -c f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 -u f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 -r /usr/bin/crun -b /var/home/me/.local/share/containers/storage/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata -p /tmp/1000/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata/pidfile -l k8s-file:/var/home/me/.local/share/containers/storage/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog --conmon-pidfile /tmp/1000/overlay-containers/f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/home/me/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1000 --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: -1                                 
DEBU[0000] Cleaning up container f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] unmounted container "f9159d6da0d05393e136d274a1a95dce5b53f0f077ffc8969846d0edacbf8203" 
ERRO[0000] unable to start container "test_container": sd-bus call: Invalid argument: OCI runtime error

I’m getting this same error: sd-bus call: Invalid argument: OCI runtime error. I’m not able to start my toolboxes. At least I can create a new F31 toolbox and work with that, but it would be very nice to start the several F30 toolboxes I have. One weird thing I noticed, maybe totally inconsequential: when trying to start a previous toolbox, the debug output says toolbox: base image is fedora-toolbox:31, even though the toolbox was created on F30.

[bkelly@xps13 ~]$ toolbox -v enter -c dev
toolbox: running as real user ID 1000
toolbox: resolved absolute path for /usr/bin/toolbox to /usr/bin/toolbox
toolbox: checking if /etc/subgid and /etc/subuid have entries for user bkelly
toolbox: TOOLBOX_PATH is /usr/bin/toolbox
toolbox: running on a cgroups v2 host
toolbox: current Podman version is 1.6.2
toolbox: migration not needed: Podman version 1.6.2 is unchanged
toolbox: Fedora generational core is f31
toolbox: base image is fedora-toolbox:31
toolbox: container is dev
toolbox: checking if container dev exists
toolbox: calling org.freedesktop.Flatpak.SessionHelper.RequestSession
toolbox: starting container dev
toolbox: /etc/profile.d/toolbox.sh already mounted in container dev
Error: unable to start container "dev": sd-bus call: Invalid argument: OCI runtime error
toolbox: failed to start container dev
[bkelly@xps13 ~]$

Check that this file is owned by your user and has write permission set.

% ls -al /proc/self/oom_score_adj
-rw-r--r--. 1 me me 0  7. Nov 17:59 /proc/self/oom_score_adj
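For background on that Permission denied: the file is owned by the process itself and is writable, but lowering the score below its current value requires CAP_SYS_RESOURCE, which a rootless process normally lacks; that is why conmon's write fails even though ls shows write permission. A sketch that just reads the current value (writing a lower one is what conmon attempts):

```shell
#!/bin/sh
# Sketch: read the current OOM score adjustment for this process.
# Reading always works; writing a value lower than the current one
# needs CAP_SYS_RESOURCE, which is why rootless conmon gets EPERM
# regardless of the file's mode bits.
cur=$(cat /proc/self/oom_score_adj)
echo "current oom_score_adj: $cur"
```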

Check this: Can't get toolbox working in Fedora Silverblue Beta 31 - #26 by returntrip

Thank you - this worked for me, and the container created on the old version of podman now starts:
sudo podman system migrate --new-runtime crun