Unable to move podman containers to new location on CoreOS

Hello. I want to move both root and rootless Podman containers on Fedora CoreOS from my root drive to my high-endurance NVMe drive, which is mounted at /var/mnt/POOL-NVME. I want the container storage to live in the directory /var/mnt/POOL-NVME/Podman/Storage.

I adjusted the configuration file /etc/containers/storage.conf to contain the following:

[storage]
driver = "overlay"
root = "/mnt/POOL-NVME/Podman/"
graphroot = "/mnt/POOL-NVME/Podman/storage"
runroot = "/mnt/POOL-NVME/Podman/storage/run"
libpodroot = "/mnt/POOL-NVME/Podman/storage/libpod"

Then I relabeled the SELinux context to match the previous directory:

semanage fcontext -a -e /var/lib/containers /mnt/POOL-NVME/Podman/
restorecon -Rv /mnt/POOL-NVME/Podman/

But now, after I restart Podman and run podman info, I get the following error:

podman info

WARN[0000] Failed to decode the keys ["storage.root" "storage.libpodroot"] from "/etc/containers/storage.conf" 
Error: database static dir "/var/lib/containers/storage/libpod" does not match our static dir "/var/mnt/POOL-NVME/Podman/storage/libpod": database configuration mismatch

Is it possible to change the directory in which Podman containers are stored and run on Fedora CoreOS? Ideally I wanted to move the entire /var directory to the NVMe drive, but that doesn’t seem to be supported because of how CoreOS handles /var, so I settled on just moving the Podman containers to the NVMe drive even though that’s not the ideal setup. Thanks for reading.
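For context, the warning in that output means that root and libpodroot are not keys the containers-storage library recognizes; graphroot and runroot are the documented ones. The "database configuration mismatch" error comes from Podman's libpod database, which was created while the old paths were in effect and still records them. A minimal sketch of one destructive way to reconcile this, assuming nothing in the existing storage needs to be preserved (podman system reset deletes all existing containers, images, networks, and volumes):

# /etc/containers/storage.conf using only recognized keys, pointed at the new drive
[storage]
driver = "overlay"
graphroot = "/var/mnt/POOL-NVME/Podman/Storage/graphroot"
runroot = "/run/containers/storage"

# Then wipe Podman's stored state so the libpod database is recreated
# against the new paths (destructive, see warning above):
sudo podman system reset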

You can mount storage from a separate disk. The file system format must be ext4, btrfs, xfs, or vfat.

As already stated in the previously referenced post:

Fedora CoreOS does not include support for ZFS, so it won’t be able to mount it.

If you provide us with the output of the lsblk --paths --fs --all command, we can try to assemble a similar disk/partition layout and give a sample Butane config.
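For illustration only, a minimal sketch of such a Butane fragment, formatting a spare disk and mounting it under /var/mnt (the device name /dev/vdb, the label, and the ext4 format are placeholders, not taken from this thread):

variant: fcos
version: 1.5.0
storage:
  filesystems:
    - device: /dev/vdb              # placeholder: the spare disk or partition
      format: ext4
      wipe_filesystem: false
      label: POOL-NVME
      path: /var/mnt/POOL-NVME
      with_mount_unit: true         # generate the systemd mount unit automatically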

lsblk --paths --fs --all

NAME                    FSTYPE            FSVER LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
/dev/loop0              erofs                                                                                   
/dev/sda                                                                                                        
├─/dev/sda1                                                                                                     
├─/dev/sda2             vfat              FAT16 esp-2       A037-F484                                           
├─/dev/sda3             linux_raid_member 1.0   any:md-boot b36b9248-2d9a-974d-cea2-cc44f352f46e                
│ └─/dev/md126          ext4              1.0   boot        f45bdf49-522b-4390-9ab0-30c6c6e43e8f  103.4M    64% /boot
└─/dev/sda4             linux_raid_member 1.2   any:md-root b5addcf6-f452-0010-a171-60854f68cfd3                
  └─/dev/md127          crypto_LUKS       2     luks-root   0768c721-8b50-4076-b324-8880743a95f3                
    └─/dev/mapper/root  ext4              1.0   root        49b2cc11-fcd8-4e68-8d2e-aa1148707eee  212.2G     2% /var/lib/containers/storage/overlay
                                                                                                                /var
                                                                                                                /sysroot/ostree/deploy/fedora-coreos/var
                                                                                                                /etc
                                                                                                                /sysroot
/dev/sdb                                                                                                        
├─/dev/sdb1                                                                                                     
├─/dev/sdb2             vfat              FAT16 esp-1       A016-A386                                           
├─/dev/sdb3             linux_raid_member 1.0   any:md-boot b36b9248-2d9a-974d-cea2-cc44f352f46e                
│ └─/dev/md126          ext4              1.0   boot        f45bdf49-522b-4390-9ab0-30c6c6e43e8f  103.4M    64% /boot
└─/dev/sdb4             linux_raid_member 1.2   any:md-root b5addcf6-f452-0010-a171-60854f68cfd3                
  └─/dev/md127          crypto_LUKS       2     luks-root   0768c721-8b50-4076-b324-8880743a95f3                
    └─/dev/mapper/root  ext4              1.0   root        49b2cc11-fcd8-4e68-8d2e-aa1148707eee  212.2G     2% /var/lib/containers/storage/overlay
                                                                                                                /var
                                                                                                                /sysroot/ostree/deploy/fedora-coreos/var
                                                                                                                /etc
                                                                                                                /sysroot
/dev/sdc                crypto_LUKS       2                 17035a7f-38fa-49f8-b60b-7a00af951e07                
└─/dev/mapper/LUKS-NVR1 zfs_member        5000  POOL-NVR    2476206375860892766                                 
/dev/sdd                crypto_LUKS       2                 81b553bb-f843-4128-8269-fd4ff04c6fa1                
└─/dev/mapper/LUKS-NAS1 zfs_member        5000  POOL-NAS    13961303981013571736                                
/dev/sde                crypto_LUKS       2                 717c3840-7e95-424c-b445-dda843e877d1                
└─/dev/mapper/LUKS-NAS2 zfs_member        5000  POOL-NAS    13961303981013571736                                
/dev/sdf                crypto_LUKS       2                 61923592-d91a-4ce6-a243-a1c740fe28c5                
└─/dev/mapper/LUKS-NAS3 zfs_member        5000  POOL-NAS    13961303981013571736                                
/dev/nvme0n1            crypto_LUKS       2                 781814d8-fb20-4e5d-8a1e-4ee4f8270595                
└─/dev/mapper/LUKS-NVME zfs_member        5000  POOL-NVME   7728821665455308809 

I have ZFS running on this machine, and that doesn’t seem to be an issue. Is there something unique about ZFS that makes it incompatible with traditional methods of moving files once it’s running?

Could you also please provide the output from the sudo rpm-ostree status command?

rpm-ostree status

State: idle
AutomaticUpdates: stage; rpm-ostreed-automatic.timer: no runs since boot
Deployments:
● ostree-image-signed:docker://ghcr.io/secureblue/securecore-zfs-main-userns-hardened:latest
                   Digest: sha256:5d19146c1dcc6bafc1c438e9170b42f07553dc0028ce105298082187d58f7aca
                  Version: 41.20241122.2.0 (2024-12-07T07:43:26Z)
      RemovedBasePackages: nfs-utils-coreos 1:2.8.1-1.rc1.fc41
          LayeredPackages: cockpit-file-sharing cockpit-files cockpit-machines cockpit-networkmanager cockpit-ostree cockpit-podman cockpit-sosreport cockpit-storaged cockpit-system libvirt-dbus nfs-utils virt-install virt-manager
                Initramfs: regenerate

As you can see from this output, and as I stated in the previous thread, I am running a fork of Fedora CoreOS called Secureblue. There shouldn’t be so many changes that what I’m trying to do here would work differently than on a vanilla CoreOS installation (especially once I have the drives mounted properly).

Now, I already know what you’re going to say. In fact, I’m going to post what you’re going to say, encrypted with AES; I’ll give out the password later so I can show that I predicted it:

This is the tool I used: https://www.iodraw.com/en/tool/encrypt

U2FsdGVkX1/ZmKqrwu/QxfN3Gv44OcRfisc1zPQEioJWaQ4VFlGVaXReVC3HAHzxTKtebKPSjaCbgZ5YFpEflrFJ5GvNdgCW2VSO0/SZtnFw2z8SS4v4m4vlqIdHuavHUU2WfqOqCZ/2QpXMNzwzmuWhkK9p9EwvioOZ2/E/xBUZYjE3xQqbjFnVLYomfpAN6beim1VK9sO1plRioqIcNgpp2Wadp5ny77CVvWBrCPAJq3v1tMybU14isV05SqMOv60Zzei2XpIQ6O1+6SOzaAbPMr1FDLfzPnDCmwccWR7h638i6FZ2bel1wkHrQ8InTS/6wgtPMmwG6UkpSwDtC74PUapjVaDQG69kSFsYTQJ3sDSPky9xpdKPqTDhlP+ZLLtUzsm0WHtYdulBXsiqOfkLp+wA0AF20iRlqR3+yqeAEbt/3C1Yyn3Mdn/HBLjRWn8B9FMlFAE0+RnsKI4Sf2lkRW3+IawkM6a9Cm3tY7K04MBCfuto1L1ptC9ldLr6PQY97s8JhCNJwhpeVcGXua936izPei7olxVKdnV1FcZqVG8aL5pPg05XbRbi4umxuql54B14GcrNMfGQKSbFHoqi2E6Hd/UJpQru3MSD4s0SlzkDUNUAi6Wl70o5hGLN

Unfortunately, I am not familiar with either the securecore-zfs-main-userns-hardened image or the ZFS file system, so I will not be able to completely replicate your setup.

However, what I can do is try to provide a sample config for the btrfs file system. You can then try adapting and testing it on your system. If this works for you, please provide your current Butane config so I can use it as a reference.

Looking at your initial post and setup, I created a simplified VM environment, similar to yours, using the uCore image.
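The rebase step itself is not shown in the thread; presumably the VM was switched to that image with something along the lines of:

sudo rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/fedora-coreos:stable-zfs
sudo systemctl reboot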

core@localhost:~$ sudo rpm-ostree status 
State: idle
Deployments:
● ostree-image-signed:docker://ghcr.io/ublue-os/fedora-coreos:stable-zfs
                   Digest: sha256:b7d0ab6bc055ecccf768732c210bb82058eb4f2acc66f2ae1b8796776ceac840
                  Version: 41.20241109.3.0 (2024-12-09T03:12:49Z)

core@localhost:~$ sudo zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
POOL-NVME  6.05M  9.20G  5.81M  /mnt/POOL-NVME

core@localhost:~$ sudo zpool status -LP
  pool: POOL-NVME
 state: ONLINE
config:

	NAME         STATE     READ WRITE CKSUM
	POOL-NVME    ONLINE       0     0     0
	  /dev/vdb1  ONLINE       0     0     0

errors: No known data errors
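The pool creation is likewise not shown; a pool with that layout and mountpoint could presumably be created with something like this (the partition name /dev/vdb1 is a placeholder):

sudo zpool create -m /mnt/POOL-NVME POOL-NVME /dev/vdb1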

Then I created the file /etc/containers/storage.conf with the following content.

core@localhost:~$ cat /etc/containers/storage.conf
# This file is the configuration file for all tools
# that use the containers/storage library. The storage.conf file
# overrides all other storage.conf files. Container engines using the
# container/storage library do not inherit fields from other storage.conf
# files.
#
#  Note: The storage.conf file overrides other storage.conf files based on this precedence:
#      /usr/containers/storage.conf
#      /etc/containers/storage.conf
#      $HOME/.config/containers/storage.conf
#      $XDG_CONFIG_HOME/containers/storage.conf (If XDG_CONFIG_HOME is set)
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.

[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/containers/storage"

# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure  the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
#graphroot = "/var/lib/containers/storage"
graphroot = "/mnt/POOL-NVME/Podman/Storage/graphroot"

# Storage path for rootless users
#
# rootless_storage_path = "$HOME/.local/share/containers/storage"
rootless_storage_path = "/mnt/POOL-NVME/Podman/Storage/rootless_storage_path"
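Before restarting Podman, the new locations presumably also have to exist and carry the labels described in the comments above; a sketch, with paths mirroring the config and the core user assumed as the rootless user:

sudo mkdir -p /mnt/POOL-NVME/Podman/Storage/graphroot /mnt/POOL-NVME/Podman/Storage/rootless_storage_path
sudo chown core:core /mnt/POOL-NVME/Podman/Storage/rootless_storage_path   # so the rootless user can write there
sudo semanage fcontext -a -e /var/lib/containers/storage /mnt/POOL-NVME/Podman/Storage/graphroot
sudo restorecon -R -v /mnt/POOL-NVME/Podman/Storage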

This resulted in the following info about the Podman system.

core@localhost:~$ podman info --format json | jq --raw-output '.store'
{
  "configFile": "/var/home/core/.config/containers/storage.conf",
  "containerStore": {
    "number": 0,
    "paused": 0,
    "running": 0,
    "stopped": 0
  },
  "graphDriverName": "overlay",
  "graphOptions": {},
  "graphRoot": "/var/mnt/POOL-NVME/Podman/Storage/rootless_storage_path",
  "graphRootAllocated": 9881518080,
  "graphRootUsed": 6160384,
  "graphStatus": {
    "Backing Filesystem": "zfs",
    "Native Overlay Diff": "true",
    "Supports d_type": "true",
    "Supports shifting": "false",
    "Supports volatile": "true",
    "Using metacopy": "false"
  },
  "imageCopyTmpDir": "/var/tmp",
  "imageStore": {
    "number": 1
  },
  "runRoot": "/run/user/1000/containers",
  "volumePath": "/var/mnt/POOL-NVME/Podman/Storage/rootless_storage_path/volumes",
  "transientStore": false
}

core@localhost:~$ sudo podman info --format json | jq --raw-output '.store'
{
  "configFile": "/etc/containers/storage.conf",
  "containerStore": {
    "number": 0,
    "paused": 0,
    "running": 0,
    "stopped": 0
  },
  "graphDriverName": "overlay",
  "graphOptions": {},
  "graphRoot": "/var/mnt/POOL-NVME/Podman/Storage/graphroot",
  "graphRootAllocated": 9881518080,
  "graphRootUsed": 6160384,
  "graphStatus": {
    "Backing Filesystem": "zfs",
    "Native Overlay Diff": "true",
    "Supports d_type": "true",
    "Supports shifting": "true",
    "Supports volatile": "true",
    "Using metacopy": "false"
  },
  "imageCopyTmpDir": "/var/tmp",
  "imageStore": {
    "number": 1
  },
  "runRoot": "/run/containers/storage",
  "volumePath": "/var/mnt/POOL-NVME/Podman/Storage/graphroot/volumes",
  "transientStore": false
}

Running a simple Podman container, e.g. podman container run --rm -it docker.io/library/busybox, seems to work.
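One way to double-check that image data actually lands on the new drive (paths match the config above) would be:

sudo podman run --rm docker.io/library/busybox echo hello
sudo ls /var/mnt/POOL-NVME/Podman/Storage/graphroot/overlay   # pulled image layers should show up here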