Ahh. You know, we are learning more and more about kubevirt ourselves as we go through this. What I’ve found out over time is that a “containerdisk” in kubevirt land isn’t persistent: much like a container’s filesystem, anything that wasn’t written to a Volume is gone when you stop it and start it again. That’s what you are hitting here.
You can instead define a VirtualMachine and use the data importer (CDI) to import the containerdisk into an actual PV, so the OS disk is persistent across restarts. Try something like this:
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fcos
spec:
  runStrategy: Always
  dataVolumeTemplates:
    - metadata:
        name: fcos-os-disk-volume
      spec:
        storage:
          volumeMode: Block
          resources:
            requests:
              storage: 10Gi
          accessModes:
            - ReadWriteOnce
        source:
          registry:
            url: "docker://quay.io/fedora/fedora-coreos-kubevirt:stable"
  template:
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: fcos-os-disk
            - disk:
                bus: virtio
              name: cloudinitdisk
          rng: {}
        resources:
          requests:
            memory: 2048M
      volumes:
        - dataVolume:
            name: fcos-os-disk-volume
          name: fcos-os-disk
        - name: cloudinitdisk
          cloudInitConfigDrive:
            secretRef:
              name: ignition-payload
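Once you apply that, CDI spins up an importer pod to pull the image into the new PV, and the VM boots after the import finishes. You can watch it with something like this (fcos-vm.yaml is just a placeholder filename for the manifest above):

```shell
kubectl apply -f fcos-vm.yaml
# Watch the CDI import populate the PV (phase should eventually reach Succeeded)
kubectl get datavolume fcos-os-disk-volume -w
# After the import completes, the running VM instance shows up here
kubectl get vmi fcos
```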
Depending on your cluster you may need to set storageClassName: <name> under dataVolumeTemplates.spec.storage.
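If you do need to pin the storage class, it would look something like this (my-storage-class is a placeholder; list the real options with kubectl get storageclass):

```yaml
dataVolumeTemplates:
  - metadata:
      name: fcos-os-disk-volume
    spec:
      storage:
        # Placeholder: substitute a storage class that exists in your cluster
        storageClassName: my-storage-class
        volumeMode: Block
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```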