In Fedora 25 and older, Docker used the devicemapper storage driver by default. In Fedora 26 the default changed to overlay2. If that system had been upgraded from an older release, it may still have been using devicemapper, while the new install uses overlay2.
Maybe you had also been intentionally using a different driver.
Check your backups for an /etc/docker/daemon.json; if present, compare its storage-driver value with the output of docker info.
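A quick way to do that comparison, as a sketch (paths are the Docker defaults; adjust if your backup of daemon.json lives elsewhere):

```shell
# Show the configured storage driver, if one is set in daemon.json...
if [ -f /etc/docker/daemon.json ]; then
    grep -i 'storage-driver' /etc/docker/daemon.json
fi

# ...and the driver the running daemon actually reports.
docker info 2>/dev/null | grep -i 'Storage Driver'
```

If the two disagree (or daemon.json sets one driver and a startup flag sets another), you get exactly the "specified both as a flag and in the configuration file" error shown below.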
Dec 18 16:33:41 phil.pricom.com.au dockerd-current[11093]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: devicemapper)
The only place I can find mention of "overlay2" is in:
devicemapper/devicemapper/data
so I am not sure where the flag is . .
While I was waiting I recycled an old 500GB SATA drive and installed F25 from LiveUSB. After installing everything else, I copied my backup docker dirs over, started docker happily, and found that I could then list all my images and containers! However, when I try to start any of the containers I get something like:
Error response from daemon: updating the store state of sandbox failed: failed to update store for object type *libnetwork.sbState: json: cannot unmarshal object into Go value of type string
Error: failed to start containers: jekyll_baco
So now I have two machines, F25 and F29, almost working but not quite . . but I seem to be getting closer!
Hmm, just to confirm something a sec first: if you grab a config.json from any container in /var/lib/docker/containers, what storage driver does it use?
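Something like this should show it, assuming the driver is recorded in each container's config file (the file is named config.v2.json on newer Docker releases, config.json on older ones; run as root since /var/lib/docker is restricted):

```shell
# Pull the "Driver" field out of every container's config file.
# The glob covers both config.json and config.v2.json naming.
for f in /var/lib/docker/containers/*/config*.json; do
    grep -o '"Driver":"[^"]*"' "$f" 2>/dev/null
done
```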
failed to update store for object type *libnetwork.network: json: cannot unmarshal bool into Go value of type string · Issue #20869 · moby/moby
https://github.com/moby/moby/issues/20869
with this solution:
I got the same problem after a system update; after I deleted /var/lib/docker/network/files/* and restarted the docker service, all the private networks were gone and I had to recreate them before restarting the containers
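In concrete terms, that workaround looks roughly like the following. Note this is destructive: it wipes Docker's persisted network state, so every user-defined network has to be recreated afterwards; the backup path is just an example.

```shell
# Stop the daemon before touching its state on disk.
sudo systemctl stop docker

# Keep a copy of the network database files, just in case.
sudo cp -a /var/lib/docker/network/files /root/docker-network-files.bak

# Remove the persisted network state (user-defined networks will be gone).
sudo rm -f /var/lib/docker/network/files/*

sudo systemctl start docker
```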
I could try that option but I will have to reread my notes about creating the Jekyll and Rails networks . .
Update1: I couldn't get this to work - I can recreate a network but still can't start a container - in fact using "docker start xxx" does nothing and I have to CTRL-C out of it . .
Update2: After some messing around and some more thinking I tried again:
This basically confirms what I thought… Can you check /etc/sysconfig/docker-* and see if you need to change anything there from overlay2 to devicemapper?
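On Fedora, the packaged docker units read extra daemon options from /etc/sysconfig/docker*, so a leftover --storage-driver flag there will conflict with daemon.json. A quick check might look like:

```shell
# List any storage-driver settings lurking in the sysconfig files.
grep -n 'storage' /etc/sysconfig/docker* 2>/dev/null
```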
Many thanks for all your help - although I could have done without all the extra time and effort needed to recover from this crashed disk - at least I have learnt a bit more about docker!
Also, FYI, Fedora people were the only ones who responded at all to my request for help on the various docker fora . .
FWIW for future reference, docker export is the way to go if there are images/containers you really need to preserve. If that takes too much space and you're not particularly tied to Docker in particular, switch to podman with docker export container-name | sudo podman import, then you can push everything to an OSTree repo, which uses less disk space due to automatic deduplication and such.
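As a sketch of that migration ("web" is a hypothetical container name, and the target image name is an example): docker export writes the container's filesystem as a tar stream, which podman import turns into a local image.

```shell
# Export a container's filesystem and import it as a podman image.
docker export web | sudo podman import - localhost/web-backup:latest

# Confirm the imported image exists.
sudo podman images
```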
I always thought that exporting all the containers every night was clumsy and tedious . . a one-line rsync backup is much easier and nicer . . but with some limitations, obviously . .
Now that is very interesting and I have been trying to find time to set up a test machine for something like that - in fact I posted a note to the Atomic list in April:
Atomic Host: Rails, Jekyll plain HTML containers; Qmail & EZMLM; Discourse
and actually attempted to change my workstation just before this most recent crash but couldn't install from the LiveUSB for some reason (a few other people had the same problem) but I eventually ran out of time and went with XFCE LiveUSB again . .
I just looked at the intro podman video and it looks interesting, but it is hard to see how any export mechanism is going to be as fast as an rsync incremental backup of changed files - the export tool will always export ALL the data EVERY time, while rsync only copies the parts of individual files that have changed - which will always be much less data and therefore a faster backup - although with the consequent problems of a changed docker version during a restore, as I have already found, of course.
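For reference, the rsync approach described above is roughly this (the target path is an example; stopping the daemon first keeps the copy consistent, since live container state changes under rsync otherwise):

```shell
# Quiesce the daemon so /var/lib/docker is not changing mid-copy.
sudo systemctl stop docker

# Incremental mirror: -a preserves metadata, -AX keeps ACLs/xattrs,
# --delete removes files at the target that no longer exist at the source.
sudo rsync -aAX --delete /var/lib/docker/ /backup/docker/

sudo systemctl start docker
```

The trade-off, as this thread shows, is that the copied state is tied to the exact Docker version and storage driver that wrote it, whereas docker export produces a portable tar that any version (or podman) can import.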
Well of course, but you just run into the problem that Docker wasn't necessarily built with rsyncing containers in mind. It sort of works, but not always (as you already found out…). That being said, you'll probably have better luck in the future if you save `docker config`'s output with each backup.
When I mentioned "switch" to podman, I literally meant "export everything from Docker and start using podman instead"; that way OSTree backups would still take a bit but you'd at least have less space usage vs docker export.
It's probably going to be a bit of a lose-lose scenario regardless, since most container systems were built around the idea of containers being easily disposed of and recreated.