[SOLVED] Disk crash - restoring from backup - no images or containers found

People,

From Fedora 27 to Fedora 29 x86_64:

After a catastrophic disk crash, on a new server, I restored these directories from backup:

/etc/docker
/var/lib/docker

but after starting docker I get no images or containers listed - what am I missing?

Thanks,

Phil.

That should probably work. Does journalctl -u docker -b show anything interesting?

@refi64 ,

Many thanks for responding! - I really appreciate it . .

Ah . . yes:

  • 1252 lines
  • -- Logs begin at Sat 2018-12-15 08:55:06 AEDT, end at Mon 2018-12-17 16:29:18 AEDT. --
  • 32 uniq time stamps
  • 63 occurrences of msg="Cannot load container xxx because it was created with another graph driver."
  • 21 uniq container ids
  • Original docker engine in Fedora 27 was not older than: docker-1.13.1-26.gitb5e3294.fc27
  • Current Fedora 29 docker: docker-1.13.1-62.git9cb56fd.fc29.x86_64
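(Counts like the above fall out of a quick journal pipeline - something along these lines, with the grep pattern inferred from the message text:)

journalctl -u docker -b --no-pager |
  grep -o 'Cannot load container [0-9a-f]*' |
  sort | uniq -c | sort -rn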

Interesting . . does this sound like it is hackable into something working again?

Thanks again,

Phil.

Were you using devicemapper before and now switching to overlay?

To expand on what @dustymabe said:

In Fedora 25 and older, Docker used the devicemapper storage driver. In 26, this was changed to overlay2. If that system was one you had upgraded, it may still have been using devicemapper, while the new install is using overlay2.

You may also have been intentionally using a different driver.

Check your backups for an /etc/docker/daemon.json, and if present, check the storage-driver value and compare it to the output of docker info.
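For example (assuming a stock layout; adjust paths if you relocate things):

grep storage-driver /etc/docker/daemon.json
docker info | grep 'Storage Driver'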

Dan Walsh on the Fedora Cloud list, where this question was also asked, wonders if it is SELinux-related, and suggests:

restorecon -R -v /etc /var/lib/

Ah . . I see.

OK - I do have the backup of /etc/docker but daemon.json only has in it:

{
"debug": true
}

Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: extfs
WARNING: You’re not using the default seccomp profile
Supports d_type: true
Native Overlay Diff: true

Is it possible to change to the older driver with the setup in F29?

Thanks!

Phil.

No, unfortunately - I had/have SELinux disabled in both cases . . I replied to the list pointing the discussion here.

Thanks,

Phil,

Add this to /etc/docker/daemon.json inside the braces:

"storage-driver": "devicemapper"

If it doesn’t exist, create it and make sure you put braces around the storage-driver setting:

{
  "storage-driver": "devicemapper"
}

It did already exist - now:

{
    "debug": true,
    "storage-driver": "devicemapper"
}

but trying to start docker with systemctl I get:

Dec 18 16:33:41 phil.pricom.com.au dockerd-current[11093]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: devicemapper)

The only place I can find mention of “overlay2” is in:

devicemapper/devicemapper/data

so I am not sure where the flag is . .

While I was waiting, I recycled an old 500GB SATA drive and installed F25 from LiveUSB. After installing everything else, I copied my backup docker dirs, started docker happily, and found that I could then list all my images and containers! However, when I try to start any of the containers I get something like:

Error response from daemon: updating the store state of sandbox failed: failed to update store for object type *libnetwork.sbState: json: cannot unmarshal object into Go value of type string
Error: failed to start containers: jekyll_baco

So now I have two machines, F25 and F29, almost working but not quite . . but I seem to be getting closer!

Thanks.

Hmm, just to confirm something a sec first: if you grab a config.json from any container in /var/lib/docker/containers, what storage driver does it use?

@refi64 ,

I always have /var/lib/docker symlinked to /home/docker:

F29:

lm /var/lib/docker/containers
total 92
drwx------ 23 root root 4096 Apr 17  2018 .
drwx--x--x 15 root root 4096 Apr 13  2018 ..
drwx------  4 root root 4096 Nov  5 02:24 13dde83ccdc4654184e27d9e8eb00a06ea56ec77578a5e77d818535b1611d051
drwx------  5 root root 4096 May 11  2018 1b15d00313ccff57c265e923396c6804fd8039b49989a75ad164b49a312c4356
drwx------  5 root root 4096 May 11  2018 1f5fba5fc9503037b80a5ac8bc72693c5a1d48ef163f115baedafb67ff62bd24
drwx------  4 root root 4096 May 11  2018 30945d97b557e222a88888186eb6676f0f312052ea6332753ba494bb2e441561
drwx------  5 root root 4096 May 11  2018 3823471418ebd370fca723f63088427dacd36bf106cf6c3973331ee21c900d03
drwx------  4 root root 4096 May 11  2018 5f699b98ce800009c568247c7ace1705cf5d14db87615d63cd71879df1ebe264
drwx------  4 root root 4096 May 11  2018 6923c514bddac22ffe77b05895099446784f96da96123044ba831c1921e6b16f
drwx------  4 root root 4096 Nov  5 02:24 71990d28510db9501f94d0d89ef45437c3f809a2682f230f706ab9c57775ca6d
drwx------  4 root root 4096 Nov  5 02:24 78bb743f96193119f1fb2621c951cf542476d2fb063e6686ac462f11946865dc
drwx------  5 root root 4096 Apr 13  2018 7d77b7f6342d7138bdd8d3df8e9edd76d3921f160457052b42c63414dcd9a038
drwx------  4 root root 4096 Nov  5 02:24 7ee643f75b7c90468df25da8d14304297c1c5893365c137f1bd52785a6a4432c
drwx------  5 root root 4096 May 11  2018 9706857b919ac7586614459a20f8c7794ed024307cc0842bed7900270aa5404a
drwx------  5 root root 4096 May 11  2018 a961cf1ba878497ec1263ce9fa0b160f6eabf0fa5db8e7518d6751c71462ce09
drwx------  4 root root 4096 May 11  2018 ae38bfc456513f0a4ff54bc173e1135aba71dd8a9db5bc79d8055c1a9484d906
drwx------  4 root root 4096 Nov  5 02:24 cc1a014a2270243c54cbcadb0f6fa77a238246da5685185737718c8d3dd7c447
drwx------  5 root root 4096 May 11  2018 cc5f652dac144771202154523cf9a863fd2590ead67cfe33fa892882b199aa0f
drwx------  5 root root 4096 May 11  2018 e281a08d0a2ecd0121238acc25888e30b13b87853ed1e37a60b539cc14865f6e
drwx------  4 root root 4096 Nov  5 02:24 e76f5eacf1b3452a0355df0b98f79271af1387394a0ba21a485fa5832a102948
drwx------  4 root root 4096 May 11  2018 e8d0f30eb6d3fa29a5ec273e6a3ae256dcea349636308e7ac69d7dd4296cde3b
drwx------  4 root root 4096 May 11  2018 e99e13d3ce0b291c0d6af0315837be1d1db36e93f11902154aa8ebb40e506c21
drwx------  4 root root 4096 May 11  2018 ebdf5229ba8881408416a911d259790195b1f308c1f8ea03042ace642131042b

F25:

lm /var/lib/docker/containers/
total 92
    drwx------ 23 root root 4096 Apr 17  2018 .
    drwx--x--x 14 root root 4096 Apr 13  2018 ..
    drwx------  4 root root 4096 Dec 18 15:20 13dde83ccdc4654184e27d9e8eb00a06ea56ec77578a5e77d818535b1611d051
    drwx------  5 root root 4096 Dec 18 15:20 1b15d00313ccff57c265e923396c6804fd8039b49989a75ad164b49a312c4356
    drwx------  5 root root 4096 Dec 18 15:20 1f5fba5fc9503037b80a5ac8bc72693c5a1d48ef163f115baedafb67ff62bd24
    drwx------  4 root root 4096 Dec 18 15:20 30945d97b557e222a88888186eb6676f0f312052ea6332753ba494bb2e441561
    drwx------  5 root root 4096 Dec 18 15:20 3823471418ebd370fca723f63088427dacd36bf106cf6c3973331ee21c900d03
    drwx------  4 root root 4096 Dec 18 15:20 5f699b98ce800009c568247c7ace1705cf5d14db87615d63cd71879df1ebe264
    drwx------  4 root root 4096 Dec 18 15:20 6923c514bddac22ffe77b05895099446784f96da96123044ba831c1921e6b16f
    drwx------  4 root root 4096 Dec 18 15:24 71990d28510db9501f94d0d89ef45437c3f809a2682f230f706ab9c57775ca6d
    drwx------  4 root root 4096 Dec 18 15:24 78bb743f96193119f1fb2621c951cf542476d2fb063e6686ac462f11946865dc
    drwx------  5 root root 4096 Apr 13  2018 7d77b7f6342d7138bdd8d3df8e9edd76d3921f160457052b42c63414dcd9a038
    drwx------  4 root root 4096 Dec 18 15:24 7ee643f75b7c90468df25da8d14304297c1c5893365c137f1bd52785a6a4432c
    drwx------  5 root root 4096 Dec 18 15:20 9706857b919ac7586614459a20f8c7794ed024307cc0842bed7900270aa5404a
    drwx------  5 root root 4096 Dec 18 15:20 a961cf1ba878497ec1263ce9fa0b160f6eabf0fa5db8e7518d6751c71462ce09
    drwx------  4 root root 4096 Dec 18 15:24 ae38bfc456513f0a4ff54bc173e1135aba71dd8a9db5bc79d8055c1a9484d906
    drwx------  4 root root 4096 Nov  5 02:24 cc1a014a2270243c54cbcadb0f6fa77a238246da5685185737718c8d3dd7c447
    drwx------  5 root root 4096 Dec 18 15:20 cc5f652dac144771202154523cf9a863fd2590ead67cfe33fa892882b199aa0f
    drwx------  5 root root 4096 Dec 18 15:20 e281a08d0a2ecd0121238acc25888e30b13b87853ed1e37a60b539cc14865f6e
    drwx------  4 root root 4096 Dec 18 15:24 e76f5eacf1b3452a0355df0b98f79271af1387394a0ba21a485fa5832a102948
    drwx------  4 root root 4096 Dec 18 16:27 e8d0f30eb6d3fa29a5ec273e6a3ae256dcea349636308e7ac69d7dd4296cde3b
    drwx------  4 root root 4096 Dec 18 15:24 e99e13d3ce0b291c0d6af0315837be1d1db36e93f11902154aa8ebb40e506c21
    drwx------  4 root root 4096 Dec 18 15:24 ebdf5229ba8881408416a911d259790195b1f308c1f8ea03042ace642131042b

Can’t find any “config.json” file in any of those . .

Update: I found:

/var/lib/docker/containers/*/config.v2.json

files, which contain:

"Driver": "devicemapper",

My F25 problem looks related to this:

failed to update store for object type *libnetwork.network: json: cannot unmarshal bool into Go value of type string · Issue #20869 · moby/moby: https://github.com/moby/moby/issues/20869

with this solution:

I got the same problem after a system update. After I deleted /var/lib/docker/network/files/* and restarted the docker service, all the private networks were gone and I had to recreate all of them prior to restarting the containers.
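In shell terms that suggestion amounts to roughly the following - destructive, since it throws away all user-defined network state (the network name at the end is just a placeholder):

systemctl stop docker
rm /var/lib/docker/network/files/*
systemctl start docker
docker network create mynet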

I could try that option but I will have to reread my notes about creating the Jekyll and Rails networks . .

Update1: I couldn’t get this to work - I can recreate a network but still can’t start a container - in fact using “docker start xxx” does nothing and I have to CTRL-c out of it . .

Update2: After some messing around and some more thinking I tried again:

  • rm /home/docker/network/files/*
  • docker network create --subnet=172.18.0.0/16 railsnet

manually created the socket using the newly created id:

  • python -c "import socket as s; sock = s.socket(s.AF_UNIX); sock.bind('./d7a23d5114bc6979b8608d73e18a1a9a7547db18d40b85e472f9a871c6eaa4a0')"

and now I can start a Rails container! I will try something similar for the Jekyll containers.

Although this should get me operational again, I would still like to know how to do a simple rsync restore to a later version of Fedora / docker . .

This basically confirms what I thought… Can you check /etc/sysconfig/docker-* and see if you need to change anything there from overlay2 to devicemapper?
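If it's not obvious which file injects the flag, something like this should show where the unit pulls its environment from (just a sketch):

systemctl cat docker | grep -iE 'EnvironmentFile|storage'
grep -rs 'storage-driver' /etc/sysconfig/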

YES!

Changing:

/etc/sysconfig/docker-storage
/etc/sysconfig/docker-storage-setup

now has my F29 machine running too!
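For anyone hitting the same wall: on Fedora's docker packaging those files feed extra flags to the daemon through the systemd unit, so a leftover overlay2 entry there silently overrides daemon.json. The relevant lines end up looking roughly like this (exact option strings will vary per setup):

# /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper"

# /etc/sysconfig/docker-storage-setup
STORAGE_DRIVER=devicemapper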

Many thanks for all your help - although I could have done without all the extra time and effort needed to recover from this crashed disk - at least I have learnt a bit more about docker!

Also, FYI, Fedora people were the only ones who responded at all to my request for help on the various docker fora . .

FWIW for future reference, docker export is the way to go if there are images/containers you really need to preserve. Or, if that takes too much space and you're not particularly tied to Docker in particular, switch to podman with docker export container-name | sudo podman import, then push everything to an OSTree repo, which uses less disk space due to automatic deduplication and such.
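For example, using a container name from earlier in the thread (the image tag is arbitrary):

docker export jekyll_baco | sudo podman import - jekyll_baco:restored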

@refi64 ,

I always thought that exporting all the containers every night was clumsy and tedious . . a one-line rsync backup is much easier and nicer . . but with some limitations, obviously . .

Now that is very interesting and I have been trying to find time to set up a test machine for something like that - in fact I posted a note to the Atomic list in April:

Atomic Host: Rails, Jekyll plain HTML containers; Qmail & EZMLM; Discourse

and I actually attempted to change my workstation over just before this most recent crash, but couldn't install from the LiveUSB for some reason (a few other people had the same problem). I eventually ran out of time and went with the XFCE LiveUSB again . .

Thanks again!

You shouldn’t even need an atomic host for that. podman and OSTree can be installed on regular Fedora.
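For instance, on stock Fedora:

sudo dnf install podman ostree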

@refi64

I just looked at the intro podman video and it looks interesting, but it is hard to see how any export mechanism is going to be as fast as an rsync incremental backup of changed files. The export tool will always export ALL the data EVERY time, whereas rsync only copies the parts of individual files that have changed - which will always be much less data, and therefore a faster backup - although with the restore problems across docker versions that I have already found, of course.

Well of course, but you run into the problem that Docker wasn't necessarily built with rsyncing containers in mind. It sort of works, but not always (as you already found out…). That being said, you'll probably have better luck in the future if you save the output of docker info with each backup.
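A minimal nightly sketch along those lines - the paths and the stop/start around the rsync are assumptions, not a tested recipe:

#!/bin/sh
# record the daemon's view of its config while it is still running
docker info > /backup/docker-info.txt
cp /etc/docker/daemon.json /etc/sysconfig/docker-storage* /backup/
# quiesce the daemon so /var/lib/docker is consistent, then sync it
systemctl stop docker
rsync -aH --delete /var/lib/docker/ /backup/docker/
systemctl start docker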

When I mentioned “switch” to podman, I literally meant “export everything from Docker and start using podman instead”, that way OSTree backups would still take a bit but you’d at least have less space usage vs docker export.

It’s probably going to be a bit of a lose-lose scenario regardless, since most container systems were built around the idea of containers being easily disposed and recreated.