Zincati Airlock integration

Hello there,

I am currently working with the Fedora CoreOS fleet_lock update strategy, using Zincati and the airlock mechanism here: GitHub - coreos/airlock: Minimal update/reboot orchestrator for Fedora CoreOS clusters

But after configuration, and with the service up and running, I am currently getting this error:

[ERROR] lock-manager steady-state failure: client-side error: relative URL with a cannot-be-a-base base.

How do I resolve this, and what does it mean? I would also like a walkthrough on using airlock and the exporter for collecting Zincati metrics. Thanks.

@lucab

It sounds as though you may have configured a relative base URL for the lock manager in Zincati. Could you post your Zincati config?

Here:

[updates]
strategy = "fleet_lock"
[updates.fleet_lock]
base_url = "private ip:3333"

The airlock server is on the same network as the server running Zincati, so they can connect.

The documentation for this feature is at Updates strategy - coreos/zincati.

As stated in the docs (and implied by the name), you have to pass a non-empty URL in base_url, which is used as the base endpoint to reach the remote lock manager.
A bare "address:port" is not a proper URL, as it is missing the protocol scheme. The value should start with https:// or http://, as shown in the example in the docs.
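For example, here is a minimal sketch of a corrected Zincati config, assuming airlock listens on port 3333 at a placeholder private address of 10.0.0.5 (substitute your own):

[updates]
strategy = "fleet_lock"

[updates.fleet_lock]
# the http:// (or https://) scheme is required; 10.0.0.5 is a placeholder for the airlock host
base_url = "http://10.0.0.5:3333"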

Thank you!

Currently getting this from the airlock logs:

time="2021-08-30T12:03:40Z" level=debug msg="etcd3 configuration" endpoints="[http://127.0.0.1:2379]"
time="2021-08-30T12:03:40Z" level=debug msg="lock groups" groups="map[controllers:1 default:2 workers:2]"
time="2021-08-30T12:03:40Z" level=info msg="status service" address=127.0.0.1 port=2222
time="2021-08-30T12:03:40Z" level=info msg="main service" address=127.0.0.1 port=3333
time="2021-08-30T12:03:43Z" level=warning msg="consistency check, manager creation failed" reason="context deadline exceeded"
time="2021-08-30T12:03:46Z" level=warning msg="consistency check, manager creation failed" reason="context deadline exceeded"
time="2021-08-30T12:03:49Z" level=warning msg="consistency check, manager creation failed" reason="context deadline exceeded"
time="2021-08-30T12:04:52Z" level=warning msg="consistency check, manager creation failed" reason="context deadline exceeded"
time="2021-08-30T12:04:55Z" level=warning msg="consistency check, manager creation failed" reason="context deadline exceeded"
time="2021-08-30T12:04:58Z" level=warning msg="consistency check, manager creation failed" reason="context deadline exceeded"

msg="consistency check, manager creation failed" reason="context deadline exceeded"

The service (lock manager) is not working properly, as it is unable to talk to the etcd3 cluster.
Its client connections stall and eventually time out (context deadline exceeded) after a few seconds.
This likely means you have a misconfiguration in your infrastructure, possibly because etcd3 is not reachable at http://127.0.0.1:2379 in the environment where airlock is running.
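To narrow this down, you could verify from the environment where airlock runs that the etcd3 endpoint is actually reachable, for example with etcdctl (assuming it is installed there; the endpoint below is the one from your logs):

# health-check the etcd3 endpoint that airlock is configured to use
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 endpoint health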

etcd3 is on http://127.0.0.1:2379. Also, must etcd3 run in the same environment? Can I host etcd3 in a different environment and point to its URL in config.toml?

Plus, isn't an etcd3 client bundled with airlock?

Yes, airlock is an etcd3 client, and the etcd3 cluster can be hosted in whichever environment you prefer. As long as it can be reached over the network, you only need to configure the endpoints value accordingly.

In this case, though, the etcd3 cluster cannot be reached at http://127.0.0.1:2379, so you likely have a misconfiguration somewhere in your infrastructure.
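If you do host etcd3 in a different environment, the relevant piece of airlock's config.toml is the etcd3 endpoints list, roughly as sketched below (key names here follow the example config in the airlock repo, so double-check them against your version; the hostname is a placeholder):

[etcd3]
# placeholder endpoint; point this at wherever your etcd3 cluster is reachable from airlock
endpoints = [ "http://etcd.internal.example:2379" ]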

Hi @lucab, thank you for your response.

Here is the log showing that etcd runs locally:

peer-urls":[“http://localhost:2380”],“advertise-client-urls”:[“http://localhost:2379”],“listen-client-urls”:[“http://localhost:2379”],“listen-metrics-urls”:}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.257+0200”,“caller”:“embed/etcd.go:552”,“msg”:“cmux::serve”,“address”:“127.0.0.1:2380”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.295+0200”,“logger”:“raft”,“caller”:“raft/raft.go:779”,“msg”:“8e9e05c52164694d is starting a new election at term 5”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.295+0200”,“logger”:“raft”,“caller”:“raft/raft.go:721”,“msg”:“8e9e05c52164694d became pre-candidate at term 5”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.295+0200”,“logger”:“raft”,“caller”:“raft/raft.go:839”,“msg”:“8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 5”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.295+0200”,“logger”:“raft”,“caller”:“raft/raft.go:705”,“msg”:“8e9e05c52164694d became candidate at term 6”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.295+0200”,“logger”:“raft”,“caller”:“raft/raft.go:839”,“msg”:“8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 6”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.295+0200”,“logger”:“raft”,“caller”:“raft/raft.go:757”,“msg”:“8e9e05c52164694d became leader at term 6”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.295+0200”,“logger”:“raft”,“caller”:“raft/node.go:332”,“msg”:“raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 6”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.296+0200”,“caller”:“etcdserver/server.go:1753”,“msg”:“published local member to cluster through raft”,“local-member-id”:“8e9e05c52164694d”,“local-member-attributes”:“{Name:default ClientURLs:[http://localhost:2379]}”,“request-path”:“/0/members/8e9e05c52164694d/attributes”,“cluster-id”:“cdf818194e3a8c32”,“publish-timeout”:“7s”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.296+0200”,“caller”:“embed/serve.go:98”,“msg”:“ready to serve client requests”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.296+0200”,“caller”:“etcdmain/main.go:47”,“msg”:“notifying init daemon”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.296+0200”,“caller”:“etcdmain/main.go:53”,“msg”:“successfully notified init daemon”}
{“level”:“info”,“ts”:“2021-09-01T12:46:47.298+0200”,“caller”:“embed/serve.go:140”,“msg”:“serving client traffic insecurely; this is strongly discouraged!”,“address”:“127.0.0.1:2379”}

During troubleshooting, I exec'd into the container and noticed that the airlock container only uses its default configuration, meaning it ignores my custom configuration in its config.toml.

I used this command to run the container:

docker run -p 3333:3333/tcp -v "/etc/airlock/config.toml:/etc/airlock/config.toml" quay.io/coreos/airlock:main airlock serve -vv
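For reference, one way to confirm that the bind mount is actually taking effect is to read the file from inside the running container (the container name below is a placeholder):

# print the config file that the airlock process sees inside the container
docker exec -it <airlock-container> cat /etc/airlock/config.toml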

What I have tried:

  1. Ran etcd3 locally and exposed it to airlock via the config.toml file
  2. Ran etcd3 on a standalone server and exposed it via config.toml

What do you think the issue is likely to be?