Centralized logging on FCOS?

Currently, there is no centralized logging package installed (e.g., rsyslog, syslog-ng). What do you use for this? I think this will not be possible from a toolbox, and I don’t want to add a new layer using rpm-ostree.

My use case is host security logging (sshd, sudo calls, custom firewall logging).

There is journald. I recommend fluentbit, personally.

Here’s the journald/systemd input for it, which will ship your FCOS logs to the central logging system(s) of your choice:
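
A minimal sketch of that input, using the classic INI-style config (the stdout output is just for testing; swap it for whatever backend you actually ship to):

# Read host journald entries starting from the tail and print them
[SERVICE]
    Flush  5

[INPUT]
    Name            systemd
    Tag             host.*
    Read_From_Tail  On

[OUTPUT]
    Name   stdout
    Match  host.*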

This will also work for toolbox and other podman containers since logs end up in journald as well.

Run sudo dnf install fluent-bit to install it.

I want to run it as a daemon, without an allocated TTY, more like a systemd service. Ah yes, fluent-bit; I’ve tried it earlier, but it’s unable to output the sshd logs, it only outputs user journald logs. I am still trying to figure out the config for that.

I think it should be possible to run any log-forwarding daemon as a privileged container.

Yes, that’s what I’m doing, but it seems there is a problem with my config for fluent-bit. It’s running in --privileged mode, but I only get user journald logs.

podman run --rm -it --privileged \
  -v /etc/machine-id:/etc/machine-id:ro \
  -v /run/log/journal:/run/log/journal:ro \
  -v /var/log/journal:/var/log/journal:ro \
  docker.io/fluent/fluent-bit:latest \
  fluent-bit -i systemd \
    -p read_from_tail=on \
    -p systemd_filter="_SYSTEMD_UNIT=sshd.service" \
    -p tag=host.* \
    -o stdout

Found the solution: as journald entries get ingested into fluent-bit, the _SYSTEMD_UNIT field shows up under different session names, so the best approach is to debug it from a busybox or any container that has shell access. For sshd I filtered on _COMM=sshd instead, as in the adjusted command below.
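
That is, the same invocation as above, but filtering on the process name instead of the unit (a sketch; adjust the output for your backend):

podman run --rm -it --privileged \
  -v /etc/machine-id:/etc/machine-id:ro \
  -v /run/log/journal:/run/log/journal:ro \
  -v /var/log/journal:/var/log/journal:ro \
  docker.io/fluent/fluent-bit:latest \
  fluent-bit -i systemd \
    -p read_from_tail=on \
    -p systemd_filter=_COMM=sshd \
    -p tag=host.* \
    -o stdout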

I take this to mean you figured out the problem?


Fluent Bit absolutely can be run as a service. I test it by running the command and, when the output looks right, adding it to the /etc/fluentbit config. The Fluent Bit docs give examples of both the CLI and the conf-file syntax. If you really want to use a container instead of the natively packaged fluent-bit in Fedora (via rpm-ostree install fluent-bit), then you can use podman generate systemd to generate a systemd service for it as a podman container, as sketched below.
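
A rough sketch of that container-based flow (container name and unit path are illustrative, and you would normally point fluent-bit at a real config instead of plain stdout):

# Create the container once so podman knows about it
sudo podman create --name fluent-bit --privileged \
  -v /etc/machine-id:/etc/machine-id:ro \
  -v /var/log/journal:/var/log/journal:ro \
  docker.io/fluent/fluent-bit:latest \
  fluent-bit -i systemd -p read_from_tail=on -o stdout

# Generate a unit that recreates the container on start (--new), then enable it
sudo podman generate systemd --new --name fluent-bit \
  | sudo tee /etc/systemd/system/container-fluent-bit.service
sudo systemctl daemon-reload
sudo systemctl enable --now container-fluent-bit.service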

I have used a Quadlet instead of podman generate systemd, as the latter is, I think, already deprecated. Instead of going with fluent-bit I went with Vector (timberio, now Datadog) as the log collector, since it also collects metrics. A rough Quadlet sketch is below.
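
Something along these lines (an illustrative sketch only; the image tag, mounts and config path are assumptions, and Vector still needs its own vector.yaml with a journald source configured):

# /etc/containers/systemd/vector.container
[Unit]
Description=Vector log and metrics collector
After=network-online.target

[Container]
Image=docker.io/timberio/vector:latest-alpine
ContainerName=vector
AutoUpdate=registry
# Privileged plus journal mounts so the container can read the host journal
PodmanArgs=--privileged
Volume=/etc/machine-id:/etc/machine-id:ro
Volume=/var/log/journal:/var/log/journal:ro
Volume=/etc/vector/vector.yaml:/etc/vector/vector.yaml:ro

[Install]
WantedBy=multi-user.target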

TIL:


My 2 cents: I’m using a containerized Promtail service to collect logs from journald and send them to a Loki instance; then I can query the logs with Grafana.


This is the solution I have too. Works well.

How are you getting your host journald logs to the container?

This is my Butane file for Promtail:

variant: fcos
version: 1.5.0

systemd:
  units:
    - name: promtail.service
      enabled: true
      contents: |
        [Unit]
        Description=Podman promtail.service
        Documentation=man:podman-generate-systemd(1)
        Wants=network.target
        After=network-online.target
        RequiresMountsFor=/run/mount/configs

        [Service]
        Environment=PODMAN_SYSTEMD_UNIT=%n
        Restart=on-failure
        Type=forking
        KillMode=none
        ExecStart=/usr/bin/podman run \
          --label="io.containers.autoupdate=registry" \
          --detach \
          --log-driver=journald \
          --replace \
          --name=%N \
          --privileged \
          -p 9080:9080 \
          -v /etc/machine-id:/etc/machine-id:ro \
          -v /run/mount/configs/loki/promtail.yaml:/mnt/config/promtail.yaml:ro \
          -v /run/log/journal:/run/log/journal:ro \
          -v promtail-data:/tmp \
          docker.io/grafana/promtail:latest \
          -config.file=/mnt/config/promtail.yaml
        ExecStop=/usr/bin/podman stop %N

        [Install]
        WantedBy=multi-user.target default.target

You will need to provide a config file, e.g. promtail.yaml, like this:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://your_loki_instance:3100/loki/api/v1/push

scrape_configs:
  - job_name: journal
    journal:
      json: false
      max_age: 12h
      path: /run/log/journal
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
      - source_labels:
        - __journal__hostname
        target_label: hostname
      - source_labels:
        - __journal_syslog_identifier
        target_label: syslog_identifier

:wink: