systemd-oomd fails to start due to cgroup v1

systemd-oomd is failing to start due to “ConditionControlGroupController=v2 was not met”.
Does anyone know how I can get more information to track down the problem? I don’t see any useful information in the journal.

○ systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer
     Loaded: loaded (/usr/lib/systemd/system/systemd-oomd.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: inactive (dead)
TriggeredBy: × systemd-oomd.socket
  Condition: start condition failed at Wed 2023-04-05 23:01:43 EDT; 2min 29s ago
             └─ ConditionControlGroupController=v2 was not met
       Docs: man:systemd-oomd.service(8)

systemd-oomd uses cgroups v2 (see man systemd-oomd).

In the service file there is a condition, ConditionControlGroupController=v2 (see man systemd.unit). If your system is not running a pure cgroup v2 hierarchy, the unit won’t start.
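A quick way to see which hierarchy your system is actually using is to check the filesystem type mounted at /sys/fs/cgroup (this is a general check, not something specific to the oomd unit):

```shell
# cgroup2fs => pure cgroup v2 (unified hierarchy, condition satisfied)
# tmpfs     => cgroup v1 or hybrid mode (condition NOT satisfied)
stat -fc %T /sys/fs/cgroup/
```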

Fedora has used cgroup v2 by default since F31. Did you switch it to v1 at some point (perhaps for Docker)?


Using “podman info” I can see that it is v1:

  cgroupManager: systemd
  cgroupVersion: v1

How do I set this to v2? I don’t have Docker installed now (I may have installed it at one stage).

It looks like the system is using both v1 and v2 …

# grep ^cgroup /etc/mtab
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/misc cgroup rw,nosuid,nodev,noexec,relatime,misc 0 0
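The mix above is the “hybrid” layout: a cgroup2 mount at /sys/fs/cgroup/unified alongside the v1 controller mounts. That still does not satisfy ConditionControlGroupController=v2, which requires the unified v2 hierarchy exclusively. You can filter the mount table for just the v2 entries like this:

```shell
# Show only cgroup2 entries from the kernel's mount table.
# Pure v2: a single mount at /sys/fs/cgroup.
# Hybrid:  a mount at /sys/fs/cgroup/unified next to the v1 controllers.
grep ' cgroup2 ' /proc/self/mounts
```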

This should revert to using cgroup v2:

sudo grubby --update-kernel=ALL \
    --remove-args=systemd.unified_cgroup_hierarchy=0

If the issue persists after rebooting, check the output of:

cat /proc/cmdline /etc/default/grub
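Assuming the leftover argument is systemd.unified_cgroup_hierarchy=0 (the flag grubby removes above), this will show whether the running kernel was still booted with it:

```shell
# Prints the flag if it is present on the running kernel's command line,
# otherwise reports that it is not set.
grep -o 'systemd\.unified_cgroup_hierarchy=[01]' /proc/cmdline \
    || echo "unified_cgroup_hierarchy flag not set"
```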

Yes, that must have been left over from the old docker installation.
I’ve removed it with grub-customizer and it’s now on v2.
Thanks!

● systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer
     Loaded: loaded (/usr/lib/systemd/system/systemd-oomd.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: active (running) since Thu 2023-04-06 09:23:14 EDT; 3min 14s ago
TriggeredBy: ● systemd-oomd.socket
       Docs: man:systemd-oomd.service(8)
   Main PID: 1249 (systemd-oomd)
     Status: "Processing requests..."
      Tasks: 1 (limit: 19020)
     Memory: 1.6M (min: 64.0M low: 64.0M)
        CPU: 532ms
     CGroup: /system.slice/systemd-oomd.service
             └─1249 /usr/lib/systemd/systemd-oomd