Why is sssd-kcm ever installed by default?

I dunno. Whatever works for you. Bash scripting is, obviously, the most flexible of the existing tools.

I didn’t find the Kickstart documentation good enough to even learn all of its capabilities. And you have to do everything a certain way, blah blah. And I can’t even run it from a running system?

After reading this thread I noticed it was running (active) on my system (Workstation F40); I’ve since uninstalled it.

(it had indeed used CPU cycles, although not many)

It is installed on my Sway workstation, but I don’t see it listed as running with systemctl list-units --type=service --no-legend | grep kcm.

Edit: systemctl status sssd-kcm.service shows that it did run briefly when I last signed in.
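That would be socket activation: sssd-kcm.service only starts when something connects to the KCM socket, so it won’t show up in a list of running services until then. The socket itself is listening the whole time, which you can see with:

systemctl list-sockets --all | grep kcm
systemctl status sssd-kcm.socket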

It had been running since I resumed from suspend at 16:30 (it’s now 21:30) and had used about 2.5 s of CPU time. It was using about 5 MB of RAM, if I recall correctly.

There! I’d think the same thing. But it runs, and I never use it. And as you can see from the Bugzilla report you linked, the stated reason the socket can’t be shipped off is that the package provides a krb5.conf.d snippet along with the sssd-kcm.socket file. That is a very weak justification for having something on by default that consumes CPU and leaves a socket open.
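If you don’t use Kerberos at all (sssd-kcm is just a KCM-style Kerberos credential cache server), it should be safe to stop the socket activation, or drop the package entirely. Something like:

# see what is enabled and listening
systemctl status sssd-kcm.socket sssd-kcm.service

# stop it and keep the socket from activating it again
sudo systemctl disable --now sssd-kcm.socket sssd-kcm.service

# or just remove the package, as others in this thread did
sudo dnf remove sssd-kcm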

This is so weird. Why would there be divergent behavior? And yes, it’s not a lot of CPU cycles, but it’s more than there would be if it were off. For all I know, it might even spin an HDD in standby mode back up for nothing.

@soconfused,

If you would be kind enough to outline the workflow you use I would appreciate it.

A while back I used to automate the install of LFS purely with bash scripting and it was a blast. Recreating that workflow now would not be something I would want to take the time to do today. Feel free to ignore this reply.

There’s no workflow; I don’t do this for work, it’s for private use at the moment. If nothing has made you take the time to do custom install scripting today, then there’s no need for you to go that way.

@soconfused,

Thanks for taking the time to reply. You sure live up to your handle with another non-answer. :-)

I’m just so confused, how did I not answer?

Kickstart can be unnecessary if you are building containers on an existing Linux system. I do this sort of thing semi-regularly. FWIW, here is a redacted script (mkcon) that I use on one of my servers to spin up containers for occasional projects. It is highly specialized to my environment, so I doubt you will get much from it, but here it is anyway. :)

#!/usr/bin/bash
# vim:set ts=3 sw=3:

set -e

NAME="$1"
POOL="zfs1"
ROOT="/var/lib/machines/$NAME"
CONF="/san/$(hostname -s).nspawn/$NAME.nspawn"
# most recent snapshot of the vm-99 "golden" dataset (e.g. vm-99@20240501)
SNAP=$(zfs list -H -o name -t snapshot "$POOL/vm-99" | tail -n 1 | cut -d '/' -f 2)

if ! [[ $NAME =~ ^[a-z][a-z0-9.-]+[a-z0-9]$ ]]; then
	echo -e "\e[0;31merror. bad container name.\e[0m"
	exit 1
fi

if [[ -e $ROOT ]] || \
	ping -q -c 1 "$NAME.example.edu" &> /dev/null
then
	echo -e "\e[0;31merror. name exists.\e[0m"
	exit 1
fi

cat <<- END > "$CONF"
	[Exec]
	PrivateUsers=false

	[Files]
	BindReadOnly=/var/lib/sss/pipes
	BindReadOnly=/var/lib/sss/mc

	[Network]
	MACVLAN=bond1
END

# drop a symlink where systemd-nspawn actually looks for .nspawn files
ln -sft "/etc/systemd/nspawn" "$CONF"

# clone the golden image: replicate the snapshot into a fresh dataset mounted at $ROOT
zfs send "$POOL/$SNAP" | zfs receive -v -o mountpoint="$ROOT" -o context="system_u:object_r:container_file_t:s0" "$POOL/san/$NAME"
systemd-firstboot --force --root="$ROOT" --hostname="$NAME.example.edu" --setup-machine-id

# throw away the cloned SSH host keys and generate fresh ones for this container
rm -f "$ROOT"/etc/ssh/ssh_host*
chroot "$ROOT" ssh-keygen -A

mkdir -p "$ROOT/root/.ssh"
printf 'ssh-rsa XXXX admin@example.edu\n' > "$ROOT/root/.ssh/authorized_keys"

cat <<- END > "$ROOT/etc/systemd/network/50-csnet.network"
	[Match]
	Name=mv-bond1

	[Network]
	LinkLocalAddressing=ipv6
END

# only the numbered vm-NN containers get the static addressing below
[[ $NAME =~ ^vm-[0-9]{2}$ ]] || exit 0

INDX="${NAME#vm-}"

cat <<- END >> "$ROOT/etc/systemd/network/50-csnet.network"
	Address=fdXX:XXXX:XXXX:1:766d::$((10#$INDX))/64
	Address=XXX.XXX.XXX.$((10#$INDX+100))/24
	Gateway=XXX.XXX.XXX.XXX
	DNS=XXX.XXX.XXX.XXX
	DNS=XXX.XXX.XXX.XXX
END

read -r -n 1 -p "Start $NAME [y/n]?: " ANSWER; echo
if [[ $ANSWER == y ]]; then
	machinectl start "$NAME"
fi

read -r -n 1 -p "Enable mkhomedir [y/n]?: " ANSWER; echo
if [[ $ANSWER == y ]]; then
	systemctl -M "$NAME" enable oddjobd.service
	chroot "$ROOT" /usr/bin/authselect enable-feature with-mkhomedir
fi
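
Usage is just mkcon <name>. Only names matching vm-NN (e.g. mkcon vm-07) get the static addressing at the end; anything else stops after the link-local network config.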

There is also some extra systemd-networkd configuration on the host system so the networking will work in those containers.
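Roughly, it’s the standard macvlan wrinkle: macvlan children cannot talk to their parent interface directly, so the host gets its own macvlan on bond1 in order to reach the containers. A minimal sketch (names illustrative, not my exact files):

# /etc/systemd/network/mv-host.netdev
[NetDev]
Name=mv-host
Kind=macvlan

[MACVLAN]
Mode=bridge

# /etc/systemd/network/50-bond1.network
[Match]
Name=bond1

[Network]
MACVLAN=mv-host

# /etc/systemd/network/50-mv-host.network
[Match]
Name=mv-host

[Network]
LinkLocalAddressing=ipv6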

P.S. The part that binds the sssd pipes from the host into the container is a similar concept to what sssd-kcm does. It lets users known to the sssd service running on the host system sign in to the containers (e.g. if the containers are running sshd).
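For that to work, the container image also needs the sssd client pieces (nss_sss and pam_sss) so those bind-mounted pipes are actually consulted. In the base image that amounts to something like this (from memory, so treat it as a sketch):

dnf --installroot=/var/lib/machines/vm-99 install sssd-client
chroot /var/lib/machines/vm-99 authselect select sssd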

The base system that the script clones and tweaks was created with dnf install --installroot=/some-mount-point --releasever=<N> <some-package-list>.
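Spelled out, that bootstrap looks roughly like this (release version and package list are illustrative, not exactly what I used):

sudo dnf install --installroot=/var/lib/machines/vm-99 --releasever=40 \
	--setopt=install_weak_deps=False \
	fedora-release systemd passwd dnf iproute openssh-server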
