Updating an FCOS node’s configuration

So… what’s the “right” way to change the configuration of a node? I ask because I’ve seen some posts suggesting the answer is to reprovision it. Here’s my use case for context:

  • I have a 3-node K3s cluster
  • For the initial setup I did not configure RAID, but the intention was to have everything on a mirrored pair of disks (the only two disks in each box)
  • I also didn’t set up one of the network interfaces in the original Ignition config, but it is now needed

My question is this: what’s the recommended way to get from where I am to the desired end state, i.e. the additional network interface configured and everything on a RAID mirror?

I think for RAID it depends on the initial disk configuration, i.e. whether /var is a separate partition, etc. What is the current layout of your disks?

I think some of this is “it depends” and you’ve provided two great examples that we can use to illustrate that.

For the network config you can easily just change the configuration on the nodes in place (make sure to make it persistent so it sticks around after a reboot), but you’ll also want to update your Ignition config so that if you ever lose your cluster (for whatever reason) you’ll be able to spin up the exact same config again.
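For example, here’s roughly what that could look like for a hypothetical second interface (the name eno2 and the addresses are placeholders, not from your setup). Done in place, nmcli writes a persistent profile under /etc/NetworkManager/system-connections/ on its own:

$ sudo nmcli con add type ethernet ifname eno2 con-name eno2 \
    ipv4.method manual ipv4.addresses 192.168.10.5/24

And a sketch of the equivalent NetworkManager keyfile dropped in via Butane, so a reinstall reproduces the same setup:

variant: fcos
version: 1.5.0
storage:
  files:
    - path: /etc/NetworkManager/system-connections/eno2.nmconnection
      mode: 0600  # NetworkManager requires keyfiles to be readable by root only
      contents:
        inline: |
          [connection]
          id=eno2
          type=ethernet
          interface-name=eno2

          [ipv4]
          method=manual
          address1=192.168.10.5/24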

The RAID config, on the other hand, is harder: it’s pretty difficult to migrate an existing OS install onto RAID. That one would clearly, to me, be better handled by reprovisioning with the correct configuration.
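When you do reprovision, Butane has first-class support for this via boot_device.mirror. A minimal sketch, assuming two NVMe disks (the device paths are placeholders):

variant: fcos
version: 1.5.0
boot_device:
  mirror:
    devices:
      # placeholders: point these at your actual two disks
      - /dev/nvme0n1
      - /dev/nvme1n1

That alone mirrors the boot and root partitions across both disks; any extra partitions, like a separate /var, need explicit raid entries on top of it.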


@hricky Here is what I have in my Butane config for partitions and such:

storage:
  disks:
    - device: /dev/disk/by-id/coreos-boot-disk
      wipe_table: false
      partitions:
      - number: 4
        label: root
        size_mib: 8192
        resize: true
      - label: var  # not specifying "number", so this will go after the root partition
        size_mib: 0 # means "use the rest of the space on the disk"
  filesystems:
    - path: /var
      device: /dev/disk/by-partlabel/var
      format: xfs
      wipe_filesystem: false # preserve /var on reinstall (this is the default, but be explicit)
      with_mount_unit: true  # mount this filesystem in the real root

That seems to have resulted in this:

$ sudo fdisk -x /dev/nvme0n1
Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: SAMSUNG MZ1L2960HCJR-00A07
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: BD34AE0F-4F97-4974-8BE5-F1706A8CD843
First usable LBA: 2048
Last usable LBA: 1875382960
Alternative LBA: 1875385007
Partition entries starting LBA: 2
Allocated partition entries: 128
Partition entries ending LBA: 33

Device            Start        End    Sectors Type-UUID                            UUID                                 Name       Attrs
/dev/nvme0n1p1     2048       4095       2048 21686148-6449-6E6F-744E-656564454649 5778BFFA-E9B8-4EC0-B682-AB23A71A0DFB BIOS-BOOT
/dev/nvme0n1p2     4096     264191     260096 C12A7328-F81F-11D2-BA4B-00A0C93EC93B A29ACCA5-EF76-4D62-BF1E-CD49D0A9022E EFI-SYSTEM
/dev/nvme0n1p3   264192    1050623     786432 0FC63DAF-8483-4772-8E79-3D69D8477DE4 6349C087-7B38-481B-A78A-CAD2856585D8 boot
/dev/nvme0n1p4  1050624   17827839   16777216 0FC63DAF-8483-4772-8E79-3D69D8477DE4 B746A186-6A49-4E59-B83D-621CFE1AB0A6 root
/dev/nvme0n1p5 17827840 1875382960 1857555121 0FC63DAF-8483-4772-8E79-3D69D8477DE4 E58785F3-5C52-48B2-8386-C7C6D509797E var
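
In case it’s useful: the config above gets turned into the Ignition file the installer consumes with the standard Butane CLI, e.g.

$ butane --pretty --strict config.bu > config.ign

(config.bu being whatever the Butane file is named).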

I’ve done a similar RAID1 setup, so if you’re not sure how to do it, I can try to replicate your disk layout and test some Butane configs for reprovisioning.
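
Just to give you a starting point, here’s a rough, untested sketch of how the layout you posted could translate to a mirrored pair. It follows the mirrored-boot-disk pattern from the Butane/FCOS docs; the device names are assumptions, and the root size comes from your config above:

variant: fcos
version: 1.5.0
boot_device:
  mirror:
    devices:
      - /dev/nvme0n1
      - /dev/nvme1n1
storage:
  disks:
    - device: /dev/nvme0n1
      partitions:
        # root-1/root-2 are the labels boot_device.mirror generates;
        # overriding them by label is how you keep your 8 GiB root size
        - label: root-1
          size_mib: 8192
        # rest of the disk, like your current var partition
        - label: var-1
    - device: /dev/nvme1n1
      partitions:
        - label: root-2
          size_mib: 8192
        - label: var-2
  raid:
    - name: md-var
      level: raid1
      devices:
        - /dev/disk/by-partlabel/var-1
        - /dev/disk/by-partlabel/var-2
  filesystems:
    - device: /dev/md/md-var
      path: /var
      format: xfs
      with_mount_unit: true

Since this repartitions both disks, treat it as a full reprovision and back up anything in /var first.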