Exception while creating partitions in Fedora CoreOS with an Ignition file

I am trying to set up Fedora CoreOS on my VirtualBox VM.

variant: fcos
version: 1.5.0
passwd: # setting login credentials
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3Nz...
    - name: rajendra
      groups:
        - wheel
      password_hash: $y$j9T$B4...   
      ssh_authorized_keys:
        - ssh-rsa AAAAB3Nza...
storage:
  disks:
    #device: /dev/disk/by-id/coreos-boot-disk
  - device: /dev/sda    
    wipe_table: false
    partitions:
    - number: 4
      label: root
      size_mib: 10240
      resize: true    
    - number: 1
      label: swap
      size_mib: 4096
    - label: containers
      size_mib: 3096
  - device: /dev/sdb
    wipe_table: false
    partitions:
    - size_mib: 0
      start_mib: 0
      label: data
    
  filesystems:
    # - device: /dev/disk/by-partlabel/root
    #   wipe_filesystem: true
    #   format: xfs
    #   label: root
    # - device: /dev/disk/by-partlabel/swap
    #   format: swap
    #   wipe_filesystem: true
    #   with_mount_unit: true  
    - path: /var/lib/containers
      device: /dev/disk/by-partlabel/containers
      format: xfs
      with_mount_unit: true
    - path: /data
      device: /dev/disk/by-partlabel/data
      format: xfs
      with_mount_unit: true
  files:
    # CRI-O DNF module
    - path: /etc/dnf/modules.d/cri-o.module
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [cri-o]
          name=cri-o
          stream=1.29
          profiles=
          state=enabled
    # YUM repository for kubeadm, kubelet and kubectl
    - path: /etc/yum.repos.d/kubernetes.repo
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
          enabled=1
          gpgcheck=1
          gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
          exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
    # configuring automatic loading of br_netfilter on startup
    - path: /etc/modules-load.d/br_netfilter.conf
      mode: 0644
      overwrite: true
      contents:
        inline: br_netfilter
    # setting kernel parameters required by kubelet
    - path: /etc/sysctl.d/kubernetes.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          net.bridge.bridge-nf-call-iptables=1
          net.ipv4.ip_forward=1

This is the content of my Butane config, which I compile to an Ignition file.
What could be wrong with it?
I am getting the following exception:

I see you are trying to write a swap partition on partition 1. That definitely won’t work, as we make assumptions about at least the first 4 partitions on the disk (see Configuring Storage :: Fedora Docs).

Try removing the number: 1 there.
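
Something like this is how I would write it (a minimal sketch based on your config; without an explicit number, the partition is simply appended after the reserved ones):

    storage:
      disks:
        - device: /dev/sda
          wipe_table: false
          partitions:
            # resizing the reserved root partition (number 4) is supported
            - number: 4
              label: root
              size_mib: 10240
              resize: true
            # no explicit number, so this lands after the reserved partitions
            - label: swap
              size_mib: 4096
            - label: containers
              size_mib: 3096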

Oh, that’s a big miss on my part. I’ll update it and get back with the results.
Thank you.

Looks like swap worked this time, but I got this new exception for the /data partition:

How can I make sure /data gets mounted on /dev/sdb?
Am I missing a label on the filesystem?

    - path: /data
      device: /dev/disk/by-partlabel/data
      format: xfs
      label: data
      with_mount_unit: true

Is this what I need?

Try using /var/data as the path. Top-level paths aren’t easily supported.

This is not the OS disk; data is a partition on a different disk, /dev/sdb. Do we still need to use /var/data?

From the Fedora CoreOS Documentation, slightly modified to match your case:

As OSTree is used to manage all files belonging to the operating system, the / mountpoint is not writable.

Adding top-level directories (e.g. /data) is currently unsupported and disallowed by the immutable attribute.

The real / (as in the root of the filesystem in the root partition) is mounted readonly in /sysroot and must not be accessed or modified directly.

The recommended location for creating mountpoints, directories, and other user data is /var. That is, all data must be kept under /var and will not be touched by system upgrades.
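
So for your config, moving the mountpoint under /var should be enough. A minimal sketch (the data partition on /dev/sdb stays exactly as you defined it):

    filesystems:
      - path: /var/data
        device: /dev/disk/by-partlabel/data
        format: xfs
        with_mount_unit: true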

Thank you for the information. I’ll try to mount the data partitions at /var/data-ssd and /var/data-hdd.
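
Something like this is what I have in mind (just a sketch; I am assuming the SSD shows up as /dev/sdb and the HDD as /dev/sdc, the actual device names may differ):

    storage:
      disks:
        - device: /dev/sdb
          wipe_table: false
          partitions:
            - label: data-ssd
              size_mib: 0    # use the whole disk
        - device: /dev/sdc
          wipe_table: false
          partitions:
            - label: data-hdd
              size_mib: 0
      filesystems:
        - path: /var/data-ssd
          device: /dev/disk/by-partlabel/data-ssd
          format: xfs
          with_mount_unit: true
        - path: /var/data-hdd
          device: /dev/disk/by-partlabel/data-hdd
          format: xfs
          with_mount_unit: true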
I have one more question. I am using this bare-metal server as a Kubernetes worker node; it has a 2 TB HDD and a 1 TB SSD. Is this a reasonable use case? My understanding is that a worker node generally does not need much additional storage: some space for the image store, plus a bit more in case we ever need a host mount. What is the best practice?

Kubernetes supports a range of storage solutions, so it depends on the application’s requirements and the cluster’s configuration.

Seems good to me. Fedora CoreOS supports that just fine.

One last question. Which option is better for this server with 32 cores, 128 GB RAM, and 3 TB of storage?

  1. Running 4 instances (namespaces) of my application stack on a single bare-metal Kubernetes worker node.
    Or
  2. Running 4 VMs as Kubernetes worker nodes in the cluster, still running 4 namespaces of my application.

Consider factors like resource sharing, manageability, and the best utilization of resources.

I think having such resources calls for a multi-node cluster configuration. That way you will be able to add, remove, and scale not only the worker nodes but also the control plane nodes.

Since I am currently studying Kubernetes, I would like to point out that creating a highly available, scalable, and secure cluster can be challenging, but the knowledge gained is all the more rewarding.
