How to trigger expand on root partition?

For CoreOS CL:

On first boot, the ROOT partition and filesystem will expand to fill any remaining free space at the end of the drive.

How do I trigger this on FCOS? I have expanded the disk for the OVA to 100 GB, but my root partition remains at the original size. I can expand the partition manually using growpart and xfs_growfs, but is there a way to trigger a resize via FCOS or Ignition?

This is true for Fedora CoreOS as well. The root partition should get expanded on first boot. This is done by the ignition-ostree-growfs.service.

Is it not running on boot for you? What does the journal show?

journalctl -u ignition-ostree-growfs.service

Also, are you resizing the disk after you’ve booted the machine?

My build procedure runs packer to create a new image and then expands the disk. Doing it in this order creates a smaller image. With CL, the first boot after creating the new image expanded the root partition to fill the disk. With FCOS, it does not.

I’m not sure why CL worked, given that technically the boot where /sysroot got expanded was the second boot of that image (packer’s boot was the first). We were using Cloud Config, so perhaps it was something about how that worked. I can see FCOS running ignition-ostree-growfs.service in the logs during the packer boot, so it is working, just differently than CL.

Nevertheless, is there a way I can trigger FCOS to resize the disk or do I need to make my own oneshot service to do this check?

Thank you for all your help!

In FCOS it only runs on first boot, just like Ignition. I guess the growpart bits in CL ran on every boot and not just the first boot? We probably need to evaluate this whole packer workflow in the future to see how we can either support it better or give users a better way to do what they’re doing with packer.

Regarding growing the FS, you can probably just run growpart and xfs_growfs the same way coreos-growpart does, but you’ll likely have to experiment a bit.
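For reference, a day-2 resize along those lines could be a small script run from a oneshot unit. This is only a rough sketch of the idea (the script path and the /dev/sda4 root partition are illustrative assumptions, not anything FCOS ships):

#!/usr/bin/env bash
# Hypothetical /usr/local/bin/expand-root.sh: grow the root partition and filesystem.
set -euo pipefail

disk=/dev/sda   # assumption: disk holding the root partition
part=4          # assumption: root partition number

# growpart exits 0 on change, 1 with "NOCHANGE" if the partition already
# fills the disk, and 2 on error; treat NOCHANGE as success.
if out=$(growpart "$disk" "$part" 2>&1); then
  echo "$out"
elif [ "${out#NOCHANGE}" != "$out" ]; then
  echo "partition already fills the disk"
else
  echo "$out" >&2
  exit 1
fi

# Grow the XFS filesystem mounted at /sysroot to fill the enlarged partition.
xfs_growfs /sysroot

Note this assumes /sysroot is writable; the read-only sysroot wrinkle comes up further down the thread.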

I took your advice and made my own growpart script which ran on startup, and it worked fine… until we updated FCOS. It seems that /sysroot now mounts as read-only, which causes xfs_growfs to fail:

xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Read-only file system

If I remount the filesystem it works:

sudo mount -o remount,rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota /dev/sda4 /sysroot

However, trying to get it back to readonly yields:

mount: /sysroot: mount point is busy.

At this point, the only way I found to get the volume remounted correctly is to reboot. Is there an easier way to accomplish this?

We hit some issues with the read-only sysroot mounting, so we disabled it for now (see https://github.com/coreos/fedora-coreos-tracker/issues/488). I tried to resize things on the latest testing release (32.20200715.2.2) and it works fine (it should work on stable too):


[core@localhost ~]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  376K  0 disk 
sdb      8:16   0   50G  0 disk 
|-sdb1   8:17   0  384M  0 part /boot
|-sdb2   8:18   0  127M  0 part /boot/efi
|-sdb3   8:19   0    1M  0 part 
`-sdb4   8:20   0 39.5G  0 part /sysroot
sdc      8:32   0   10G  0 disk 

[core@localhost ~]$ sudo growpart /dev/sdb 4 
CHANGED: partition=4 start=1050624 old: size=82835423 end=83886047 new: size=103806943 end=104857567
[core@localhost ~]$ echo $?
0

[core@localhost ~]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  376K  0 disk 
sdb      8:16   0   50G  0 disk 
|-sdb1   8:17   0  384M  0 part /boot
|-sdb2   8:18   0  127M  0 part /boot/efi
|-sdb3   8:19   0    1M  0 part 
`-sdb4   8:20   0 49.5G  0 part /sysroot
sdc      8:32   0   10G  0 disk 

[core@localhost ~]$ sudo xfs_growfs /sysroot
meta-data=/dev/sdb4              isize=512    agcount=71, agsize=146432 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=10354427, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10354427 to 12975867

[core@localhost ~]$ echo $?
0

On the version you tested, was sysroot read-only? If not, is there going to be a solution to easily grow the filesystem with a read-only sysroot once it is re-enabled?

No. Since the read-only sysroot was disabled recently, the version I was using did not have it enabled.

I don’t think we have any specific plans, but you can use a separate mount namespace to get around your current problem.

[core@localhost ~]$ cat /proc/mounts | grep sysroot
/dev/sdb4 /sysroot xfs ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota 0 0
[core@localhost ~]$ sudo su -
[root@localhost ~]# unshare --mount
[root@localhost ~]# cat /proc/mounts | grep sysroot
/dev/sdb4 /sysroot xfs ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota 0 0
[root@localhost ~]# mount -o remount,rw /sysroot 
[root@localhost ~]# cat /proc/mounts | grep sysroot
/dev/sdb4 /sysroot xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota 0 0
[root@localhost ~]# xfs_growfs /sysroot
meta-data=/dev/sdb4              isize=512    agcount=74, agsize=141760 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=10354427, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10354427 to 12975867
[root@localhost ~]# exit
logout
[root@localhost ~]# cat /proc/mounts | grep sysroot
/dev/sdb4 /sysroot xfs ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota 0 0
[root@localhost ~]# exit
logout
[core@localhost ~]$ cat /proc/mounts | grep sysroot
/dev/sdb4 /sysroot xfs ro,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota 0 0

If you’re running the growfs from a systemd unit, you should be able to use MountFlags=slave and the growfs should work, I think.
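Roughly, such a unit might look like this (only a sketch; the unit name and paths are illustrative assumptions):

# Hypothetical expand-root.service using a slave mount namespace
[Unit]
Description=Grow the root filesystem from a private mount namespace
After=local-fs.target

[Service]
Type=oneshot
# Give the unit its own mount namespace with slave propagation, so the
# rw remount of /sysroot should not propagate back to the host.
MountFlags=slave
ExecStart=/usr/bin/mount -o remount,rw /sysroot
ExecStart=/usr/sbin/xfs_growfs /sysroot

[Install]
WantedBy=multi-user.target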


The unshare command never occurred to me! I added the following to the script that my growfs service calls on startup, and it works fine now:

# If /sysroot is mounted read-only, remount and grow it inside a private
# mount namespace so the host's mount stays read-only.
if grep -q "/sysroot.*[[:space:]]ro[[:space:],]" /proc/mounts; then
  export -f remount_and_grow
  unshare --mount bash -c -- "remount_and_grow"
else
  ...
fi
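For anyone else reading along, the remount_and_grow function referenced above is roughly the following (a sketch based on the steps earlier in the thread; adapt the device and mount point to your own layout):

remount_and_grow() {
  # Runs inside the unshared mount namespace, so the rw remount
  # stays private to it and the host keeps /sysroot read-only.
  mount -o remount,rw /sysroot
  xfs_growfs /sysroot
}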

You saved me from a really kludgey solution involving a reboot. Thanks again!


FCOS Version: 32.20200809.3.0 (stable)
We have been facing the same issue of not being able to expand the disk AFTER the first boot.
Running /usr/lib/dracut/modules.d/40ignition-ostree/coreos-growpart /sysroot as mentioned in the issue fails with a read-only filesystem error.

While the solution you posted works, we also noticed that the same /dev/sda4 is mounted at /, /var, and /sysroot.
So we tried /usr/lib/dracut/modules.d/40ignition-ostree/coreos-growpart / since / is mounted as rw, and it works. Is there anything wrong with this approach that we might have missed?

Thanks!

coreos-growpart isn’t officially supported for manual invocation (that’s why it’s in a deeply nested, non-obvious path; it’s meant to be run only on first boot from the initramfs). While it currently works, I wouldn’t use it directly in general until we finish fleshing this out.

We do plan to make that functionality available for day-2 resizing. Discussions are happening upstream at https://github.com/coreos/fedora-coreos-tracker/issues/570.