I need to free up inodes and the “lowest-hanging fruit” is /usr/share. Would symlinking it elsewhere and removing the original directory solve it? FYI, I have an ext4 filesystem and full disk encryption, so my options are limited.
Thanks Rob
As long as /home is mounted automatically at boot and there is nothing in /usr/share that will cause SELinux anxiety, it should work.
Depending on your disk layout, it might be safer to use a bind mount instead of a symlink so that systemd mounts things in the correct order, but it will probably work either way for most use cases.
It is easy enough to test. Copy the files without deleting the originals and see if it works. If it does, delete the old files.
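A minimal way to test that (a sketch only; /home/usr-share is just an example target, and it assumes rsync is installed) would be:

# copy /usr/share onto the /home filesystem, preserving ACLs/xattrs (SELinux labels)
sudo rsync -aAX /usr/share/ /home/usr-share/
# temporarily hide the original behind a bind mount and see if everything still works
sudo mount --bind /home/usr-share /usr/share
# if anything misbehaves, just undo it
sudo umount /usr/share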
That said, I can’t remember the last time I saw a filesystem run out of inodes.
You did not say which file system is in use.
I have seen some discussion of a similar nature where the btrfs file system itself needed cleanup and reorganizing, which freed up about 30% of the previously occupied space. I don’t use btrfs on my daily driver, so I did not note the exact commands used, though you can likely find those threads and the solution with a search here.
Probably btrfs balance
Could be.
I am old school and hold to the belief that “if it ain't broke, don't fix it.” Thus I continue to use ext4 and LVM rather than the new-fangled btrfs on my drives, which still work after several years of use.
I’ve an ext4 and LVM config.
Can we see the output of df -h /, df -i, and sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n?
df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora-root 49G 37G 9.8G 80% /
of which 8.4G is swap
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
devtmpfs 1048576 596 1047980 1% /dev
tmpfs 2027206 548 2026658 1% /dev/shm
tmpfs 819200 1289 817911 1% /run
/dev/mapper/fedora-root 3276800 3268147 8653 100% /
tmpfs 1048576 95 1048481 1% /tmp
/dev/nvme0n1p2 65536 40 65496 1% /boot
/dev/mapper/fedora-home 11763712 560149 11203563 5% /home
/dev/nvme0n1p1 0 0 0 - /boot/efi
tmpfs 405441 221 405220 1% /run/user/1000
4119 5.19.16-200.fc36.x86_64
4148 6.0.5-200.fc36.x86_64
I have to create a startup script for that bind mount, right?
You can put it in /etc/fstab
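For example, assuming the copy ended up at /home/usr-share (adjust to wherever you actually put it), the fstab entry for the bind mount could look like:

/home/usr-share  /usr/share  none  bind,x-systemd.requires-mounts-for=/home  0 0

The x-systemd.requires-mounts-for option just makes it explicit that /home must be mounted before the bind mount; it is probably not strictly necessary, but it removes any doubt about ordering.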
Using ext4 and LVM, it is easy (with /home not mounted) to shrink the /home volume by the amount needed for other use.
This can be done by logging into a virtual terminal as root with ctrl + alt + F[3456]
. You may need to create a password for root before logging in that way.
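If root does not have a usable password yet, setting one is a one-liner:

sudo passwd root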
It also can be done by booting to live media so the main system is not mounted.
Then once space in the VG is available you can grow / to the desired size.
If you already have unused space available in the VG then simply expand the root LV.
Commands that may be useful include
vgdisplay
which will list the VGs and show the free space available if any
lvdisplay
which will give info about the LVs.
lvreduce
which will shrink a logical volume (and should be used with the -r option to resize the contained file system at the same time).
lvextend
which may (should) also be used with the -r option to enlarge an LV and resize the file system at the same time.
lvresize
which can be used to either enlarge or shrink an LV and takes some of the same options as the previous 2 commands.
Use the man page for each of the above commands to understand how to use them. There is also a -t option for each to do a dry run and see what would change before actually making any changes.
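As a concrete sketch (the VG/LV names match the df output above, but the -10G/+10G figures are only examples; check free space with vgdisplay first), moving 10 GiB from /home to / could look like:

# from a root shell while nothing is using /home (virtual terminal or live media)
umount /home
lvreduce -r -L -10G /dev/fedora/home    # shrink the LV and its ext4 file system together
lvextend -r -L +10G /dev/fedora/root    # grow / and its file system, which also adds inodes
mount /home

Adding -t to the lvreduce/lvextend lines first shows what would happen without changing anything.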
I have space, but I’m short on inodes.
According to that you have no space in the root file system.
That was the output of df -i, thus no free inodes; except root has a few more available.
Thanks, I missed that.
On my system I see
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 9830400 1173156 8657244 12% /
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora-root 147G 67G 74G 48% /
so it seems really strange that you have all the inodes used with space available.
You could copy the content of /usr/share to a directory under /home, but it makes more sense to me to expand the /dev/mapper/fedora-root logical volume and then work on freeing up those extra inodes.
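You can see the inode geometry directly with tune2fs (the device name is taken from your df output above):

sudo tune2fs -l /dev/mapper/fedora-root | grep -i inode

ext4 creates a fixed number of inodes per block group, so growing the file system with lvextend -r / resize2fs adds inode capacity in roughly the same proportion as it adds blocks.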
In my experience the only time that occurs is when an ext4 file system is running close to full for an extended time and the file system actually fragments. Fragmentation is not a normal occurrence with ext4, but it can happen when the file system is used for some time in nearly full conditions.
I have a script that I use to effectively defragment a file system by copying the fragmented files so they are no longer fragmented and occupy as few filesystem extents as possible. I only needed it once when my /home filesystem was at 95+% full for quite some time and many files that could fit inside one extent were scattered. The inode count was abnormally high at that time as is yours now. It worked well after I expanded the file system.
I could possibly send it to you by email if you wish, but there are no guarantees. I can only say it worked for me when needed.
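For reference, here is a minimal sketch of that copy-to-defragment idea (not the actual script; the default path and extent threshold are arbitrary, and filefrag comes from e2fsprogs):

#!/bin/bash
# Sketch only: rewrite heavily fragmented files so the new copies are
# allocated contiguously. Run as root on an otherwise idle file system,
# and only with good backups -- it replaces files in place.
TARGET=${1:-/home}      # tree to scan (example default)
MAX_EXTENTS=${2:-4}     # rewrite files that have more extents than this

find "$TARGET" -xdev -type f | while read -r f; do
    # filefrag prints "name: N extents found"; the count is third from the end
    extents=$(filefrag "$f" | awk '{print $(NF-2)}')
    if [ "$extents" -gt "$MAX_EXTENTS" ] 2>/dev/null; then
        tmp=$(mktemp --tmpdir="$(dirname "$f")") || continue
        # copy preserving mode/owner/timestamps, then swap it into place
        cp -p "$f" "$tmp" && mv "$tmp" "$f" || rm -f "$tmp"
    fi
done

e2fsprogs also ships e4defrag, which defragments ext4 files in place and may be worth trying first.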
Can you run
sudo find / -xdev | sed -e 's;/;;' -e 's;/.*;;' | sort | uniq -c |sort -n
That will count the total number of files in each top-level directory. The total number of files contained in a file system should be roughly equal to the number of inodes used.
If you find a directory with an unusual number of files, check if they should all be there.
You can also run sudo e2fsck -f -v -n /dev/device
to find out info about a file system.
The -n option must be used on a mounted file system to ensure no changes are made (changes to an active file system can cause corruption); it still reports the details, though expect it to flag some spurious errors when the file system is in use, as in the warning line below.
This is the summary report from the above command on my /home volume (6TB in size)
/dev/mapper/fedora_raid1-home: ********** WARNING: Filesystem still has errors **********
1082345 inodes used (0.54%, out of 201326592)
18391 non-contiguous files (1.7%)
780 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 1070930/1001
1347829286 blocks used (83.68%, out of 1610612736)
0 bad blocks
456 large files
963010 regular files
96755 directories
0 character device files
0 block device files
7 fifos
4950 links
22558 symbolic links (10393 fast symbolic links)
2 sockets
------------
1087286 files
Thanks, I’ll try your script, and if you can put it on a Gist that’d be great (so others can use it too); paste the link here as a reply.
And I had run close to full in the past, so that could be the case.
2878 opt
4974 etc
102626 root
192197 usr
2919660 var
var has only 28772 items and is ~half the size of /usr.
I can only think of flatpak (flathub repo) “messing” that up somehow.
Then run
sudo find /var | sed 's;[^/]*$;;' | sort | uniq -c | sort -n
to narrow down which subdirectory has too many files.
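An alternative that reports per-directory inode counts directly (GNU du; the depth and tail count are just examples):

sudo du --inodes -x -d 3 /var | sort -n | tail -n 20

If flatpak turns out to be responsible, the heavy directories will usually be under /var/lib/flatpak.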