So, autoremove is: “Removes all packages from the system that were originally installed as dependencies of user-installed packages, but which are no longer required by any such package.” (from man dnf) This simply means these packages were installed as dependencies of something else, and nothing still on the system requires them any more. A lot of these look KDE related. Are you running KDE?
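If you want to see what autoremove would touch before committing to anything, something like this should work (dnf4 syntax; dnf5 spells some of these differently):

```
# List installed packages that autoremove considers unneeded:
dnf repoquery --unneeded

# Or start the removal itself: dnf prints the full transaction
# and waits for confirmation, so you can review it and answer "n":
sudo dnf autoremove
```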
Yes, the KDE/Plasma spin. I’m also noticing a lot of kf5 packages, which I assume belong to Plasma 5, so perhaps no longer needed.
But also make, ostree… I don’t compile things much, so I’m surprised they’re there.
I read the warning on the Fedora upgrade page, so I thought I should get an opinion from someone more knowledgeable than me.
DNF decides that a package is no longer needed if you haven’t explicitly asked to install it and nothing else requires it. However, that doesn’t mean the package is useless or that you don’t use it. Only remove what you are sure you don’t need.
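If you spot something in the list that you actually want to keep, you can flag it as user-installed so autoremove leaves it alone. A minimal example, using make as the package (dnf4 syntax; dnf5 renamed the subcommand to `mark user`, if I remember correctly):

```
# Mark make as explicitly requested so autoremove keeps it:
sudo dnf mark install make
```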
Unless you have experimented with dnf5, your dnf should have a reliable record of what you requested versus what was pulled in as a dependency. That list looks legit.
make and the like might have been pulled in when compiling kernel modules or building packages locally.
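If you’re curious why a given package is still around, dnf can tell you which installed packages depend on it and which transactions brought it in. Using make as the example:

```
# Which installed packages depend on make?
dnf repoquery --installed --whatrequires make

# Which transactions installed or touched make?
dnf history list make
```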
After a couple of upgrades I end up with many packages that were required to build my own software or run tests. Eventually, many of those packages are made obsolete by newer ones. When I find my software “works for me” but not for others on current Fedora or other distros (because I used a library that is no longer widely available), it is time to do a fresh install and either find replacements for the problem libraries or add the sources to my projects.
Another issue is important scientific libraries (HDF5, GDAL, netCDF) whose build options differ across distributions and over time. The trend has been toward enabling more optional features, so I have to check each library my projects have picked up to see whether the current distro build can be used.
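Each of these libraries ships a small tool that reports its compile-time configuration, which helps with that kind of check. Roughly (assuming the corresponding -devel packages are installed; exact flags may vary by version):

```
h5cc -showconfig        # HDF5 build settings
nc-config --all         # netCDF features (HDF5 backend, DAP, ...)
gdal-config --formats   # formats compiled into GDAL
```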
You should really try containers, virtualization, or Fedora Copr to isolate the build environment; running it directly on the main system is generally unwise from both a maintenance and a security perspective.
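On Fedora the lowest-friction option is probably toolbox, which wraps podman. A rough sketch (the release number is just an example):

```
# Create and enter a disposable Fedora container for builds:
toolbox create --release 40
toolbox enter --release 40

# Or a throwaway plain podman container:
podman run -it --rm fedora:40 bash
```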
I do use VMs for some throwaway test experiments, but others are resource intensive, and VMs need management. I deliberately use less capable hardware to ensure things are usable for colleagues in developing nations with limited resources (Linux runs on systems that don’t support current Windows and have been rescued from the storage closet).
Thanks for the advice! I went ahead and ran it. No problems so far, so I imagine things are fine, and I can reinstall anything I think I actually need. I’m not a programmer, so I don’t have to worry about build environments too much, thank goodness.