OK, so it sounds like we’ll need 1500–2000 packages. That is still an improvement.
Please correct me if I’m wrong, but I’d expect Rust packages to be a) self-contained in the Rust stack, and b) resilient to architecture details (because of the strict fixed-width typing). So I’d think that Rust packages don’t typically fail on one architecture but not another.
Yes. Though I’m likely the person least affected by that improvement.
Yes, with some exceptions - such as dependencies on system libraries like glibc, OpenSSL, SQLite, zlib, or cURL. Though those are almost certainly already pulled in by other things too, since they’re all very widely used C libraries. Other than those, the stack of Rust packages is almost entirely self-contained, and has only a minimal footprint (i.e. the default Python interpreter and one no-dependency noarch-only Python package - cargo2rpm). There are some exceptions (like rust-ring and, in the future, rust-aws-lc-rs) that pull in … more stuff (in these two cases, the Perl interpreter and a Go compiler, respectively).
Note that the Rust compiler and cargo themselves also have … more dependencies (refer to rust.spec for those).
Mostly yes, but also no. I’ve seen a surprising amount of architecture-specific breakages that were caused by a) “our code assumes pointers are always 8 bytes wide” (i.e. different sizes of the usize and isize types), b) “our code assumes that this type from the C standard library has the same size (or signedness!) everywhere”, or c) “our code assumes page size is always 4KB everywhere”.
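For anyone curious what those failure modes look like in practice, here’s a minimal Rust sketch - illustrative only, not taken from any specific package, and the page-size query assumes the `libc` crate as a dependency - that probes the three assumptions:

```rust
// Probing the three assumptions above. Compiles and runs on any common
// target; the comments note where each hard-coded assumption breaks.

fn main() {
    // a) Pointer width: usize/isize are 8 bytes on x86_64 but 4 bytes
    //    on i686 or armv7, so "pointers are always 8 bytes" fails there.
    println!("usize is {} bytes", std::mem::size_of::<usize>());

    // b) C type assumptions: c_char is i8 on x86 but u8 on aarch64,
    //    ppc64le, and s390x, so signedness assumptions don't hold.
    println!("c_char is signed: {}", std::ffi::c_char::MIN != 0);

    // c) Page size: hard-coding 4096 is wrong on aarch64 kernels built
    //    with 16K/64K pages and on ppc64le (commonly 64K); query it at
    //    runtime instead (this call assumes the `libc` crate).
    let page = unsafe { libc::sysconf(libc::_SC_PAGESIZE) };
    println!("page size is {page} bytes");
}
```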
In addition to those “obvious” portability issues - the address space on 32-bit architectures is just … really small. We have a (not small) number of Rust packages that hit OOM issues when building on i686 with our default compiler flags, and we need to apply workarounds to those packages to keep peak memory usage below 3 GB (or whatever the actual limit is).
That address space problem would be solved by cross-compiling on an x86_64 system, but that’s just opening another can of worms …
It might work sometimes, but I don’t think this would work in general. For example, packages for 64-bit executables would also pull in packages for 64-bit libraries and other 64-bit userspace stuff, which is often not co-installable (i.e. because of file-level conflicts) with the 32-bit things that would also be pulled in as build dependencies.
RHEL 9 includes 32-bit libraries, so RH will do at least some 32-bit maintenance until RHEL 9 EOL. Could Fedora continue supporting 32-bit libraries until the final RHEL 9 release (approximately mid-2027)?
I don’t want to make RH’s life harder than necessary, nor Fedora’s. But RHEL 9 was based on Fedora 34, and RHEL 10 on Fedora 40. Whatever we decide for upcoming Fedoras will hardly affect RHEL 9, unless they do backports across 9 Fedora releases.
I know this was discussed more at length in the previous thread, but it turns out that Valve does have a pretty comprehensive list of what packages they expect the distro to provide – at least for the Steam use case:
I bring up the Steam use case because, as someone who works in game development (and knows people in adjacent fields), these are the sort of libraries that developers “expect” to be provided at large with 32-bit compatibility.
For instance, if I were writing something that (for some reason) needed a 32-bit environment, I would happily provide all libraries except for things like glibc, libgl, and so on. Put another way, from the developers I’ve spoken to on this, the lower-level a library is, the less developers want to provide their own bundled version.
I’d say that while this list is specific to the Steam use case, it’s a list that I feel will be extremely common among other use cases as well.
This list is close to what my blind tarball Steam install experiment came up with.
Note that right now Fedora doesn’t provide libudev.so.0 from that recommended list. And a quick dnf provides check in my enabled repos doesn’t turn it up either. Something for whatever this new SIG is gonna be called will need to pick up maintenance for both 64-bit and 32-bit arches.
It’s interesting that this is the only backwards-compat shim library in the list; I was expecting more.
libudev compat is probably for some older controller drivers, I’d imagine. 99% of users won’t need it, but their list of recommended runtimes is meant to cover all situations.
But for the dead-software use case that you care most about - game binaries - compat libraries like this become more and more important over time, as dependency ABIs shift while the old dead software is never rebuilt against the new library ABIs.
Compat libraries, or at least shims to paper over the ABI changes, are directly in the scope of what you want to see maintained. If libEGL or, more likely, libvulkan (because it’s under active development) for some reason needed an .so version bump in the next couple of Fedora releases, everything in Fedora itself could be rebuilt and the project could just move on, without needing a compat library shim for the old ABI. But the things you care about absolutely would need one, because they are NOT getting rebuilt against the new library ABI … ever.
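To make the shim idea concrete, here’s a minimal sketch (Rust 2021 syntax; the library and function names are entirely hypothetical) of what a forwarding compat shim can look like - a small cdylib that keeps the old symbol resolvable while calling into the bumped library:

```rust
// Hypothetical compat shim: built as a cdylib (crate-type = ["cdylib"]
// in Cargo.toml) and installed as, say, libexample.so.1, re-exporting
// an old ABI entry point by forwarding to the bumped library's API.
// All names here are made up, purely to illustrate the pattern.

// The *new* API we forward to, resolved against the bumped library
// (e.g. libexample.so.2) at link time.
#[link(name = "example")]
extern "C" {
    fn example_init_v2(flags: u32) -> i32;
}

// The *old* ABI entry point that dead binaries still look up by name.
// `#[no_mangle]` and `extern "C"` pin the exported symbol name and
// calling convention so the dynamic linker can resolve it.
#[no_mangle]
pub unsafe extern "C" fn example_init() -> i32 {
    // The hypothetical v1 call took no flags; map it onto v2 with a
    // sensible default.
    example_init_v2(0)
}
```

In practice, Fedora’s compat packages are more often just builds of the old library version rather than forwarding shims, but either way the point is the same: someone has to keep the old soname and its symbols resolvable.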
I’m just shocked there aren’t more of them than just that one for Steam right now, considering its age. The fact that udev is the only host-recommended library that has seen an ABI bump in Steam’s lifetime so far is sort of remarkable to me.
A lot of those can be provided by the software, or Steam’s runtime, or Lutris, etc. The problem I was worried about was these low-level libraries/runtimes not being provided, as generally developers won’t provide those. Everything else is typically provided, from what my research suggests.
I’m going to try one more time with a historical example out of that list …
In the Fedora 27 / 28 timeframe there was a jump from libva.so.1 to libva.so.2, dropping libva.so.1 entirely from the distro. And around Fedora 30, libstdc++.so.5 was also dropped; it had been a compat package for a while prior to that.
If Steam had been around in … checks notes … 2018ish, before those libraries got bumped to new ABI versions, someone would have had to show up and commit to maintaining the backwards-compat ABIs to keep Steam working in the Fedora 30 and beyond timeframe.
Luckily for all of us, Steam didn’t exist then. But there was definitely binary software that was pinched by the ABI change; the internet seems to remember, if you ask it nicely to dredge up old forum and mailing-list posts from that period.
Moreover, any dead software from the pre-2019 timeframe that could not be rebuilt for Fedora 31 and beyond, and that expected either libva.so.1 or libstdc++.so.5 to be provided by Fedora, has been unable to function for years now. So if this list represents a reasonable minimal stack of backwards compat, then historically we’ve only been able to achieve about 6 or 7 years of backwards compat. And I think we’ve only achieved that because we haven’t seen a lot of development in the base elements that would result in an ABI bump. Honestly, I think we are overdue for something deep in the stack to see an ABI shift.
My point is: you, and everyone else who cares about the backwards-compat situation for dead binary software, have to expect everything in that list to have a version bump again. And existing maintainers won’t be providing the shim or taking on the burden to maintain it. Someone will have to stand up and commit to doing the work to keep backwards compat. The expectation that the existing maintainers contributing to Fedora will keep backwards compat for this short list forever is not realistic now, nor was it realistic historically.
Okay. If there is a major ABI shift, obviously, there will be problems. I’m not concerned so much with these specific versions as I am with 32-bit builds of these runtimes being somehow available. If there is a major ABI shift, Valve and others will have to make adjustments, and it will be everyone’s problem, not just ours. But even after an ABI shift, developers will expect 32-bit programs to still be runnable on an x86_64 platform, because that is the entire point of x86_64, and the entire reason it was adopted over other 64-bit implementations: backwards compatibility with 32-bit software.
Whether that expectation is reasonable or not from our perspective is immaterial to some extent. That expectation is set by years of precedent, and by the very spec of x86_64, which accounts for 32-bit compatibility. It’s a philosophical question what developers should provide in their own runtimes vs what a distro should provide, but the idea of developers providing their own glibc seems odd; conventionally, stuff like core runtimes is provided by an operating system/distro. There are technical reasons too, I’m sure, but convention alone can be a powerful thing, and we can’t change it, not on our own.
Put another way: if there is an ABI shift that breaks software, someone else can provide runtimes to make it compatible. But if that software is 32-bit, those compat runtimes will still expect some baseline glibc to be in place to work with. If you cannot run 32-bit software at all out of the box on a distro, those compatibility layers won’t help much, since they generally interact with an existing 32-bit set of libraries.
Sure, we can’t maintain, say, the .so.6 of a package forever, but maintaining the latest 32-bit version will be necessary even for others to build compatibility tools. Installing these low-level runtimes on an already functional distro that’s up and running is problematic. That’s why they are among the first things installed with a distro image, and why they are expected to be provided.
In general, moving forward with technology and abandoning backward compatibility has been hugely successful in the open source world. There is no way things would have evolved so far had the IBMs and MSs of the world had their way, locking progress into their proprietary models.
Yet to some extent backward compatibility is helpful and desirable. Running on 15-year-old hardware allows a much larger population to benefit from recent innovations, and the fact that this would not be possible otherwise might be a good case in point.
But why oh why would Fedora go down a path where supporting some proprietary platform would be important in any way, let alone block real progress?
Neither Fedora nor anyone in the open source world will ever have the power to fix the proprietary platform. If attracting certain types of users is a goal where these proprietary platforms become a necessary enticement, then open source computing has shifted in a bad direction.
The proprietary software in question is one extremely common example that happens to be emblematic of the 32-bit compatibility developers expect. It is not unique, however: 32-bit Wine is still desirable over WoW64 in many cases due to the latter’s instability, and it depends on 32-bit libraries as well. Wine is open source. Lutris has compatibility tools that depend on 32-bit runtimes. Lutris is open source.
See also AlmaLinux, one of our sister projects, which also needs 32-bit packages for their FEX emulation on Apple silicon. GCC developers expressed on the mailing list that they need 32-bit runtimes as well to keep using Fedora as their dev distro. So no, it’s not “proprietary software” getting in the way. It’s also not “blocking progress”. Again, x86_64 as a spec caught on over other 64-bit implementations because it could run 32-bit software. It’s an expectation inherent in the spec. You want to change that, go back to 1999 and tell AMD not to make AMD64, I guess.
That x86_64 would never have happened in '99 without 32-bit support is undeniable. I remember the churn happening in '99 at my employer at the time: Intel.
But it is no longer '99 or 2009 or 2019.
Today CPUs are RISC internally with complex microcode allowing for CISC externally. But this topic is not about simplifying modern CPUs no matter how beneficial that would be.
This topic is about simplifying Fedora infrastructure and reducing the contributor time demands of building the next iteration of a valuable distribution - one whose contributions to furthering the state of the art in the open source realm help at many levels.
What are the other solutions available to the 32-bit problems that affect open source progress? Qemu or Xen? Back in the day, Solaris had “brand z” zone containers for running older software on newer hardware. In the IBM AS/400 there was SLIC, which ran on radically different hardware without changing the OS (the OS was 128-bit and ran on both 48-bit and 64-bit CPUs at the time I worked with them).
My point is that finding ways forward is necessary, and backward compatibility should never be forever.