Understanding the "xz" security vulnerability's implications


Regarding the CVE-2024-3094 exploit in xz, I have a few layman’s questions:

  1. What would/could have happened, had the exploit not been detected at this stage? Who were the targets (only public servers/systems, or anyone infected)?

  2. Was there some degree of chance in the detection of the malicious code? Could it have passed easily undetected?

  3. I am aware that open-source software development works differently from software created and distributed by companies/corporations as closed source, but I am wondering how contributors are accepted into projects, and how they gain trust. What security measures are in place (if any), besides code review and QA?

  4. Is it normal for a compression utility to be allowed to interfere with such a critical piece of system software as sshd (be the code malicious or not)? Shouldn’t such a privilege be granted only by the distros’ package maintainers, or maybe by the users of a system?


Anyone: since this type of malware allows remote code execution (RCE), if not in the current version then surely in a future release, it would not matter much whether the infected machine is a public server or not.

I guess detection was inevitable due to the wide scope of the attack. The malware would eventually need to send data, which can be detected by intrusion detection systems (IDS) monitoring transit traffic passing through routers and gateways and flagging suspicious activity. Alternatively, someone would notice one of the "mistakes" in the code that were made intentionally to disable security checks and hide traces.

This depends on the specific project, but the main criterion is typically the number and quality of already accepted contributions, including to related FOSS projects. So it’s basically reputation, which is relatively difficult to build and easy to destroy. This allows creating a web of trust that can be technically verified with GPG.
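As an aside, signed releases are one concrete form this trust takes: you would check a detached GPG signature with `gpg --verify xz-X.Y.Z.tar.gz.sig xz-X.Y.Z.tar.gz`, assuming the maintainer's key is already trusted in your keyring. A minimal sketch in Python of the simpler, related step of checking a downloaded archive against a published SHA-256 checksum (all file names and the demo content here are made up for illustration):

```python
import hashlib
import hmac
import os
import tempfile

def sha256_of(path):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_checksum(path, expected_hex):
    """Compare the file's digest against the value published with the release."""
    # Constant-time comparison; overkill for public checksums, but harmless.
    return hmac.compare_digest(sha256_of(path), expected_hex)

# Demo on a throwaway file standing in for a release tarball.
payload = b"pretend this is xz-x.y.z.tar.gz"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    tarball = f.name
published = hashlib.sha256(payload).hexdigest()
print(verify_checksum(tarball, published))   # True
print(verify_checksum(tarball, "0" * 64))    # False
os.unlink(tarball)
```

Note that a checksum only proves the download matches what the project published, while the GPG signature additionally ties it to a keyholder, which is where the web of trust comes in.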

It doesn’t even matter whether an SSH server is installed, since multiple other system components depend on this library and could trigger the malware at any time.
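For anyone checking their own machine: per the CVE-2024-3094 advisory, the backdoor shipped only in xz/liblzma versions 5.6.0 and 5.6.1. A minimal sketch, assuming you can obtain the version string from `xz --version` or your package manager (the helper names are made up):

```python
import re

# Versions named as affected in the CVE-2024-3094 advisory.
AFFECTED = {"5.6.0", "5.6.1"}

def parse_version(output):
    """Extract the first dotted-number token, e.g. 'xz (XZ Utils) 5.6.1' -> '5.6.1'."""
    m = re.search(r"\d+\.\d+(?:\.\d+)?", output)
    if not m:
        raise ValueError(f"no version found in: {output!r}")
    return m.group(0)

def is_affected(version):
    """True if this exact version string is one of the backdoored releases."""
    return version in AFFECTED

print(is_affected(parse_version("xz (XZ Utils) 5.6.1")))  # True
print(is_affected(parse_version("xz (XZ Utils) 5.4.6")))  # False
```

The distros' own advisories remain the authoritative source on which package builds were actually affected, since some shipped patched 5.6.1 builds under the same upstream version.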


I think this was specifically caused by:

  • having strange “binary test files” in the project. What are those even used for?
  • not building the package from the Git source but from the release archive
  • having a core dependency of practically everything maintained by a single person, just like in the xkcd comic…
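The second point can be made concrete: the malicious `build-to-host.m4` existed only in the release tarball, not in the Git repository, so comparing the two trees would have surfaced it. A toy sketch in Python that reports files present in an unpacked tarball but absent from a checkout (directory names and demo contents are invented; a real tarball also legitimately contains generated files missing from Git, so this only narrows down what to inspect):

```python
import os
import pathlib
import tempfile

def extra_files(tarball_dir, git_dir):
    """Relative file paths present under tarball_dir but missing from git_dir."""
    def listing(root):
        found = set()
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                found.add(os.path.relpath(os.path.join(dirpath, name), root))
        return found
    return listing(tarball_dir) - listing(git_dir)

# Demo on throwaway directories standing in for the two source trees.
base = pathlib.Path(tempfile.mkdtemp())
(base / "git").mkdir()
(base / "tar").mkdir()
(base / "git" / "Makefile.am").write_text("all:\n")
(base / "tar" / "Makefile.am").write_text("all:\n")
(base / "tar" / "build-to-host.m4").write_text("# injected\n")  # only in the tarball
print(extra_files(base / "tar", base / "git"))  # {'build-to-host.m4'}
```

Reproducible, Git-derived release archives would make this kind of check automatic instead of something a reviewer has to think of.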

Thanks for the answers.

I’m just wondering, could it have made it into stable releases?

What are the rules here? Was building the package from the release archive the proper way to go in this particular situation?

You may want to read through these for additional information.

This looks like a good timeline of what happened:

The “just-in-time” detection could well be misdirection to cover the fact that one of the TLAs (three-letter agencies) knew about the exploit and was quietly watching. The attacker would have expected the exploit to be detected eventually, but hoped for wide deployment (e.g., in Fedora 40 and third-party applications) before that happened. There are always sites that don’t take the required actions to clean up after a malware incident.

Seems like Rust got it too: the backdoored test files turned up in a crate, found on April 9 and fixed on April 10.

"The current distribution (v0.3.2) on Crates.io contains the test files for XZ that contain the backdoor," Phylum noted in a GitHub issue raised on April 9, 2024.

Full article on The Hacker News.

Wow, that was a great summary, thanks! 💯