Adding encryption for disks at install and the strength of the crypto used do not have to be tightly linked.
You can loosely couple these improvements, right?
It would be helpful if the plan documented a method to permanently decrypt the installation.
What is the reason for needing this as a requirement?
I do not believe that post-quantum cryptography is very important in this case. Grover’s algorithm (Shor’s targets asymmetric cryptography) can at best give a quadratic speedup against symmetric ciphers, which is equivalent to halving the key length. So in the case of LUKS2 using AES-256, in the event that one day we have a true working, error-free quantum computer, AES-256 would be equivalent to AES-128, and thus still safe beyond any expectations. My two cents’ worth.
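The back-of-the-envelope arithmetic behind this argument can be sketched as follows (a toy calculation, not a cryptographic proof):

```shell
# Grover's search gives at most a quadratic speedup against a symmetric
# cipher, which is equivalent to halving the effective key length in bits.
aes_bits=256
effective_bits=$((aes_bits / 2))
echo "AES-${aes_bits} post-quantum effective strength: ${effective_bits}-bit"
# prints: AES-256 post-quantum effective strength: 128-bit
```

128-bit symmetric strength is still considered well out of reach of brute force, which is the basis of the “still safe” claim above.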
Hmm, I don’t quite follow this.
Say that I have a hypothetical PCR7-without-dbx measurement of a booted system. If a dbx record is added that revokes one of the components of the boot chain, then that boot component and the encryption key tied to PCR7 better be updated before the system is rebooted, or the system will be unbootable. Once the PCR7 value has been updated, the system can no longer be unlocked with the old revoked binary.
But if some other boot component not involved in the boot chain is revoked, what’s the harm? If an attacker downgrades the dbx (say by moving the boot drive to an un-updated system), then that still won’t let the attacker fake out the dbx-free PCR7 value, right?
I ask, not because we can change the behavior of PCR7, but because I’d like to understand the threat - if we’re trying to come up with some PCR7-free way of handling things (like Lennart’s idea that shim would extend PCR11 and other non-standardized PCRs), then is that going to be robust against the same threat?
A dual-boot system is obviously challenging. Even if we somehow figure out a method to avoid needing the recovery key when Windows updates the dbx, the opposite problem will still exist - if the dbx is updated from Linux, Windows will need to be recovered.
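For the single-boot case, the fallback-then-re-enroll dance around a dbx update can be sketched with systemd-cryptenroll (a sketch only, assuming a LUKS2 volume; /dev/nvme0n1p3 is a made-up placeholder path):

```shell
# Before applying a dbx update, make sure a non-TPM fallback exists,
# since PCR7 will change on the next boot:
systemd-cryptenroll --recovery-key /dev/nvme0n1p3

# After the first boot with the new dbx (PCR7 now has its new value),
# replace the stale TPM enrollment with one sealed to the new PCR7:
systemd-cryptenroll --wipe-slot=tpm2 /dev/nvme0n1p3
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3
```

These commands require a real TPM and LUKS2 device, so this is a command sketch rather than something runnable in isolation.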
Hi Mark!
Supporting OEM and other preinstalls is definitely a requirement (I should actually list it under requirements rather than just having a separate section.)
Proving that the preload has not been tampered with is pretty much a separate task. I’d imagine that any solution for that would involve us maintaining a database that would map:
Chain of trust for firmware+bootloader+kernel => expected preload image checksum
And we’d need some way for OEMs to submit items to that database, perhaps with some verification that Fedora actually considers the image “genuine Fedora”, whatever that means.
We don’t want the disk to be automatically unlocked if an insecure boot component is used, because an attacker could compromise the boot process and gain access to the decrypted data. dbx isn’t protected against an attacker with physical access (you can just rewrite the contents of flash), so the attacker can just remove the revocation record from dbx and boot the insecure component. Measuring dbx into PCR7 protects against this, since the PCR7 measurement will change and the TPM will refuse to release the disk encryption secret.
The use of PCR7 is, unfortunately, vital unless you can assert that the first stage in your boot process is guaranteed to be perfectly secure. Otherwise, if an attacker can subvert that first stage loader, they can just fake up the rest of the measurements and you’ll have no way of knowing - and no way to know if they’ve rolled back dbx in order to boot a revoked version.
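To make the rollback argument concrete, here is a toy model of PCR chaining. This is not the TPM’s real event-log format; it just approximates PCR_new = SHA-256(PCR_old || measurement) by hashing stand-in hex strings, which is enough to show why a rolled-back dbx cannot reproduce the expected PCR7:

```shell
# Toy model of a PCR extend operation: the register can only be extended,
# never written directly, so every measured value feeds into the final hash.
extend() { printf '%s%s' "$1" "$2" | sha256sum | awk '{print $1}'; }

zero=$(printf '0%.0s' $(seq 1 64))   # PCR starts zeroed at boot
db=$(printf 'db: distro CA cert' | sha256sum | awk '{print $1}')
dbx_new=$(printf 'dbx: revocation list v2' | sha256sum | awk '{print $1}')
dbx_old=$(printf 'dbx: rolled-back list v1' | sha256sum | awk '{print $1}')

pcr7_expected=$(extend "$(extend "$zero" "$db")" "$dbx_new")
pcr7_rollback=$(extend "$(extend "$zero" "$db")" "$dbx_old")

# A rolled-back dbx yields a different PCR7, so the TPM refuses to unseal:
[ "$pcr7_expected" != "$pcr7_rollback" ] && echo "PCR7 mismatch: key not released"
```

The same chaining is why a compromised first-stage loader is fatal: if it controls what gets extended, it can replay whatever measurements it likes.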
Thanks for explaining this again! Repeated use of the phrase “if Matthew thinks this is a full explanation, what am I missing?” revealed my mistake. I was somehow thinking that dbx contained only revoked keys. Realizing that it can also contain specific image hashes makes everything clear.
Yes, I agree. Even if we could somehow count on a “perfectly secure shim”, presumably the dbx can also contain entries for firmware components.
The way I’d envision this all working is that, by default, the system on first boot just enrolls TPM2-based unlocking of the rootfs or /var. Then, after boot-up, the user can enroll more, i.e. one or more of a password, recovery key, FIDO2 token, or PKCS#11 token. It might be nice if the UI would occasionally remind the user to do so.
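With systemd-cryptenroll, that post-boot enrollment step could look roughly like this (a sketch; the device path is a placeholder for the LUKS2 volume):

```shell
# First boot enrolled TPM2-only unlocking; the user then adds independent
# fallbacks at their leisure:
systemd-cryptenroll --recovery-key /dev/nvme0n1p3        # prints a recovery key once
systemd-cryptenroll --password /dev/nvme0n1p3            # ordinary passphrase
systemd-cryptenroll --fido2-device=auto /dev/nvme0n1p3   # FIDO2 token
systemd-cryptenroll --pkcs11-token-uri=auto /dev/nvme0n1p3  # PKCS#11 token
```

Each enrollment occupies its own LUKS2 keyslot, so any one factor can later be wiped without affecting the others.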
LUKS keyslot overwrites aren’t guaranteed to be destructive through an SSD’s flash translation layer, so this potentially weakens security versus a status-quo encrypted install if the normal install flow doesn’t give people who enable encryption a password-only system from the outset.
Again, I caution against allowing the TPM to become a single point of failure for use cases that don’t require it.
It’s actually slightly more subtle than that - db needs to contain a certificate in the trust chain for a binary, but a common scenario is that db contains an intermediate cert and binaries are signed with a leaf cert that isn’t explicitly in db but chains to an intermediate that is. This way, if the signing key is compromised, you can revoke that leaf cert and then have the intermediate sign a new leaf; there’s no need to add any new entries to db to make the new signing key work. That means an entire cert can be invalidated in dbx without there needing to be a new db update, and the PCR7 measurement of the db cert used to sign the bootloader won’t change (although the dbx measurement will, after the revocation update has been installed).
Hi; here’s some feedback on the topic from a random Fedora-using systems administrator:
Here’s an article from September 2021 with an analysis of what was available for full-disk TPM-enabled encryption on Linux:
I’m hoping we can at least protect the boot process from tampering for all systems, and let the person doing the installation decide if they want the protection and risk of full-disk encryption. If the choice can be deferred and/or reversed later, even better.
One idea (source forgotten) required the user to create and boot from a recovery USB device to install the OS, which ensured that the user had created it and that it would be able to decrypt the data when it came time to recover the system.
Thank you for making Fedora more secure!
I’d like to hear from a cryptographer or cybersecurity expert a risk assessment of what information, of the kind both users and decision makers might care about, is leaked via non-encrypted metadata (current fscrypt implementations).
My layperson expectation is that quite a lot could be inferred about installed software. While those things aren’t secrets (the code is published), it could have legal impact for individuals in certain countries, such as journalists and dissidents. Whereas for user data like documents and cache files, there’s quite a lot of noise and banality, and I’m not certain whether much can be drawn from modification dates or file sizes, other than the obvious: these are database files, these are LibreOffice documents. How much of this kind of leakage is too much? How do we even go about determining that? And to what degree (and how) should we inform the user of the difference in confidentiality provided by fscrypt vs dm-crypt?
I think in Fedora the advantages of fscrypt (simplicity, and the ability to do very cheap encrypted incremental send/receive to a Fedora Server for backup) outweigh the minor confidentiality advantage of dm-crypt fully encrypting the filesystem metadata. But perhaps that’s naive, hence wanting a domain expert on the subject to clearly describe the use cases, workflows, and kinds of individuals that are at risk from metadata leaks.
fscrypt does not encrypt file metadata. In theory an attacker can identify local files using metadata and look for the same files on the public internet. Having the contents of publicly available files can help with/speed up breaking your encrypted data, since bad actors will have a very specific sample to attack, similar to the way Turing cracked the Enigma code.
The documentation says extended attributes are not encrypted. SELinux stores information about files/directories in extended attributes, including labels. SELinux labels are closely related to the contents of the file, so an attacker might be able to infer the file, or the source of the file.
This is a good point. Most everything in home is unconfined_u:object_r:user_home_t:s0 whereas items in /usr and /var vary quite a lot. So I think this makes it more likely system files can be identified; but I’m still cautiously optimistic it’s low risk exposure for the contents of user home.
> Most everything in home is unconfined_u:object_r:user_home_t:s0
That’s true for now, but in the future more a fine-grained $HOME would improve security. We already set SELinux for .gpg, so it is not hard to imagine this expanding to other files and directories.
> So I think this makes it more likely system files can be identified; but I’m still cautiously optimistic it’s low risk exposure for the contents of user home.
There are situations where, even if the content is unknown, the file type can be enough for an attacker. As SELinux policies get more fine-grained, more information would leak.
The general class of problem is that an adversary can infer the presence of any collection of files (or sufficiently large single file) that they know about from another source. For example,
Suppose you record a video of the local police doing something illegal and/or immoral, and send it to a news reporter. The news reporter edits your video down to just the important parts and publishes a story about it. Being spiteful bastards, the police raid the reporter’s home and search his unencrypted computer, finding the original video. Through normal detective work, you are identified as an associate of the news reporter whose daily commute takes you near the location where the video was recorded. Alternatively, cellphone records identify you as one of only a few people near the location at the right time. The police seize and search your computer. Even though it is “encrypted”, they are able to see that you have a file of the same size as the original video (a size that is nearly random in its last 12-14 bits, so an exact match is significant), and 2 hours older than the one on the reporter’s computer.
You can imagine similar scenarios involving dumps of classified documents sent to Wikileaks, persons involved in reverse-engineering efforts possessing NDA’d documentation, going through customs with data that is illegal to export, etc.
A file encryption scheme that exposes metadata is only strong if the files never leave your disk.
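The size side channel in the scenario above is trivial to demonstrate with ordinary files (the file names here are invented for illustration; no encryption is involved in the demo, since the point is that only metadata is needed):

```shell
# Per-file encryption that leaves metadata in the clear exposes exact file
# sizes, so a known file can be matched without ever decrypting it.
head -c 123456 /dev/urandom > evidence_on_suspect_disk.bin
head -c 123456 /dev/urandom > original_from_reporter.bin

# An investigator only needs stat(2) metadata, not the plaintext:
size_a=$(stat -c %s evidence_on_suspect_disk.bin)
size_b=$(stat -c %s original_from_reporter.bin)
[ "$size_a" = "$size_b" ] && echo "exact size match: strong corroborating evidence"
```

Full-volume dm-crypt hides this entirely; fscrypt, which encrypts contents and names but not sizes or timestamps, does not.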
In information security, security is often split into the security of …
… confidentiality
… integrity
… availability
I think this framework is a bit simplistic, but it captures an issue when it comes to disk encryption: the disk encryption that is currently used by default can itself be a threat to availability, because on some systems (those without AES-NI) it wastes resources (which can also be linked to sustainability efforts), strongly decreasing performance and also battery lifetime.
As a starting point, in the links below you might check out the (non-academic) comparison of AES-XTS with Adiantum. Be aware that on weak systems this difference can be even worse, and throughput is related to power consumption (although not necessarily a 1:1 relation). Also, without AES-NI this massive amount of work has to be done by the CPU in software, keeping its resources busy; AES-XTS cannot exploit many instruction-set extensions if there is no AES-NI.
Without repeating the whole content, feel free to check out these references:
https://bugzilla.redhat.com/show_bug.cgi?id=2077532
Generally, also feel free to check out the kernel discussion about Adiantum, which was introduced in kernel 5.0.
I also use it on some Fedora installations. Feel free to run your own tests/benchmarks. If you have systems without AES-NI (find out with lscpu | grep aes), you can also compare “no encryption” with AES-XTS (the default encryption on Fedora and most Linux distributions). In many use cases, e.g. when you need access to your system to work with your data, a dead battery can cause more trouble than other risks. And with AES-XTS on ineligible hardware (the “worst case”), it is a risk with a very high likelihood of occurring regularly (and potentially a high impact).
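If you want to reproduce the comparison locally, something like the following should work (assuming the cryptsetup package is installed; absolute numbers vary a lot by CPU):

```shell
# Does this CPU have hardware AES support?
lscpu | grep -q aes && echo "AES-NI present" || echo "no AES-NI - software AES only"

# Userspace throughput of the default cipher vs. Adiantum, using
# cryptsetup's built-in benchmark and standard dm-crypt cipher specifiers:
cryptsetup benchmark -c aes-xts-plain64 -s 512
cryptsetup benchmark -c xchacha12,aes-adiantum-plain64 -s 256
```

On AES-NI machines AES-XTS typically wins comfortably; on CPUs without it, Adiantum’s ChaCha-based design is usually several times faster.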
Just some complementary thoughts for consideration!
With the conclusion to use fscrypt to encrypt the home directory with the user password, one concern of mine is that the user password is usually not strong enough to withstand a brute-force dictionary attack in case the laptop is stolen. systemd-cryptsetup supports using the TPM with an additional PIN, which locks the TPM if several attempts to brute-force the PIN fail. Can this be implemented in fscrypt?
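For reference, the LUKS2 version of this is a one-liner (a sketch; the device path is a placeholder), and whether fscrypt can gain an equivalent is exactly the open question:

```shell
# Seal the volume key to the TPM, additionally gated by a user-chosen PIN.
# The TPM's own anti-hammering (dictionary-attack lockout) logic throttles
# and eventually locks out repeated wrong PIN attempts:
systemd-cryptenroll --tpm2-device=auto --tpm2-with-pin=yes /dev/nvme0n1p3
```

The key property is that the brute-force rate limit is enforced by the TPM hardware, not by software that an attacker with the disk can bypass.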
What about an option to use a YubiKey, Nitrokey, or other hardware key for that? Maybe also as additional security?
I know the “nitro laptop”, which is a corebooted ThinkPad T430, has this option. It would be great to have this as a possibility.