Future of Encryption in Fedora desktop variants

For quite a while, the Workstation Working Group has had open tickets (#82, #136) to improve the state of encryption in Fedora - and in particular get to the point where we can make the installer encrypt systems by default.

In order to move that forward, I’ve been working on a requirements document and draft plan. In very brief summary, the plan is:

Use the upcoming btrfs fscrypt support to encrypt both the system and home directories. The system by default will be encrypted with an encryption key stored in the TPM and bound to the signatures used to sign the bootloader/kernel/initrd, providing protection against tampering, while home directories will be encrypted using the user’s login password.

This plan is dependent on the ongoing Unified Kernel Image support, since currently Fedora uses unsigned initrds, and substituting the initrd would allow an attacker to bypass all encryption. It represents a big change: we go from secure boot being something we spend a lot of effort on but that doesn’t actually do much, to something we depend on in a big way to provide an extra layer of security to the user.

I’d be interested in hearing, among other things:

  • Are there requirements that the document doesn’t capture?
  • Are there other threats that we should be trying to address?
  • Is the focus on integrity a good idea?
  • Do we actually need to separately encrypt home directories?
  • Do we need to support a model where we bind the encryption key to the current kernel and initrd without having the combination signed by Fedora?
  • Should we be more seriously considering systemd-homed? What advantages would we get by doing that?

Thanks for any feedback!
– Owen


Is there also a passphrase enrolled that I can use when I cannot boot the system?
I’m thinking of moving a disk to a new system when hardware fails, or when the system fails to boot and needs TLC to make it work.

Regarding the Recovery Key questions in the doc:

Could this be an opportunity to engage with a third-party like Bitwarden to provide users the option to store their recovery key in a password vault? (Bitwarden supports both the hosted SaaS option and a self-hosted version). This allows us to help users maintain a backup of their key and retain it in offsite storage.

Are we explicitly choosing to abandon support of pre-TPM2 devices? Or would we just fall back to current approaches? Or something else?

That’s the “Recovery Key” discussed in Tree - fedora-workstation - Pagure.io


Given the fiasco with lastpass, I really don’t want to see Fedora Linux dependent on any such third-party service. Also, requiring a working internet connection in such recovery scenarios is undesirable IMO. I’d even go so far as to say that it should be a hard requirement that you not need internet access to recover your system.

Excerpted from pagure.io – /fedora-workstation/…/encryption.md:

Recovery Keys

But if we display at install time “Please write down this recovery key and store it in a safe place where you can find it”, we have to assume that many users will fail at the second step. Not clear how to address that best - perhaps suggesting taking a picture of the recovery key with a phone. (For many users, losing their laptop and having someone break into their online photo storage are independent threats.)

Can we require people who want to set this up to register multiple security keys, prove they work at least once, and then tell them to keep at least one locked in a secure place for recovery?

I don’t think there’s any way to provide a useful amount of tamper resistance on pre-TPM2 devices (the user can provide some themselves by setting a BIOS password and locking down boot options). I don’t think we’d want to do any sort of system-wide encryption in that case, but home directory encryption would still provide confidentiality protection in the “lost laptop” case. Just don’t trust the device if you get it back.

Providing a way to store your recovery key in a third-party system sounds interesting. Thinking about it, recovery key creation/display should not happen during anaconda, but during the first boot process, since we have so many more options for user interface at that point.
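For concreteness, a recovery key of the sort systemd-cryptenroll generates is just 256 bits of randomness rendered in a transcription-friendly alphabet. A minimal sketch of generating one (the modhex alphabet and 8×8 grouping mirror systemd’s format, but treat the details here as illustrative, not a spec):

```python
import secrets

# "modhex" alphabet: 16 characters chosen to avoid keyboard-layout and
# transcription ambiguity. Grouping with dashes aids manual entry.
MODHEX = "cbdefghijklnrtuv"

def make_recovery_key(groups: int = 8, group_len: int = 8) -> str:
    # 8 groups of 8 characters from a 16-char alphabet = 256 bits of entropy.
    chars = [secrets.choice(MODHEX) for _ in range(groups * group_len)]
    return "-".join(
        "".join(chars[i * group_len:(i + 1) * group_len]) for i in range(groups)
    )

key = make_recovery_key()
print(key)  # random each run, e.g. eight dash-separated groups
```

Whatever UI presents this at first boot could then offer several destinations for the same string: print it, save to a file on removable media, or push it to a password manager.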

Can we require people who want to set this up to register multiple security keys, prove they work at least once, and then tell them to keep at least one locked in a secure place for recovery?

We should avoid assuming that Fedora users have a secure place they could store a written key and find it again - a typical college student probably does not. It sounds like a good idea to design the user interface to accommodate saving a key (keys?) via multiple methods.

Maybe I should have been more clear. I didn’t necessarily mean for this to be a requirement, but it could certainly be a useful option. It would be relatively simple from a UX perspective and would be notably more secure than “write it down and keep it somewhere safe”.

Lastly, Bitwarden also has the “paranoid” option (that I use) which has end-to-end encryption; even a breach at Bitwarden’s datacenter won’t expose my passwords.

At least in the US, safes are pretty cheap and easy to come by. I would expect most everyone has some way of storing valuables (e.g. currency) securely.

That is absolutely not the case, particularly for people living in a shared space (with parents, roommates, etc.).

The trouble is that it creates a big target for state actors and they potentially have access to much more powerful equipment for cracking the encryption on such storage if they can obtain a copy of it. I wouldn’t trust it even if it has e2e encryption.

Okay, but this would require that they already have physical access to the hard drive (or at least a clone of it) and a sufficient reason to expend considerable resources to brute-force the key. This feels like the right place for an XKCD callout: xkcd: Security

The reason is $$. They don’t necessarily target individuals. They target large caches of personal data, use their resources to crack the encryption en masse, and then sell the compromised data on the dark web where anyone can purchase a copy (and after a time much of it is leaked and becomes available for free).


They don’t necessarily target individuals. They target large caches of personal data, use their resources to crack the encryption en masse, and then sell the compromised data on the dark web where anyone can purchase a copy (and after a time much of it is leaked and becomes available for free).

To repeat what Stephen said, the recovery key by itself is a low-value target. It only becomes valuable when combined with physical access to the device. As such, it is much less useful to most attackers than, say, an email or bank password.

People will have different threat models and risk tolerances, and we definitely should accommodate people who want to write down their recovery key and put it in a safe. But I also am concerned about the Fedora user who throws out the paper with the recovery key and then loses their class final project when their motherboard dies.

Related: Should Fedora enforce drive encryption on new installs?


For avoidance of doubt: these devices would just not have encrypted root filesystems by default, exactly the same as today.

There needs to be a bunch more detail regarding how the TPM sealing is handled. For instance, sealing against PCR 7 on its own is insufficient - if a user boots from a Fedora live CD, PCR 7 will be the same, and the TPM will happily release the secret even though no user authentication was performed. This is the reason BitLocker seals against PCR 11 as well - once the BitLocker key has been unsealed, PCR 11 is extended and the TPM will no longer release it again. The equivalent on Linux would be for the live CD to extend PCR 11 before any user interaction is performed in order to prevent this (which obviously makes the live CD useless as a recovery mechanism, but that’s kind of the point here).
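The PCR mechanics described above can be modeled in a few lines. This is a toy model of the TPM2 extend operation (pure Python, not real TPM code): because a PCR can only ever be extended forward, never reset, extending it right after unsealing guarantees the sealed value can never be reproduced later in the same boot.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM2-style PCR extend: new value = H(old value || H(measurement)).
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr11 = bytes(32)                       # PCRs start at all-zeros
pcr11 = extend(pcr11, b"enter-initrd")  # measured early in boot
sealed_to = pcr11                       # disk key sealed against this value

assert pcr11 == sealed_to               # unseal succeeds at this point

pcr11 = extend(pcr11, b"leave-initrd")  # extended before handing off control
assert pcr11 != sealed_to               # any later unseal attempt now fails
```

A live CD that extends the same PCR before offering a shell would, by the same logic, make the sealed key unreachable from that environment.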

And remember that this has to be done for everything that is signed using these signing keys! UKI may have a restricted initramfs, but if I have an old kernel and unsigned initramfs signed with the same signing keys, I can just boot those and have the same measurements and then bypass the encryption that way. Doing this properly implies rolling to a new set of signing keys for the kernel, which implies a new shim or support for shipping certs independently of shim.

And, of course, this has to be accompanied by restricting the kernel command line. If I can just pass rdinit=/bin/sh I’m going to have interactive control without a password, and the TPM is going to have a valid set of measurements for decrypting the drive, and our assumptions are broken. This is in-kernel, so can’t just be handled at the dracut layer.

We should also remember that PCR 7 will change if secure boot is disabled, or if an additional MOK is enrolled. If a user does either to install (for instance) the nvidia drivers, the measurements will change and their filesystem will no longer decrypt. There probably needs to be a robust mechanism for handling that.

I don’t mean to be negative here. I think this is, if done properly, a significant improvement in usable security. But there’s several subtle cases (and I don’t want to claim the above is necessarily comprehensive in any way!) that can result in either insecure outcomes or unexpected breakage, and I think those need to be more clearly documented before committing to implementation.


I’m not sure there’s really a huge difference between TPM1.2 and TPM2 in this respect? The quality of the TPM1.2 tooling is, well, pretty bad, but for the specific case being described here I think the appropriate security properties exist.

I think disk encryption should be widely used, but the draft plan is not quite nailed down, and some directions it could go give me pause: in particular, storing entire keys in the TPM2, and fscrypt.

Storing an entire key(file) in the TPM is no doubt required for systems that (re)boot unattended, and is the best way I know to do that. However, if user data is protected by such a key, security depends on the TPM firmware being free of bugs and backdoors (that might deniably look like bugs). This secureboot fakery was not a TPM bug, but it does show that the space of possible problems may include, “let’s not do the cryptography and pretend we did”.

I tried to convince the UAPI group to avoid the TPM being a single point of failure by hashing a TPM-sealed secret salt with the user password on the host CPU, but I failed to communicate sufficiently well.
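The salt-plus-password construction being proposed can be sketched briefly. Here stdlib PBKDF2 stands in for whatever KDF would actually be chosen (a memory-hard KDF like argon2 would be more likely in practice); none of this is UAPI-group code, it just shows why a compromised TPM alone cannot reveal the key:

```python
import hashlib
import secrets

# The TPM seals only a high-entropy salt, not the disk key itself.
tpm_sealed_salt = secrets.token_bytes(32)

def derive_disk_key(password: str, salt: bytes) -> bytes:
    # The final key is computed on the host CPU from BOTH inputs, so a
    # buggy or backdoored TPM that leaks the salt still doesn't have the key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

key = derive_disk_key("correct horse battery staple", tpm_sealed_salt)
assert derive_disk_key("wrong password", tpm_sealed_salt) != key
```

The trade-off, of course, is that this scheme cannot unlock unattended: a password entry is required on every boot, which is presumably why the proposal didn’t gain traction for the default path.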

The problem with fscrypt (at least from what I remember of how it works on ext4) is that only file names and data are encrypted, but not file sizes or directory layouts. That makes it vulnerable to a kind of known-plaintext attack when the plaintext is a few large files or many files, such as an archived YouTube channel, a source code repository, a wikileaks dump, etc. If secret data ever leaves the device, an adversary can prove that the user is in possession of it.
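The concern can be made concrete: since file sizes stay visible under fscrypt, the multiset of sizes in an encrypted directory can fingerprint a publicly known dataset. A sketch with made-up sizes (the numbers and helper name are purely illustrative):

```python
from collections import Counter

# Sizes (in bytes) of files in some publicly known corpus - made up here.
known_dataset_sizes = Counter([10_482_113, 73_220, 4_096_000, 992_183_554])

def looks_like_known_dataset(observed_sizes, known=known_dataset_sizes):
    # If every size in the known corpus appears among the encrypted files'
    # sizes, the adversary gains strong evidence the user possesses it -
    # without decrypting a single byte.
    observed = Counter(observed_sizes)
    return all(observed[size] >= count for size, count in known.items())

encrypted_dir_sizes = [73_220, 4_096_000, 10_482_113, 992_183_554, 1_024]
print(looks_like_known_dataset(encrypted_dir_sizes))  # True
```

Block-level encryption (LUKS) avoids this entirely, because an adversary reading the raw device sees neither file boundaries nor directory structure.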

Fscrypt is enough to protect browser session cookies, saved passwords, and credit card numbers, which satisfies Google’s desires for Android. But I am dismayed to see it expanding elsewhere while that problem remains.

Honestly, I like systemd-homed. It solves the multi-user problem, LUKS prevents metadata leakage, and TRIM support keeps its volumes from consuming more disk space than the data within.

Whatever ends up implemented should be no less secure against any threat model than the status quo of LUKS+password: block-level encryption of /, but not /boot, with a key derived from a password stored nowhere but inside the user’s brain, unless they explicitly write it down. The reason is that if some threats require changing the standard configuration in a visible way, users who face those threats will be marked as suspicious. As a particular example, if something weaker replaces the status quo:

Prompting for decryption is a major driver of needing framebuffer support in the initrd.

“What do you need a custom initrd for, citizen?”

This is a bit of a problem with the status quo too, but a lot of people are using strong FDE who otherwise wouldn’t, to cover the lost laptop threat model.

Aside, on the subject of evil-maid attacks and their prevention:

An evil maid is allowed to hide cameras or microphones in the space around your computer, without tampering with it at all. This can even be done in advance, before you check into the hotel. Such attacks are probably easier and more general than preparing evil initrds that imitate the correct bootsplash for every common OS, and then guaranteeing enough time alone with victims’ computers to install them. There is no substitute for physical security.

For this reason, I don’t think schemes that provide resistance to evil-maid attacks should be chosen if they are less secure in other respects than less evil-maid-resistant schemes.


The systemd measurement logic actually already measures the boot “phase” into the same PCR the kernel is measured into. Thus you can bind secrets to a specific phase of the boot; for example, you can say that the root fs can only be unlocked via TPM while in the initrd, but not later. This should deliver exactly what you are asking for, already.