The best way to configure Ubuntu for an encrypted root filesystem is to tell the installer to do it when you’re installing the operating system. It’s complicated, and smart people have put a lot of effort into making the installer know how to do it properly.
Having said that, if you’re an experienced Linux sysadmin who is comfortable with low-level troubleshooting when things go wrong, the instructions here will guide you through the process of adding encryption to your root filesystem after the fact.
If you don’t fall into that category, then stop here. This really isn’t the kind of thing that should be attempted by inexperienced users. If you aren’t knowledgeable and experienced enough to get yourself out of any hole you dig yourself into following these instructions, then you shouldn’t be trying it. The author of these instructions offers no guarantees that they will work and bears no responsibility if you lose your data or brick your system.
Post a comment below if this turns out to be useful!
Go back and read the previous two paragraphs. Really, I’m not kidding, if you’re not an experienced sysadmin, you shouldn’t be attempting this. Don’t say I didn’t warn you.
These instructions have been tested on Ubuntu 17.04, though they will probably work with some other Ubuntu versions.
These instructions have been tested on machines with and without a separate /boot partition. For machines without a separate /boot partition, creating one while setting up encryption is recommended, and how to do that is described below.
These instructions have been tested with an MBR disk. They may or may not work with GPT partition tables. If you are successful using them with GPT, feel free to submit feedback to the author about what changes, if any, were necessary, so that he can update these instructions to benefit others.
Similarly, these instructions have been tested on BIOS systems. They may or may not work with UEFI or secure boot. Again, please let the author know if you have knowledge to share about this.
The instructions below should work both on systems where the root filesystem is in an LVM volume, and systems where it’s in a raw filesystem device. Steps in the instructions intended for the former are tagged [LVM], vs. [Non-LVM] for the latter. The LVM instructions only work when there’s only one physical disk partition in the volume group holding the root filesystem. If your root volume group has more than one physical disk partition in it, you’re on your own.
Here are the basic steps in this process:
- Make sure your existing (unencrypted) root filesystem has everything it needs.
- Boot from a rescue CD.
- Back up your root filesystem onto a different device.
- If you don’t have a separate /boot filesystem, split your root filesystem up into separate root and /boot filesystems.
- Create the encrypted filesystem.
- Restore your data onto the encrypted filesystem.
- Set up the /boot filesystem, if one was created above.
- Reconfigure the system to know how to boot from the new filesystem(s).
- Reboot and hope everything works.
- Clean up.
As noted above, you’re going to need to back up all the data from your root filesystem onto a separate device, because the process of creating the encrypted filesystem destroys all of the data on it. This separate device can be a separate data partition on the system, or an external hard drive, or a thumb drive, or even a network drive. If you’ve gotten this far after all the warnings above, then you should be able to figure this out. The instructions below refer to this separate device as your storage filesystem.
You need to know the filesystem device that your current, unencrypted root filesystem is on, i.e., what `df` prints in the first column. These instructions use `$ROOT_DEV` as shorthand to refer to this device. You also need to know the boot disk that `grub` is installed onto, so that you can reinstall grub.
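If you’re not sure which device that is, here is a minimal sketch of pulling it out of the `df` output; the `awk` one-liner is just one way to grab the first column of the relevant line, and what it prints depends entirely on your system:

```shell
# Grab the first column of the `df /` output line for "/".
# On a typical install this is something like /dev/sda1, or a
# /dev/mapper path if the root filesystem is on LVM.
ROOT_DEV=$(df / | awk 'NR==2 {print $1}')
echo "$ROOT_DEV"
```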
Before attempting this, run a regular backup of the system using whatever you usually use for backups (you do run regular backups that are stored somewhere other than on the system itself, right?), and test the backup to ensure that it is valid and files can be restored from it. The author uses and recommends CrashPlan.
In case this wasn’t clear before, you need to be comfortable with troubleshooting and fixing problems at a very low level. If you don’t feel comfortable doing advanced troubleshooting when things go wrong, stop here.
You should read over all the instructions below and make sure you understand what they all mean before you change anything on your system.
[LVM] If you don’t already have a separate `/boot` partition and you’re using LVM, then you’re going to have to destroy and rebuild your LVM volume group, since you need to create a separate, non-LVM `/boot` partition in part of the space that the volume group is currently taking up. Therefore, if there are any logical volumes in your root volume group other than your root and swap partitions, you’re going to need to back them up somewhere safe either before you start this process or at the same time as backing up the root partition as described below. The steps for backing up, recreating, and restoring these other filesystems are not documented below, so you’re on your own. Proceed cautiously.
- Make sure `cryptsetup` is installed (`sudo apt-get install -y cryptsetup`).
- Download SystemRescueCd and burn it onto a CD, DVD, or bootable USB thumb drive (you can create a bootable thumb drive on Ubuntu using Startup Disk Creator).
- Shut down your computer and reboot from the rescue CD.
- Run `fsck $ROOT_DEV` to ensure that your root filesystem is intact.
- Check and mount your storage filesystem on `/mnt/storage`.
- Back up your root filesystem with `fsarchiver -v savefs /mnt/storage/root.fsa $ROOT_DEV`. Note: If your system has multiple cores, then you can add `-j #`, where `#` is the number of cores, to make the backup run faster.
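For example, you can let `nproc` supply the core count instead of hard-coding it. In this sketch, `ROOT_DEV` is a made-up placeholder and the `echo` just prints the command you would run as root from the rescue environment:

```shell
# Show the fsarchiver backup command with one compression job per
# CPU core. /dev/sda1 is a hypothetical device; substitute your
# real $ROOT_DEV before running anything for real.
ROOT_DEV=/dev/sda1
echo "fsarchiver -v -j $(nproc) savefs /mnt/storage/root.fsa $ROOT_DEV"
```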
- Everything up until now was non-destructive. YOU ARE ABOUT TO DESTROY YOUR ROOT FILESYSTEM. Pause here and think seriously about whether you wish to proceed. Are you confident that your storage filesystem (i.e., the thing that `/mnt/storage` is on) is reliable? Did the `fsarchiver` command complete successfully? Have you backed up your important data off-system?
- [Non-LVM] If you don’t already have a separate `/boot` filesystem, it’s time to create one:
- Run `fdisk disk`, where disk is the disk containing the `$ROOT_DEV` partition. For example, if your old root device is `/dev/sda1`, then you would run `fdisk /dev/sda`. If you don’t know what to do here, see above regarding only experienced sysadmins attempting this process.
- In `fdisk`, remove your old root partition and replace it with a 250MB Linux partition with the boot flag set and a second Linux, non-boot partition filling the rest of the available space. For example, you might do something like this, though these commands are not meant to be copied verbatim; they’re just examples, so make sure you understand what you’re doing here:

    # Delete old partition
    d
    # New primary partition, 250MB, default type (Linux)
    n, p, 1, Enter, +250M
    # Set boot flag on new partition
    a, 1
    # New primary partition, rest of available
    # space, default type (Linux)
    n, p, 2, Enter, Enter
    # Print the new partition table and make
    # note of the two devices you created.
    p
    # Write changes to disk and quit
    w
- `$ROOT_DEV` below is the path of the big partition you created above with `fdisk`, and `$BOOT_DEV` is the path of the partition you created for `/boot`.
- [LVM] If you don’t already have a separate `/boot` filesystem, it’s time to create one:
- Run `lvs` and make note of the volume group name that your root (and maybe swap) partitions live in, and of the names of the two partitions.
- Run `pvs` and make note of the disk partition that is in the volume group.
- Follow the non-LVM steps above exactly to replace the disk partition you made note of in the `pvs` output with two partitions, one 250MB bootable partition and a second partition taking up the remaining space that was in the old partition.
- Run `rm -rf /dev/VG-name` (necessary because the system rescue CD doesn’t clean up symlinks in `/dev` properly when you delete a volume group).
- Run `cryptsetup luksFormat $ROOT_DEV`. Enter “YES” when it asks. Enter a passphrase and remember it!
- Run `cryptsetup luksOpen $ROOT_DEV rootencdev` (here and later, “rootencdev” means that literal string; it’s not a variable).
- Run `cryptsetup resize rootencdev`.
- [Non-LVM] For the remainder of these instructions, `$ENC_ROOT_DEV` refers to `/dev/mapper/rootencdev`.
- [LVM] Time to recreate the volume group and logical volumes on top of the encrypted device:
- Run `vgcreate VG-name /dev/mapper/rootencdev`
- Run `lvcreate --name name-of-old-swap-LV -L size-of-old-swap-LV VG-name`
- Run `lvcreate --name name-of-old-root-partition -l 100%FREE VG-name`
- If you did all the naming properly, then the device paths for your root and swap partitions are unchanged from when they were unencrypted. For the remainder of these instructions, `$ENC_ROOT_DEV` is the full `/dev/mapper` path to the root logical volume.
- Restore your data onto the encrypted filesystem: `fsarchiver -v restfs /mnt/storage/root.fsa id=0,dest=$ENC_ROOT_DEV`
- Run `mkdir /mnt/root && mount $ENC_ROOT_DEV /mnt/root`
- Edit `/mnt/root/etc/crypttab` and add the line `rootencdev $ROOT_DEV none luks` to it. If `$ROOT_DEV` is on a solid-state drive, then change `luks` to `luks,discard`.
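To avoid typos, you can generate that line with `printf`. This sketch writes it to a scratch file rather than the real `/mnt/root/etc/crypttab`, and the device path is a made-up example:

```shell
# Build the crypttab entry described above. /dev/sda2 is a
# hypothetical device; substitute your real $ROOT_DEV.
ROOT_DEV=/dev/sda2
printf 'rootencdev %s none luks\n' "$ROOT_DEV" > /tmp/crypttab.example
cat /tmp/crypttab.example
```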
- Edit `/mnt/root/etc/fstab`. Find the line for the root filesystem and change the first field to `$ENC_ROOT_DEV` if it isn’t already.
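If you prefer to script that edit, a `sed` substitution works. This sketch operates on a scratch copy of `fstab` with made-up device names, not your real file:

```shell
# Rewrite the first field of the root entry in a scratch fstab copy.
# Both device paths here are hypothetical examples.
cat > /tmp/fstab.example <<'EOF'
/dev/sda1 / ext4 errors=remount-ro 0 1
EOF
sed -i 's|^/dev/sda1 |/dev/mapper/rootencdev |' /tmp/fstab.example
cat /tmp/fstab.example
```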
- In addition, if you created a `/boot` filesystem above, then add a line for it, above the line for the root filesystem: `$BOOT_DEV /boot ext4 defaults 0 0`.
- [LVM] In addition, if you recreated an LVM swap partition above and it’s listed in `fstab`, then make sure the first field is correct: either it needs to be the correct `/dev/mapper/whatever` path for the swap volume, or you can run `blkid` on the swap volume and then put `UUID=` followed by the UUID that `blkid` printed as the first field for the swap partition in `fstab`.
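For instance, given `blkid` output like the line below, you could extract the UUID and build the first `fstab` field like this; the device name and UUID are made up for illustration:

```shell
# Hypothetical blkid output for the swap volume:
line='/dev/mapper/vg-swap: UUID="0f3acd11-22bb-44cc-88dd-5e6f7a8b9c0d" TYPE="swap"'
# Pull out the quoted UUID value:
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
# First field for the swap entry in fstab:
echo "UUID=$uuid"
```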
- Run `for dir in /sys /proc /dev; do mount --bind $dir /mnt/root$dir; done`
- If you created a new `/boot` partition before, it’s time to set it up:
- Run `mkfs.ext4 $BOOT_DEV` ([LVM] It may warn you that the partition contains an LVM2_member file system, because there’s an old LVM signature there from before we repartitioned; that’s fine, as long as you confirm that you’re using the right partition name that isn’t actually part of the recreated volume group!).
- Run `mv -T /mnt/root/boot /mnt/root/boot.orig` (if this fails, it’s because there’s already something in `/mnt/root/boot.orig` for some reason, so you’ll have to remove it if it’s not needed or pick a different directory name to use for this).
- Run `mkdir /mnt/root/boot && mount $BOOT_DEV /mnt/root/boot`
- Run `rsync -av /mnt/root/boot.orig/ /mnt/root/boot/` (note that the trailing slashes on the paths in this command are significant, as you should know if you’re the experienced sysadmin you claimed you were!).
- Run `ls /mnt/root/boot` to confirm that it contains the files you expect, e.g., you didn’t screw up and leave one of the trailing slashes off in the `rsync` command above. Leave `/mnt/root/boot.orig` around for the time being; you can clean it up later after you’re sure everything is working.
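If you want more assurance than eyeballing `ls`, `diff -r` can compare the two trees file by file. This sketch demonstrates the idea on temporary directories rather than the real `/mnt/root/boot` paths:

```shell
# Compare an original directory against its copy; diff -r exits
# non-zero (and prints the differences) if anything doesn't match.
mkdir -p /tmp/boot.orig /tmp/boot
echo test-kernel > /tmp/boot.orig/vmlinuz-example
cp -a /tmp/boot.orig/. /tmp/boot/
diff -r /tmp/boot.orig /tmp/boot && echo "copy verified"
```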
- Run `chroot /mnt/root /bin/bash`
- If you didn’t create and mount the `/boot` partition above because it already existed, then run `mount /boot` inside the chroot.
- Run `update-initramfs -u -k all`
- Run `grub-install boot-disk`, where boot-disk is the disk that grub boots from, probably the same device you specified to `fdisk` above.
- Run `update-grub` ([LVM] warnings about “Failed to connect to lvmetad. Falling back to device scanning.” are fine).
- Exit from the chroot.
- Run `shutdown -P now`
- Unplug the thumb drive, or eject the CD from the drive as the system is booting up, and boot it up from the hard drive.
Don’t forget to post a comment below if this turns out to be useful!
Don’t forget to remove `root.fsa` from your storage filesystem and `/boot.orig` from your root partition when you’re confident that everything is working and they’re no longer needed.
If something goes wrong
If the system doesn’t boot up properly after the steps above, then you can boot from the rescue CD again and then do this to get your root filesystem back so you can troubleshoot further and fix the problem:
- `cryptsetup luksOpen $ROOT_DEV rootencdev`
- `fsck /dev/mapper/vgname-rootvolname`
- `mkdir /mnt/root && mount /dev/mapper/vgname-rootvolname /mnt/root`
To chroot into your root filesystem, you would additionally do:
- `for dir in /sys /proc /dev; do mount --bind $dir /mnt/root$dir; done`
- `chroot /mnt/root /bin/bash`
- `fsck $BOOT_DEV`
- `mount /boot`
256MB wasn’t enough for my /boot partition. Take note of the size ahead of time and give yourself plenty of headroom; otherwise you have to re-run fdisk after restoring your .fsa file, which makes this take much longer.
Since I’m on EFI, I also had to mount /boot/efi and run grub-install with special args:
`grub-install --target=x86_64-efi boot-disk`
because grub complained about refusing to use blocklists or something.
There were some disconcerting error messages but everything rebooted properly.
Now running with encrypted disk. Woo!