Although SSD prices have been falling, for various reasons you might still want to use "those nasty platters of spinning rust" (Torvalds' words) for your system (for example, a 2TB SSD might double the cost of your notebook). However, thanks to LVM's dm_cache it is easy to add an encrypted SSD storage cache to your system, and you can do it on a live system (no need to boot from a live CD, etc.). It provides a significant performance improvement for random reads, which is helpful when multiple VMs are contending for disk access. This is especially helpful if you are using a notebook with a 5400 RPM 2.5 inch drive - just swap your DVD drive for an optical bay second drive caddy (such as
http://www.amazon.com/Protronix-Optical-Drive-Universal-12-7mm/dp/B004XIU4T2) to add an SSD cache.
The instructions below assume /dev/sdb is the SSD drive. Adjust according to your configuration. The instructions also assume you are using encrypted storage; some of the commands below would be omitted or modified if you are not using encryption.
1) Determine the size of the data and metadata portions for the SSD cache. Keep in mind that you do not have to allocate the entire SSD drive for caching. 'sudo fdisk -l /dev/sdb' can be used to determine the size of the drive (although many other commands work too). Then determine the number of megabytes of storage (bytes / 1024 / 1024), rounded down to the nearest multiple of 8MB. The (conservative) rule of thumb for the size of the metadata portion is 1/1000th of the data size, rounded up to the nearest multiple of 8MB. The size of the data portion is then TOTAL_SIZE - (2 x META_SIZE) - 8MB.
A concrete example:
- 'fdisk -l /dev/sdb' reports 120034123776 bytes: TOTAL_SIZE = 114473.46 MB -> 114472 MB (rounded down to nearest multiple of 8MB)
- META_SIZE = 114.472 MB -> 120 MB (rounded up to nearest multiple of 8MB)
- DATA_SIZE = 114472 MB - (2 x 120 MB) - 8MB = 114224M
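The arithmetic above can be sketched in shell. This is just an illustration using the example byte count from the fdisk output above (not a real device query), with the 8MB rounding rules from step 1:

```shell
# Example byte count from the fdisk output above (not a real device query).
BYTES=120034123776

# total MB, rounded DOWN to the nearest multiple of 8MB
TOTAL_MB=$(( BYTES / 1024 / 1024 / 8 * 8 ))

# metadata: ~1/1000th of the data size, rounded UP to a multiple of 8MB
META_MB=$(( (TOTAL_MB + 7999) / 8000 * 8 ))

# data: total minus two metadata copies minus 8MB of headroom
DATA_MB=$(( TOTAL_MB - 2 * META_MB - 8 ))

echo "META_SIZE=${META_MB}M DATA_SIZE=${DATA_MB}M"
# -> META_SIZE=120M DATA_SIZE=114224M
```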
2) Now you can run the following commands. They have to be run as root, so you might just do a 'sudo bash' first. NOTE: rather than packaging the commands below into a script (as provided in the original instructions), I suggest pasting them into a terminal one by one, so you can watch for anything unexpected.
Revise the first three lines based on what is discussed in step 1.
Once you perform the second 'lvconvert' command, you have to see things through to the end!
CACHE_DISK=/dev/sdb
META_SIZE=120M
DATA_SIZE=114224M
DISK_NAME=$(basename $CACHE_DISK)
CRYPT_VOLUME=${DISK_NAME}_crypt
CACHE_PV=/dev/mapper/${CRYPT_VOLUME}
cryptsetup luksFormat $CACHE_DISK
cryptsetup open --type luks $CACHE_DISK $CRYPT_VOLUME
pvcreate $CACHE_PV
vgextend qubes_dom0 $CACHE_PV
lvcreate -L $META_SIZE -n cachemeta qubes_dom0 $CACHE_PV
lvcreate -L $DATA_SIZE -n cachedata qubes_dom0 $CACHE_PV
lvconvert --type cache-pool --poolmetadata qubes_dom0/cachemeta --chunksize 64k --cachemode writeback qubes_dom0/cachedata --yes
lvconvert --type cache --cachepool qubes_dom0/cachedata qubes_dom0/root
DISK_UUID=$(ls -al /dev/disk/by-uuid/ | grep $DISK_NAME | awk '{ print $9 }')
echo "luks-${DISK_UUID} UUID=${DISK_UUID} none luks,discard" >> /etc/crypttab
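As a side note, the 'ls | grep | awk' pipeline above can be fragile if other device names happen to match; on a real system, 'blkid -s UUID -o value "$CACHE_DISK"' reads the LUKS container's UUID directly. Here is a sketch of the crypttab line the last command appends, using a made-up example UUID:

```shell
# Hypothetical example UUID for illustration only (on a real system,
# read it with: blkid -s UUID -o value "$CACHE_DISK").
DISK_UUID=7eb01e05-11de-819c-0033-399a8ccf9123

# the line appended to /etc/crypttab by the echo command above
CRYPTTAB_LINE="luks-${DISK_UUID} UUID=${DISK_UUID} none luks,discard"
echo "$CRYPTTAB_LINE"
```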
NOTE: setting the same passphrase for the SSD cache as for your primary storage will allow you to enter a single passphrase on boot; otherwise you will have to enter separate passphrases (which I do not believe works well with the graphical boot).
After you run the second 'lvconvert' command, the SSD cache will be active - with LVM there is no need to reboot, etc. Everything after that is to make sure you can boot up again with the cache.
The above commands store the cache within a LUKS container, so that data cached on the SSD is encrypted. Here is an example 'lsblk' output after the above commands have been run:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 931G 0 part
└─luks-7eb01e05-11de-819c-0033-399a8ccf9123 251:1 0 931G 0 crypt
├─qubes_dom0-swap 251:2 0 15.3G 0 lvm [SWAP]
└─qubes_dom0-root_corig 251:5 0 915.8G 0 lvm
└─qubes_dom0-root 251:6 0 915.8G 0 lvm /
sdb 8:16 0 111.8G 0 disk
└─sdb_crypt 251:0 0 111.8G 0 crypt
├─qubes_dom0-cachedata_cdata 251:3 0 111.6G 0 lvm
│ └─qubes_dom0-root 251:6 0 915.8G 0 lvm /
└─qubes_dom0-cachedata_cmeta 251:4 0 120M 0 lvm
└─qubes_dom0-root 251:6 0 915.8G 0 lvm /
3) Get the UUID for the new LUKS container (from the last line of /etc/crypttab, or 'echo $DISK_UUID'), and edit /etc/default/grub to add to GRUB_CMDLINE_LINUX:
rd.luks.uuid=${DISK_UUID} rd.luks.allow-discards=1
As a result, there will be two rd.luks.uuid items - this is correct!
NOTE: if you opted to use a different passphrase for the SSD cache than your primary storage, also remove 'rhgb' from GRUB_CMDLINE_LINUX so the text-based boot process prompts you for both passphrases.
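If you prefer to make the step 3 edit non-interactively, here is a hedged sed sketch. It operates on a throwaway copy named 'grub.example' with a made-up UUID - point it at /etc/default/grub and your real $DISK_UUID when doing this for real:

```shell
# Placeholders for illustration only:
DISK_UUID=7eb01e05-11de-819c-0033-399a8ccf9123   # use your real UUID
GRUB_FILE=grub.example                           # use /etc/default/grub for real

# throwaway sample file standing in for /etc/default/grub
cat > "$GRUB_FILE" <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="rd.luks.uuid=aaaa-bbbb rhgb quiet"
EOF

# insert the cache container's UUID (and allow-discards) before the closing quote
sed -i "s/^\(GRUB_CMDLINE_LINUX=\".*\)\"/\1 rd.luks.uuid=${DISK_UUID} rd.luks.allow-discards=1\"/" "$GRUB_FILE"

grep GRUB_CMDLINE_LINUX "$GRUB_FILE"
```

After this, the file contains both rd.luks.uuid entries on the GRUB_CMDLINE_LINUX line, as described above.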
4) Create a new .conf file in /etc/dracut.conf.d (for example, ssd-caching-drivers.conf) with the following contents:
# make sure all of the drivers for dm_cache are present for SSD caching:
add_drivers+=" dm-cache dm-cache-mq dm-persistent-data "
5) Run 'dracut -f' to regenerate the initramfs so that the drivers needed for the SSD cache are present when booting.
6) Run 'grub2-mkconfig -o /boot/grub2/grub.cfg' to regenerate the GRUB config so that the boot process unlocks the new LUKS container.
Now it is safe to reboot. If something goes weird when entering the passphrase(s) in the graphical boot process, just press the Escape key to drop to a text-based boot.
Best,
Eric