The Perfect Storage Solution with LVM, dm-crypt, and XFS
I present several recipes I used to construct an encrypted XFS filesystem on top of LVM.
By Preston Hunt, 19 June 2007
My motivation in using LVM was to be able to add and remove hard drives from a storage pool without losing the file system stored on the drives.
My first incarnation used Ubuntu 6.06 LTS with LVM, dm-crypt, and ext3. I successfully formatted a file system, added and removed drives, grew the file system, shrank the file system, etc., all while preserving the existing data. With all data encrypted too. So far so good.
But I ran into a few issues with ext3. First, every time you want to grow the file system, you have to do a painful e2fsck error check (which takes forever) and then the actual resize process (via resize2fs) takes a long time as well. Second, I ran into a weird situation where resize2fs wouldn't expand the filesystem any more even though I had expanded the logical volume it resided on.
To address these issues, for my second incarnation of The Perfect Storage Solution[TM], I switched to XFS for the file system. I have been using XFS on other systems for years and find it to be a well-designed, mature product. The xfs_growfs command appears to be more robust and usable than the ext3 equivalent -- and it's certainly much faster.
I had to piece together a lot of disparate information from the web to make this happen. To ease the way for others interested in doing the same thing, I now present my recipes for building an LVM-based, encrypted XFS file system on Ubuntu:
I'll present this recipe in four parts: first, I'll start with a subset of a single drive; then, I'll expand to use the entire drive; next, I'll add a second drive; finally, I'll remove a drive from the pool. For each recipe, I'll list all the commands first and then explain each step below. All examples assume a stock Ubuntu build, which should already have LVM, dm-crypt, and XFS support built in. Execute all commands as root/sudo. If the tools aren't already installed:
apt-get install lvm2 cryptsetup
Note: This recipe has been updated to use gdisk instead of fdisk since gdisk works better with large drives.
Part 1: Build an LVM partition with half the space of a single drive
01# gdisk /dev/sda
02# pvcreate /dev/sda1
03# pvdisplay
04# vgcreate vg /dev/sda1
05# vgdisplay
06# lvcreate -l 60000 -n pool vg
07# cryptsetup luksFormat /dev/mapper/vg-pool
08# cryptsetup luksOpen /dev/mapper/vg-pool pool
09# cryptsetup resize pool
10# mkfs.xfs /dev/mapper/pool
11# mount /dev/mapper/pool /mnt/pool
12# df /mnt/pool
1. Take note of the hard drives you want to put in the array. In this example, I'll use /dev/sda. LVM can use the hard drives as block devices, but I prefer to create a partition. By doing this, if the drive ever gets separated from the system, I'll have some clue as to what is stored on the drive. Create a single large partition of type 0x8e (Linux LVM).
2. pvcreate initializes the hard drive for use in LVM.
3. pvdisplay lists all the physical volumes, including the newly added sda1 volume. Make note of the line "Total PE"; for my system, it was 119234.
4. vgcreate creates a new volume group called "vg" using sda1 as the first drive member.
5. vgdisplay lists all the volume groups, including the newly added vg volume group.
6. lvcreate creates a new logical volume named "pool" using about half of the available space from volume group vg (recall the "Total PE" number from step 3). This step creates a special block device called /dev/mapper/vg-pool that we can treat just as a normal hard disk partition even though it lives on top of LVM.
7. "cryptsetup luksFormat" initializes the logical volume for use as an encrypted LUKS container. You'll be prompted for a passphrase. If you lose the passphrase, your data will be gone forever! Be careful... For more information on LUKS, please see the LUKS project page.
8. "cryptsetup luksOpen" opens the encrypted volume. Enter the passphrase you used above. This step creates a new special block device called /dev/mapper/pool that we can treat as a normal hard disk even though it actually sits on top of the encrypted LUKS container.
9. Due to a quirk with the way xfs works, we now need to make sure the LUKS volume takes up the entire logical volume partition. The "cryptsetup resize" command does this. Note that this step must be performed on an open LUKS container, so we do it after step 8.
10. mkfs.xfs formats the encrypted container for use as an XFS file system.
11. This mounts the XFS volume for use by the operating system. You may need to create the /mnt/pool directory if it doesn't already exist. For file systems over 2TB, the "inode64" mount option may be useful.
12. We can now use the file system on /mnt/pool like any regular file system. If you run df, you'll see the size of the partition.
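The extent count for step 6 comes from the "Total PE" figure noted in step 3. As a quick sketch of the arithmetic (using the 119234 extents reported on my system; your figure will differ):

```shell
#!/bin/sh
# "Total PE" as reported by pvdisplay in step 3 (example value from this article).
TOTAL_PE=119234
# Allocate roughly half the extents for the initial logical volume,
# leaving the remainder free for the expansion in Part 2.
HALF_PE=$(( TOTAL_PE / 2 ))
echo "lvcreate -l $HALF_PE -n pool vg"
```

In the recipe I rounded this up to 60000; any extent count at or below the pool's free extents will work.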
Part 2: Expand the LVM array to take the rest of the space from a single drive
Continuing on from part 1...
13# lsof /mnt/pool
14# umount /mnt/pool
15# cryptsetup luksClose pool
16# vgdisplay
17# lvdisplay
18# lvextend -l +59234 /dev/vg/pool
19# lvdisplay
20# cryptsetup luksOpen /dev/mapper/vg-pool pool
21# cryptsetup resize pool
22# mount /dev/mapper/pool /mnt/pool
23# xfs_growfs /mnt/pool
24# df /mnt/pool
13. Use lsof to make sure that no processes are using files on the mounted file system.
14. Unmount the filesystem.
15. "cryptsetup luksClose" closes the encrypted container and removes the special block device /dev/mapper/pool.
16. Note the "Free PE / Size" line. This is how many free extents are available for adding to our volume. In my case, the value was 59234.
17. Note the "LV Size" for the pool logical volume. We'll compare it again after the next step to double check that the LV did indeed get expanded.
18. Extend the specified logical volume by the indicated number of extents. Here we use the value from step 16.
19. If we compare the output of this step with step 17, we should see that the size of the logical volume is bigger.
20. Re-open the encrypted container.
21. Resize the encrypted container to fill the newly added space.
22. Mount the XFS partition.
23. Expand the XFS file system. Unlike resize2fs, xfs_growfs is very quick.
24. Re-run "df /mnt/pool" to verify that the partition is indeed bigger. We can now continue to use /mnt/pool as before, but with more space. Yay!
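Rather than reading the "Free PE / Size" figure in step 16 by eye, you can pull it out of the vgdisplay output with awk. A sketch, run here against a captured sample line (the exact column layout of vgdisplay output is an assumption, so verify it against your version of LVM2):

```shell
#!/bin/sh
# A captured "Free PE / Size" line from vgdisplay (values from this article;
# 59234 extents at the default 4 MB extent size is about 231 GB).
SAMPLE='  Free  PE / Size       59234 / 231.38 GB'
# When the line is split on whitespace, the extent count is the fifth field.
FREE_PE=$(echo "$SAMPLE" | awk '/Free/ {print $5}')
echo "lvextend -l +$FREE_PE /dev/vg/pool"
```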
Part 3: Adding another drive to the LVM array
Continuing from part 2...
Before adding the drive, it is a very good idea to do a burn-in period since if a drive is going to fail, there is a good chance it will fail right away. Install the drive and test it with a SMART long test (smartctl -t long /dev/sdb). Run "badblocks -c 2048 -sw /dev/sdx" to run a destructive 4-pass test on the drive. Then leave the drive in your system for at least a week (preferably a month), running badblocks every day. This will stress it out a little bit and hopefully expose any early defects. At the end of the period, repeat the SMART long test. If everything is still working, proceed with step 25.
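The burn-in procedure above can be sketched as a script. This version only prints the commands it would run, since badblocks -w destroys all data on the target drive; treat it as a plan to adapt, not a turnkey tool (the drive name and seven-day duration are assumptions for illustration):

```shell
#!/bin/sh
# Dry-run sketch of the burn-in sequence: SMART long test, a week of
# daily destructive badblocks passes, then a second SMART long test.
DRIVE=/dev/sdb
echo "smartctl -t long $DRIVE"
for day in 1 2 3 4 5 6 7; do
  echo "day $day: badblocks -c 2048 -sw $DRIVE"
done
echo "smartctl -t long $DRIVE"
```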
25# lsof /mnt/pool                              # optional
26# umount /mnt/pool                            # optional
27# cryptsetup luksClose pool                   # optional
28# gdisk /dev/sdb
29# pvcreate /dev/sdb1
30# pvdisplay
31# vgextend vg /dev/sdb1
32# vgdisplay
33# lvextend -l +119234 /dev/vg/pool
34# lvdisplay
35# cryptsetup luksOpen /dev/mapper/vg-pool pool  # optional
36# cryptsetup resize pool
37# mount /dev/mapper/pool /mnt/pool            # optional
38# xfs_growfs /mnt/pool
39# df /mnt/pool
25-27. These are the same steps as in part 2 for unmounting an active volume.
28-30. These are the same steps as from part 1 for creating a new physical volume.
31. vgextend adds the new physical volume to the volume group.
32. Verify the new volume group size with vgdisplay. Make note of the "Free PE" (for me, it was 119234).
33. lvextend adds all of the space from the new drive to the "pool" logical volume.
34. Verify the new size with lvdisplay.
35-39. These are the same steps as in part 2 for expanding the size of the encrypted container and growing the XFS partition.
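The recipes above open and mount everything by hand. If you want the pool available at boot, entries along these lines in /etc/crypttab and /etc/fstab should work; the device and mount names match the examples above, but the exact option syntax is an assumption, so check your distribution's crypttab and fstab man pages:

```
# /etc/crypttab -- open the LUKS container at boot (passphrase prompted)
pool  /dev/mapper/vg-pool  none  luks

# /etc/fstab -- mount the opened container
/dev/mapper/pool  /mnt/pool  xfs  defaults,inode64  0  0
```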
Part 4: Removing a drive from the LVM array
For this to work, first perform steps 25 through 32 from part 3 to add a replacement drive. Don't expand the logical volume or file system (steps 33 through 39), since the blocks need to be moved off of the old drive first.
For example, suppose you want to remove sda:
33a# pvmove -v /dev/sda1 /dev/sdb1
34a# vgreduce vg /dev/sda1
35a# smartctl --all /dev/sda
Note that in step 33a, the pvmove will take a very long time (probably overnight for a big drive).
Once step 34a has completed, you can remove the drive from the system. Running step 35a is helpful for finding the serial number and manufacturer information, which identifies the physical drive to pull.
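For step 35a, the serial number can be extracted from the smartctl output directly rather than scanned by eye. A sketch against a sample header line (the serial number here is made up, and the exact "Serial Number:" label formatting is an assumption about smartctl's output):

```shell
#!/bin/sh
# A sample line from "smartctl --all" output (hypothetical serial number).
LINE='Serial Number:    WD-WCANU1234567'
# Split on the colon plus trailing spaces to isolate the serial itself.
echo "$LINE" | awk -F': *' '{print $2}'
```

Matching the printed serial against the label on the physical drive tells you which one to pull.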