Building an encrypted, redundant network storage device
How to build your own RAID5 encrypted network file server
By Preston Hunt, 11 February 2006

Overview

About 4 years ago, I built a RAID5 file server from an aging ATX system I had sitting around, 4 hard drives, and the software RAID support in the Linux kernel, all running on Gentoo Linux. It's run flawlessly over the years, so much so that I'm also using it as my house's DHCP server, firewall, router, and mail server.

The capacity of my original system was 240GB of redundant storage. That was a lot of storage 4 years ago when I built it, but I've outgrown it. The system's flawless operation has convinced me of the reliability of the Linux RAID5 code, to the point that I now consider it superior to many low-cost hardware RAID solutions.

So, I decided to build a next-generation file server. This web page documents that project.

Project Goals

My ultimate goal is to have a 1TB encrypted, redundant, network-attached file server. I investigated purchasing a solution, but most of the off-the-shelf products (like the Buffalo TeraStation) cost around $1 per GB and did not offer file encryption. I decided that building my own on Linux would still be the cheapest and most flexible way to go.

Design Requirements

Hardware

First I started obtaining the necessary hardware:

Total spent: $525, for 900GB of redundant storage, or $0.58/GB. The cost is so low because I was able to re-use existing equipment; if I had had to purchase the motherboard, processor, memory, CD-ROM, and so on, the total would probably have been another $100-200 higher.

Operating System

Gentoo is my distribution of choice for servers and any system that touches the Internet. I used it on my last RAID5 server and it has worked perfectly, although keeping the system updated took a little more time than I wanted. This time around, I decided to give Ubuntu Linux a try.

This machine was intended for use behind a router/firewall, so building a hardened system was not a priority.

I did a standard Ubuntu 5.10 install and then added support for ssh, samba, vnc-server, rsync, and ivman.
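On Ubuntu these come in as packages. Something like the following should pull everything in, although the exact package names are from memory and may differ by release (ivman, for example, lives in the universe repository):

sudo apt-get install openssh-server samba vnc4server rsync ivman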

Configuring the RAID Array

Once Ubuntu is up and running, the next step is to use fdisk to create an equal-sized partition on each drive that will belong to the RAID array.
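For example, for the first drive (repeat for each member), the fdisk session looks like this; setting the partition type to "fd" (Linux raid autodetect) is what lets the kernel find the array at boot:

fdisk /dev/hdb
# inside fdisk: n - create a new primary partition spanning the disk
#               t - set the partition type to fd (Linux raid autodetect)
#               w - write the table and exit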

Next, create the raid array:

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hdb1 /dev/sda1 /dev/sdb1

The raid device is now up. Check that everything looks OK with "cat /proc/mdstat": look for a line that says "md0 : active raid5", followed by a list of the drives in the array. The next line should end with something like "[UUU]", with each "U" indicating an "up" drive (a "down" drive is indicated by "_"). As long as you have the same number of U's as drives, everything is working fine. Note that a freshly created RAID5 array spends some time on an initial sync first; its progress shows up on the same screen.
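The output will look something like this (the block count here is illustrative):

$ cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sdb1[2] sda1[1] hdb1[0]
      586114304 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]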

You can also run "mdadm --detail /dev/md0" and read the report.

Note that "mdadm --create" automatically starts the array. In the future, the kernel should detect that the drives have a persistent RAID superblock and automatically start the array. If this doesn't happen for some reason, you will need to add a command like "mdadm --assemble /dev/md0" somewhere in your startup scripts. If you want to change the array at all, you will need to stop it first with "mdadm --stop /dev/md0".
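To make assembly at boot more robust, you can also record the array in mdadm's configuration file so the init scripts can bring it up by name. On Ubuntu the file is /etc/mdadm/mdadm.conf (some distributions use /etc/mdadm.conf):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf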

Creating a Secret Key

Before we get to creating the encrypted filesystem, we first need a passphrase with at least 128 bits of entropy (this passphrase will be hashed into a 128-bit key later by cryptsetup). Either use Diceware or create a completely random password using /dev/urandom. You could also use English text (e.g., a favorite passage from a book), although you will need at least 75 characters.

For the /dev/urandom option, if we only use letters and numbers, we get ln(2*26+10)/ln(2) ≈ 5.95 bits of entropy per character. Thus, we need 128/5.95 ≈ 22 characters minimum for at least 128 bits of entropy. Unfortunately, there appears to be a bug in cryptsetup that requires at least 32 characters.

We don't want to store the passphrase used to encrypt the data anywhere on the system that holds the encrypted data. It's even dangerous to create the passphrase on the system and then later try to erase it. Better if the passphrase never exists on the system at all. So, insert a USB flash drive first and run these commands there:

cd /media/REMOVABLE
cat /dev/urandom | tr -cd '[:alnum:]' | head -c32 >passphrase

If you decide to use one of the other options, please follow the same precautions. The truly paranoid among us will run the command on a completely separate machine so that there is never any risk of recovering the information.

Creating the Encrypted Container

Using the passphrase from the last section, we now create the encrypted container on the RAID array:

cryptsetup create --key-file passphrase --key-size 128 md0_crypt /dev/md0

This command stores the encrypted data on /dev/md0, and makes the plaintext available on /dev/mapper/md0_crypt.
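You can sanity-check the mapping with cryptsetup's status report:

cryptsetup status md0_crypt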

Warning: cryptsetup is not the most friendly program when it comes to error messages. If a device is already mounted, you aren't running as root, etc., you will get ambiguous error messages like "Command failed: Invalid argument".

Format and Mount Filesystem

I've always used ext3 in the past, but have heard many good things about the speed and efficiency of Reiser (especially on small files), so decided to give it a try for this system. Unfortunately, I encountered some stability problems with my setup, so I switched to XFS instead. I also noticed that Reiser filesystems take a long time to mount, whereas XFS mounts much more quickly.

mkfs.xfs /dev/mapper/md0_crypt
mkdir /mnt/raid5crypt
mount /dev/mapper/md0_crypt /mnt/raid5crypt

At this point, the filesystem is mounted and ready for use. Feel free to write files just like on any other filesystem by using the /mnt/raid5crypt mount point.
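When you need to take the stack down cleanly (before maintenance, for example), reverse the steps: unmount the filesystem, remove the crypto mapping, and stop the array:

umount /mnt/raid5crypt
cryptsetup remove md0_crypt
mdadm --stop /dev/md0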

Auto-mounting the Encrypted Volume When the USB Storage Is Inserted

At this point, we have the USB drive with our passphrase on it, and we have the encrypted filesystem up and running. Everything works right now, but if the system is rebooted, we will need to provide the passphrase again so that the encrypted filesystem can be remounted. We could do that manually, but that would be cumbersome. What we really want is to stick in the USB drive after the system has booted and have the system automatically pull the necessary info off the flash drive and set up the encrypted container.

Ivman has already done all of the hard work for us -- it will detect when a new USB storage device has been inserted and run a script. All we need to do is add the following lines to /etc/ivman/IvmConfigProperties.xml:

<ivm:Match name="ivm.mountable" value="true">
  <ivm:Property name="hal.volume.is_mounted">
    <ivm:Action value="true" exec='/sbin/scan_flash "$hal.volume.mount_point$"' />
  </ivm:Property>
</ivm:Match>

Now we need to create the file /sbin/scan_flash, which will scan a mounted volume for any file ending in a certain extension, check if the file was encrypted and signed for the ivman user, and--if so--execute it.

#!/bin/bash
# Scan a freshly mounted volume for *.dat files; any file carrying a good
# signature for the ivman user is decrypted and piped to bash for execution.
FLASHDRIVEPATH="$1"
LOOKFOR='*.dat'
export GNUPGHOME=/home/ivman/.gnupg
find "$FLASHDRIVEPATH" -name "$LOOKFOR" | while read -r i; do
  echo "$i"
  DIRNAME=$(dirname "$i")   # directory holding the matched file
  gpg --verify-files "$i" &>/dev/null && gpg --quiet --decrypt "$i" | bash -s
done
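Don't forget to make the script executable:

chmod 755 /sbin/scan_flash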

GPG

TBD.
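In the meantime, here is a rough sketch of how the payload could be prepared; the file names (mount-raid.sh, mount-raid.dat) are illustrative, and the paths in the payload need to match wherever your flash drive gets mounted. First, as the ivman user, create a keypair (it will land in /home/ivman/.gnupg):

gpg --gen-key

Then put the commands that bring up the encrypted volume into a small script, for example:

cryptsetup create --key-file /media/REMOVABLE/passphrase --key-size 128 md0_crypt /dev/md0
mount /dev/mapper/md0_crypt /mnt/raid5crypt

Finally, sign and encrypt it for the ivman user and copy the result to the flash drive, where scan_flash will find it:

gpg --recipient ivman --sign --encrypt --output mount-raid.dat mount-raid.sh
cp mount-raid.dat /media/REMOVABLE/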

Securing the OS

Recovering a degraded array

If one of the drives goes down, the array becomes "degraded". Depending on the problem, the array may no longer start automatically at boot.

For this section, assume a RAID5 array of four members (sda, sdb, sdc, sdd), of which sdd has failed.

The following command will search all partitions on your system and determine which ones contain RAID set members:

mdadm -Ebsc partitions

To mount an array that is missing a drive, use the following command. Don't include the missing drive sdd.

mdadm -A --run /dev/md0 /dev/sda /dev/sdb /dev/sdc

After replacing the bad drive, you would then hot-add it to the active array:

mdadm /dev/md0 --add /dev/sdd

The array will now rebuild the missing drive, which will take some time.
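You can monitor the rebuild's progress as it runs:

watch cat /proc/mdstat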

Growing the array

One advantage of Linux software RAID is the ability to grow the size of each of the devices in the array. One at a time, you replace each drive in the old array with a new, bigger drive and let the array rebuild. After all the drives have been replaced, you use "mdadm --grow" and "xfs_growfs" to bring the new space online, as shown below.
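A sketch of the full sequence, assuming the same three-drive array as above (the fail/replace/add cycle is repeated for each member, and "mdadm --grow" plus "cryptsetup resize" require reasonably recent versions of those tools):

mdadm /dev/md0 --fail /dev/hdb1 --remove /dev/hdb1
# ...physically swap in the larger drive and partition it as before...
mdadm /dev/md0 --add /dev/hdb1
# wait for /proc/mdstat to show the rebuild is complete, then repeat
# for the remaining drives; once every member has been replaced:
mdadm --grow /dev/md0 --size=max
cryptsetup resize md0_crypt
xfs_growfs /mnt/raid5crypt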
