[Guide] Home NAS Upgrade – From Unraid to MDADM

A bit more than three years ago, I built and documented my custom Unraid NAS build. It involved rolling together spare parts, a rat’s nest of SATA cables and a large number of drives of various sizes into a magic cauldron, which provided one drive of parity at a fraction of the cost of an off-the-shelf system. For my network, and arguably for anybody’s, the storage infrastructure is pretty important – everything resides on there, from media collections, to system images, to backups and everything in-between. Being highly available, resilient and performant is crucial. Generally I was happy with this arrangement – I swapped drives in and out as I outgrew them, or as they failed. The oldest drive in the array had clocked up more than 35,000 hours on the odometer, still going strong with zero reallocated sectors. Keep those temps low.

But over the years, I discovered a few quirks about Unraid. The NFS shares were occasionally flaky, resulting in dropped connections or stale file handles. I resorted to CIFS connections wherever possible, but AD/LDAP integration was also flaky at best, by Lime Technology’s own admission, and on large media collections, shares would sometimes drop out as well. And of course, the well-documented Achilles’ heel of Unraid is its poor write speed. I experimented with an SSD cache drive, but it seemed like a clunky workaround, and I found better results using SSD space to store VMs directly.

I also found over time, through dozens of disk changes, that my setup had iterated to the point where I had 4 x 3TB drives (the best compromise between price/capacity/redundancy I found), and 1 x 2TB drive. A 120GB Samsung 830 SSD was also there, for primary VM duties. The hypervisor on the server booted off a USB stick, so no HDD space was required for it. As the server has very limited internal space, I utilized an external SATA enclosure (Vantec HX4) to house 4 drives. It works extremely well – one eSATA connection to the main board, and all drives are detected as if they were connected directly to it. It also looks vaguely futuristic, and the exhaust fan, which vents upwards, is adjustable if you’re sensitive to noise. Highly recommended.


So the issue of performance had to be addressed. There was simply no way around the inherent limitations of Unraid: without any striping, performance was always going to be a hurdle. There’s a healthy community around Unraid, and after all it’s an ambitious project by effectively one guy, but the plugin system and the custom scripts/workarounds needed for various simple tasks seemed counter-intuitive and cumbersome. I briefly investigated a btrfs setup, on both Ubuntu and CentOS, but ran into various stability issues, including some annoying kernel panics, though admittedly this was on kernel 3.13. I believe btrfs will eventually become mainstream, but it’s not yet ready for production. Its main rival, ZFS, is more mature on OpenSolaris variants, and there’s also the ZFS on Linux project. Both offer enterprise-level features (like copy-on-write, de-duplication and on-the-fly compression), with very flexible volume management tools.

But in the end, I settled on a setup I knew quite well: an mdadm array on Ubuntu Server. Mdadm has been around for a long time, and is extremely well polished and reliable. It also helped that my drives were nearly all the same size, so I could RAID them together directly without using LVM for sub-groups. The main challenge was migration: if I dropped in another 3TB drive, how could I move the data across? I didn’t have enough spare drives to copy everything off, or even to copy from one array to another, so I had to do it in parts. The nitty gritty: the current Unraid array used 6TB out of 11TB (4 x 3TB [1 as parity], 1 x 2TB). All drives were mounted using raw device mappings in ESXi 5.5U1 (vmkfstools -z), so it was easy to switch them between machines. The sequence of actions was as follows:

  • Consolidate the data within Unraid onto 3 drives (2 x 3TB / 1 x 2TB). Data is written sequentially, not striped, so this is easy. Unraid also offers direct disk shares.
  • Dismount the 3TB parity drive and 1 x empty 3TB drive. Minimize writes at this point, as data is vulnerable without the parity drive.
  • Purchase and install 1 x new 3TB drive in the machine.
  • Mount all 3 x empty 3TB drives as a new RAID5 array. The initial build for this step took around a day.
  • Data can already be copied on before the build is complete. Note that you can create a two-drive RAID5 array as well; it behaves just like a RAID1.
  • This new array has 6TB of usable space, with 3TB of parity information.
  • Transfer data on from Unraid array. For quicker transfers, the Unraid disks can be directly mounted on Ubuntu (install reiserfsprogs). This part took about 2 days @ 40MB/s. The new array is nearly entirely full.
  • Now, the entire remaining Unraid array can be shut down. I removed and decommissioned the 2TB drive.
  • Add the 2 x 3TB drives to the main array, growing it into a RAID6 and expanding storage. This reshape took around 3 days.
  • The final result is 5 x 3TB in RAID6, with 2 x 3TB for parity.
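The array side of those steps can be sketched with mdadm roughly as follows. Device names, mount points and the choice of ext4 are illustrative assumptions, not details from the original build:

```shell
# 1. Create the new 3-drive RAID5 array; it is usable while the initial sync runs.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array

# 2. Mount an old Unraid data disk read-only (Unraid data disks are ReiserFS)
#    and copy its contents across.
apt-get install reiserfsprogs
mount -o ro -t reiserfs /dev/sde1 /mnt/unraid-disk
rsync -a /mnt/unraid-disk/ /mnt/array/

# 3. Once the Unraid drives are freed up, add them and reshape RAID5 -> RAID6,
#    then grow the filesystem into the new space.
mdadm --add /dev/md0 /dev/sde /dev/sdf
mdadm --grow /dev/md0 --level=6 --raid-devices=5
resize2fs /dev/md0
```

Both the initial sync and the reshape run in the background; the array stays mounted and usable throughout, which is what made the staged migration possible.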

No data was lost in the process, but rebuild and reshape speeds are quite slow, often taking days to complete – and that’s with write-intent bitmaps turned on and various mdadm tuning options applied. Note that RAID 5/6 can also be quite CPU intensive, so it’s worthwhile to throw some additional CPU resources at it. The actual disk read and write performance is light years ahead of Unraid, with sequential writes ranging from 150-200MB/s (and 1500+ IOPS). Unraid would max out around 30-40MB/s on writes, on a good day, and would completely asphyxiate under more than 2-3 simultaneous requests if they hit the same disk. If you have RAM to spare, Linux will use it as a write cache (dirty page buffers), which can significantly increase speed and reduce the need for SSD-backed caching like dm-cache or bcache.
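For reference, these are the kinds of tuning knobs referred to above – the values are examples, not prescriptions, and `/dev/md0` is a placeholder:

```shell
# Write-intent bitmap: makes re-syncs after an unclean shutdown much faster.
mdadm --grow /dev/md0 --bitmap=internal

# A larger stripe cache can help RAID5/6 write throughput (in pages, per device).
echo 8192 > /sys/block/md0/md/stripe_cache_size

# Raise the floor and ceiling on rebuild/reshape speed (KB/s per device).
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000

# Allow more dirty page cache to accumulate before forced writeback (% of RAM).
sysctl -w vm.dirty_ratio=20
sysctl -w vm.dirty_background_ratio=10
```

The sysctl changes only persist until reboot; add them to /etc/sysctl.conf to make them permanent.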

Overall, I’m quite pleased with the result. Performance-wise, I’ve booted half a dozen VMs directly off the NFS share and it’s had no issues. It’s also more resilient. In RAID6, if one drive fails, an email warning is dispatched immediately (mdadm actually installs postfix alongside, but you can use ssmtp), and the array is still protected by the second parity drive – it can sustain two complete drive failures. This is an important distinction: that second parity drive buys you time to purchase and install a replacement while preserving redundancy. In Unraid, if two drives fail, a large chunk of data is irreversibly lost. But I would concede that a striped RAID5/6 setup is riskier than Unraid in one respect: striped data cannot easily be recovered, unlike Unraid’s directly written disks, which can be mounted on another machine. Just to be sure, I tested a sudden power loss to the array as well – it rebooted and remounted without issue, no rebuild was required, and an e2fsck didn’t pick up any errors. CIFS network shares didn’t even miss a beat.
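The email alerting amounts to a one-line config plus a test run, roughly like this (the address is a placeholder):

```shell
# /etc/mdadm/mdadm.conf needs a destination for alerts:
#   MAILADDR admin@example.com

# Send a test alert for each array to confirm mail delivery actually works.
mdadm --monitor --scan --test --oneshot

# Check array health at a glance.
cat /proc/mdstat
mdadm --detail /dev/md0
```

On Ubuntu, the mdadm package also ships an `mdmonitor` daemon that runs `mdadm --monitor` in the background, so degraded-array emails arrive without any cron jobs.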

As for expandability, this is mainly hardware bound: it comes down to the number of eSATA ports, which in turn relies on the number of PCI-E slots. Two more PCI-E slots, each breaking out to another HX4, could easily house another 8 x 3TB drives, or 24TB (16TB usable). That’s before you consider multiple eSATA outputs per PCI-E card, or 4TB+ drives, or enclosures which house more than 4 drives (though those are significantly more expensive). Mdadm will grow and expand more or less ad infinitum.

Goodbye Unraid, and thanks for the good times. I’d still recommend you (in some circumstances).







2 thoughts on “[Guide] Home NAS Upgrade – From Unraid to MDADM”

  1. The HX4 is rather cool looking! It sounds like you are working on VMware. I have been playing around with Hyper-V clustering at my house and have had a hard time finding good, cheap, shared storage for testing. A jury-rigged NAS could maybe just fit the bill :)

    1. Yes, correct – the host is running ESXi, with the 5 spinning disks passed through directly to the appropriate VM, which performs far better than creating virtual stores on the disks themselves. Many of the off-the-shelf enclosures I found were at least $200-$300 just for the case, without HDDs, so the HX4 was the best bang-for-buck option.
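    For anyone replicating the passthrough, a raw device mapping is created on the ESXi host roughly like this (the device identifier and datastore path are illustrative):

    ```shell
    # Create an RDM pointer file for a physical disk, so the guest VM
    # sees and controls the whole drive rather than a virtual disk.
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____ExampleDisk3TB \
        /vmfs/volumes/datastore1/nas/disk1-rdm.vmdk

    # The resulting .vmdk is then attached to the NAS VM as an existing disk.
    ```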
