[Guide] Budget Home Server Upgrade (Pt. 4): NAS Storage Upgrades

This article will outline the latest upgrade in my home NAS setup, including replacing the external 4-Bay SATA storage enclosures with a standalone DIY SAS enclosure. To read the previous chapter, click here. UPDATE: Additional components listed at bottom.

The Setup:

My budget NAS setup has been running solidly for years now – a Dell T5500 dual-Xeon server/tower, with additional SATA PCI-E controllers providing eSATA ports, going out to a pair of 4-bay SATA enclosures. Together, 8x 3TB drives ran in my MD RAID 6 setup (18TB usable space), with regular off-site backups and versioning handled by rsync and Syncthing. So far so good. But after recently upgrading to NBN 100Mbit and upgrading a number of TV shows to UHD, my space was again running low – it was time for an overhaul. More space and performance were needed.
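
As a quick aside, the 18TB figure is simply RAID 6 arithmetic – two drives' worth of parity comes off the top. A trivial sketch of the sum:

    # RAID 6 usable capacity: two drives' worth of parity comes off the top.
    def raid6_usable_tb(drives: int, size_tb: float) -> float:
        return (drives - 2) * size_tb

    print(raid6_usable_tb(8, 3))   # 8x 3TB -> 18 TB usable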

The Problem:

SPEED – First and foremost, running a 4-bay enclosure off a single eSATA port is a large bottleneck. The link is a theoretical 375MB/s, but in use it barely yielded 40MB/s per drive in either direction, as all four drives have to share the one connection. The low-quality port multipliers and controllers in use (generally budget off-the-shelf JBOD enclosures) didn't help. For small reads and writes this wasn't an issue, but an array reshape or rebuild (such as when a drive had to be replaced or added) was taking about 4-5 DAYS to complete, even with bitmaps enabled. Simply stacking on more enclosures was my original plan, and by far the cheapest option, but each additional enclosure would only make the bottleneck worse. Something much faster was needed.
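
To put that bottleneck in numbers (the 375MB/s and 40MB/s figures are from my own setup, the split is just the obvious division):

    # How a shared eSATA link divides up across a 4-bay port-multiplier enclosure.
    ESATA_LINK_MBS = 375      # theoretical link rate
    DRIVES = 4
    OBSERVED_PER_DRIVE = 40   # what I actually measured per drive

    print(f"Ideal share per drive: {ESATA_LINK_MBS / DRIVES:.0f} MB/s")  # ~94 MB/s
    print(f"Observed per drive:    {OBSERVED_PER_DRIVE} MB/s")           # multiplier overhead hurts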

SPACE – The enclosures hold 4 drives each, and the T5500 can hold 3 internally, possibly up to 5 if you use 5.25″ => 3.5″ brackets in the ‘Flex’ bays, though airflow is a bit iffy at the top. Ideally I wanted to keep all drives together in one box – not in the server – well-cooled and easy to hot-swap.

COMPLEXITY – I briefly contemplated setting up a second independent machine, built out of old parts just to run the MD array, and sharing it over the network to the Dell server, which does all the heavy lifting – running KVM with a dozen VMs, transcoding, downloading, configuration management and so on. The MD array machine would only need minimal processing power, maybe a Pentium (the current Kaby Lake generation has hyper-threading, for 4 threads total) and a simple motherboard, but this option would require its own boot drive, management, troubleshooting and so on. More importantly, the gigabit network to the VMs would be a limitation, especially for transcoding and heavy transfers. The server also has 48GB of mostly unused ECC RAM which Ubuntu uses for write-caching to great effect.

The Solution:

It turns out these problems had long been solved, in the form of SAS (Serial Attached SCSI), the successor to the SCSI of old. SAS allows a large number of storage devices to be attached to a machine through the use of Host Adapters (HA), expanders and daisy-chaining. Standard desktop SATA drives can be attached to SAS controllers, but not vice-versa, which suits most home users. SAS also brings other benefits from the enterprise world – better diagnostics, tuning and reliability. SAS drives themselves, however, as good as they are, are often prohibitively expensive, so using SAS adapters and cabling combined with cheaper consumer SATA drives is the best compromise – RAID provides the redundancy in the event of drive failure.

The idea was to get a SAS card to run up to 24-32 drives. These drives could live in one or more entirely separate enclosures, known as DAS (Direct Attached Storage) – as long as they had power and a data connection, the enclosure didn’t even need a CPU or motherboard inside, just a SAS expander and some cables to supply enough ports. At a later date I can add 1 or 2 more SAS cards for more ports/enclosures, and the whole thing costs a fraction of the price of an off-the-shelf NAS.

The Ingredients:

1 x IBM M1115 (LSI 9223-8i) SAS Host Adapter – Approx $80 AUD

The M1015/M1115 are more or less identical: PCI-E x8 cards with two SFF-8087 connectors, each carrying four 6Gbit/s channels, for a total of eight drives connected directly, and they can be joined to expanders for a maximum of 24 drives. The PCI-E 2.0 connection supports up to 4GB/s, which is sufficient to run 24 spinning disks at close to maximum speed (a rare event), but naturally SSDs will require far more bandwidth, so adjust to suit. There are plenty of LSI and similar SAS options on the market, but for simplicity I chose the venerable M1015/M1115 as they can easily be flashed to allow pass-through.
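
A rough sanity check of that bandwidth claim (the ~170MB/s sequential figure for a modern 3.5″ spinner is my assumption, not a measured number):

    # Can a PCIe 2.0 x8 HBA keep 24 spinning disks fed at full sequential speed?
    PCIE2_X8_GBS = 4.0        # ~4 GB/s on PCIe 2.0 x8
    SPINNER_MBS = 170         # assumed sequential throughput of a modern 3.5" HDD
    DRIVES = 24

    aggregate_gbs = DRIVES * SPINNER_MBS / 1000
    print(f"{DRIVES} spinners flat out: ~{aggregate_gbs:.1f} GB/s vs {PCIE2_X8_GBS} GB/s available")
    # Close to the limit, but all 24 disks rarely stream at once; SSDs are another story.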

NOTE: The card must first be flashed to IT mode (Initiator Target mode), which simply passes the drives straight through to the host system, letting software RAID like MD or ZFS work best. I followed the excellent instructions here to convert the device to an LSI 9211-8i. Another good resource here. Note that there are newer versions of the BIOS and firmware as of Apr 2016, but the instructions remain the same. The updated files can be found here (the 9211 and 9210 downloads are interchangeable).

1 x ATX Case – Approx $50 AUD

I chose a Fractal Design Midi R2 (second hand) as it has a generous 8 bays with sliding, dampened trays, plenty of fan placement and an in-built fan controller. Other possible case options include the Silverstone DS380B (8 bay) and the Fractal Node 804 (12 bay), or many people opt for rack-mount solutions if you have space for a server rack – it’s all personal preference. Heat is the biggest issue in the enclosure, so two 120mm intake fans and one 120mm exhaust, with the built-in fan controller set to medium/high, handle cooling. The middle five trays can be rotated 90 degrees for more airflow, but I kept them facing out to make hot-swapping easier if a drive died. Two 5.25″ bays at the top can fit more drives with brackets. There is so much empty room without a CPU/motherboard that you could conceivably mount more than 10 drives in here if you wanted to rig something up.

1 x ATX PSU – Approx $50 AUD

Nothing fancy is required – low power is fine (like the Seasonic 450W pictured above), and reliable and efficient (Gold rating or above) is preferred. Each drive draws anywhere from 7W at idle (WD Red) to maybe 16-18W for higher-performance drives, so even with 10 drives running flat out (~180W) and allowing for inefficiencies, we’re still well under the capacity of even the weakest power supply. Since there’s no motherboard to trigger the PSU to turn on, a simple $5 ATX switch connects to the 24-pin ATX output and lets you manually switch on the power supply, powering all the drives in the enclosure.
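
A quick sketch of that power budget, using the per-drive figures above (the worst case assumes every drive is busy at once):

    # Worst-case DC load on the PSU for a drive-only enclosure.
    IDLE_W, ACTIVE_W = 7, 18     # per-drive: idle WD Red vs a busier high-performance drive
    DRIVES = 10
    PSU_RATED_W = 450

    print(f"All idle:   {DRIVES * IDLE_W} W")
    print(f"All active: {DRIVES * ACTIVE_W} W of a {PSU_RATED_W} W supply")
    # Draw at the wall will be a little higher than the DC load due to PSU efficiency,
    # but either way there's a huge amount of headroom.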

Cabling / Brackets:

2 x SFF-8087 to SFF-8088 ATX brackets – Approx $30 AUD for both

These allow the HA cabling to run outside the case by converting the internal SFF-8087 connection to external SFF-8088, which has far better shielding and is better suited to longer runs. If you want to save a few bucks, you can just run the internal cable out through a gap, directly to the drives in the other enclosure, without converting. But for ease of re-cabling later, I opted to do it properly.

2 x SFF-8088 external cables – Approx $25 AUD for both

These are the external cables that run to the storage enclosure; mine are Tyco branded, but any will do. You can use either one or two, depending on your bandwidth requirements. Don’t break the pull tabs.

4 x SFF-8087 internal cables – Approx $35 AUD for all

Two for the host side, two for the enclosure side (to go to the expander).

2 x 5.25 => 3.5″ brackets – Approx $10 AUD

These have been around for decades; they convert a 5.25″ bay to take a 3.5″ or 2.5″ drive. Use up every last slot possible – with these two bays, that brings the case’s capacity to 10 drives.

1 x Intel RES2SV240 6-Port SAS Expander – Approx $100 AUD

The expander multiplies the number of ports available – each of its six SFF-8087 ports can act as an input (back to the HA) or an output (out to drives). In this case I joined two inputs (leaving four outputs = 16 drives), but many people run with just a single input (leaving five outputs = 20 drives). This expander (and some of the Chenbro ones) is one of the few that does NOT require a PCI-E slot for power, as it can be powered off a single 4-pin Molex – ideal for an enclosure.
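
The port arithmetic behind those numbers (each SFF-8087 port on the expander carries four lanes, i.e. four drives once broken out to SATA):

    # RES2SV240 port arithmetic: 6 ports, each = 4 lanes = 4 SATA drives when broken out.
    TOTAL_PORTS, LANES_PER_PORT = 6, 4

    for uplinks in (1, 2):
        drive_ports = TOTAL_PORTS - uplinks
        print(f"{uplinks} uplink(s) to the HA -> {drive_ports * LANES_PER_PORT} drives")
    # 1 uplink  -> 20 drives
    # 2 uplinks -> 16 drives (what I run, for more bandwidth back to the card)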

4 x SFF-8087 to 4x SATA cables – Approx $30 AUD for all

The final leg of the cabling, breaking out to the drives themselves. I ordered extras for future expansion within the server itself, as I will outline later.


Power Consumption:

A big consideration for anyone significantly expanding their storage is whether to move to larger drives. If a large number of slots becomes available, this is a prime opportunity to upgrade drive sizes, which has benefits for long-term data density and power consumption. The average WD Red drive consumes around 7-15W depending on load, so I ran some calculations based on a 9W average.

I compared the five main drive sizes available at the moment, along with the number of drives needed to reach the storage goal, total power consumption and associated running costs, calculated at $0.25/kWh. The result is quite clear – if you’re building from scratch, it’s a no-brainer to pick a larger drive size; taking into consideration the parity required for RAID, the savings could amount to hundreds of dollars a year in power for a very large array. The GB-per-dollar varies between drive sizes, but there is no clear winner for bang-for-buck at this time. The main differences are in brand and model – for 24/7 operation I would recommend either WD Red or Seagate IronWolf drives. I have a few WD Red 3TB drives in my array that are still going strong after well over four years. The Reds also have a slower rotational speed and lower power consumption, ideal for RAID arrays where performance is distributed across drives.
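
For anyone wanting to rerun these numbers for their own target capacity, the calculation is straightforward – here’s a quick sketch (the 24TB target is purely an example, and drive prices are left out):

    # Annual power cost to hit a target usable capacity with different drive sizes.
    # Assumes RAID 6 (two parity drives) and the ~9W average used above.
    import math

    TARGET_TB = 24                 # example usable target, not the article's figure
    AVG_WATTS = 9
    COST_PER_KWH = 0.25            # AUD
    HOURS_PER_YEAR = 24 * 365

    for size_tb in (3, 4, 6, 8, 10):
        data_drives = math.ceil(TARGET_TB / size_tb)
        total_drives = data_drives + 2                    # add RAID 6 parity
        kwh_per_year = total_drives * AVG_WATTS * HOURS_PER_YEAR / 1000
        print(f"{size_tb:>2}TB drives: {total_drives:>2} drives, "
              f"~${kwh_per_year * COST_PER_KWH:.0f}/year in power")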

The final consideration: since my entire array was made up of 3TB drives, switching to 8TB or 10TB drives would incur a large upfront cost (about $1.5-2K). How long would it take before this was recouped through power savings? The graph shows how long it would take, at current HDD and electricity prices, for the switch to a larger drive size to pay for itself. The answer is somewhere between 8 months and 3.5 years, depending on the number of drives in the array. For my needs (<30TB array), it makes more sense to just add more 3TB drives as required. Also note that HDD prices (3TB and upwards) have not dropped significantly over the past few years.
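
The payback sum behind the graph is just the upfront spend divided by the yearly power saving. A rough sketch – the $500 and 8-spindle figures below are purely illustrative; plug in whatever the change actually costs you (full drive price if buying outright, or the difference against the drives you’d otherwise buy):

    # Years until running fewer spindles pays back the cost of switching drive sizes.
    def payback_years(upfront_cost: float, drives_removed: int,
                      avg_watts: float = 9, cost_per_kwh: float = 0.25) -> float:
        annual_saving = drives_removed * avg_watts * 24 * 365 / 1000 * cost_per_kwh
        return upfront_cost / annual_saving

    # Illustrative only: a change that removes 8 spindles for a net $500 outlay.
    print(f"{payback_years(500, 8):.1f} years to break even")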

The Build:

The build was fairly straightforward – the SAS expander mounts nicely, and the biggest hurdle was finding enough SATA power plugs from the power supply to power all the drives. I am using some 4-pin Molex to SATA converters, which are highly discouraged as they can short-circuit and sometimes cause fires, but I have some additional SATA modular power cables on the way. If you do use these Molex adapters, aim for the crimped type (you can see the joints), not the injection-moulded type – there’s a good explanation video on the difference.

I initially powered it up without any drives connected so I could flash the HA to newer firmware and the correct mode, as outlined in the instructional links earlier. A short while later, I plugged in all the drives and they were successfully detected by the controller. The host OS (Ubuntu Server) simply sees the drives as normal, and the previous software RAID array was assembled again. Note that it’s a good idea to have some airflow over the controller, as these cards run quite hot and are designed for high-airflow servers.
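
If you want a quick programmatic sanity check that everything showed up after cabling, something along these lines works on a Linux host (a minimal sketch – it just counts what the kernel sees, nothing controller-specific):

    # Count the disks the kernel currently sees (sd* covers SATA/SAS drives behind the HA).
    from pathlib import Path

    disks = sorted(p.name for p in Path("/sys/block").iterdir()
                   if p.name.startswith("sd"))
    print(f"{len(disks)} disks detected: {', '.join(disks)}")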


I only ran two benchmarks before and after; the performance improvement on both is significant. However, I found reshaping ran at a maximum of about 50MB/s (up from 25MB/s) at 100% IO – there is only so much that spinning disks can do with simultaneous reads and writes. The orange bars show the new SAS setup.
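
To keep an eye on how a reshape is progressing, the kernel reports its position and speed in /proc/mdstat – a trivial way to pull out just those lines:

    # Print the progress/speed lines from /proc/mdstat during a resync or reshape.
    with open("/proc/mdstat") as f:
        for line in f:
            if "speed=" in line:
                print(line.strip())
    # e.g. "[==>......]  reshape = ...% (.../...) finish=...min speed=...K/sec"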

Total Cost Comparison:

The total cost of the upgrade came to just over $400 AUD, since I ordered everything at the same time. If you spend a bit more time searching, it’s possible to get all the parts for half the price listed. What you get is the ability to comfortably run up to 20 drives in an enclosure (5 ports used on an expander) at very fast speeds, with expandability for up to 3 more enclosures (through two SAS HBAs) as required, for a total of 80 drives. All these drives are still connected to the main server, so they benefit from its write-caching and processing power, such as for on-the-fly file encryption. If you set something similar up, your total space in one enclosure would range between 30TB (3TB x 10 drives) and 100TB (10TB x 10 drives). The cost of adding subsequent enclosures drops dramatically, since you already have the HA and cabling. There are also standalone DAS units with SFF-8088 connectors available, which might be an easier option for some.
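
The expandability maths, for anyone planning a similar build (assuming a single uplink per enclosure; the capacity figures are raw, before RAID parity):

    # Drives and raw space this topology scales to.
    DRIVES_PER_ENCLOSURE = 20     # expander with one uplink, 5 breakout ports x 4 drives
    ENCLOSURES = 4                # the current enclosure plus up to three more

    print(f"Up to {DRIVES_PER_ENCLOSURE * ENCLOSURES} drives in total")
    for size_tb in (3, 10):
        print(f"One 10-drive enclosure of {size_tb}TB drives: {size_tb * 10} TB raw")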

For comparison with off-the-shelf NAS options, a Synology DS1817 8-bay NAS (no drives) costs around $1300 AUD, a Drobo 8-bay storage array (no drives) costs $2490, and at the high end, a LaCie 48TB external enclosure (with drives) runs about $6900. The Synology and similar NAS options are self-contained, usually with underpowered CPUs and small caches. A SAS setup like this one compares quite favourably in every way, built mostly from second-hand eBay parts and allowing for a large amount of future expandability. Best of all, it’s incredibly satisfying. If you have any questions or comments, feel free to leave them below.


UPDATE:

To address future expandability, I also ordered a few more inexpensive components just in case:

  • Generic 5-bay cage (Approx $30 AUD)
  • 5-port SATA power splitter (Approx $5)

These additional 5 bays will bring total drive capacity in the case up to 15. The cage has a multitude of mounting holes and can be attached to existing or new holes in the backplate, or anywhere else in the case; if required, an additional drive could be screwed onto the top of the cage as well. With 15 or 16 drives, this would consume 4 ports on the SAS expander, leaving 2 ports to lead back to the HA. Combined with the maximum of 5 drives the T5500 can hold, this gives a maximum of 57TB of usable space – significantly more than double what I currently have. That should be plenty for the foreseeable future; all it requires is keeping an eye out for second-hand or cheap 3TB sales to add drives as required.
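
As a final sanity check on that 57TB figure (RAID 6, so two drives’ worth of parity comes off the top):

    # Maximum usable space with the extra cage fitted: 16 in the enclosure + 5 in the T5500.
    enclosure_drives, server_drives = 16, 5
    drive_tb, parity_drives = 3, 2          # 3TB drives in a RAID 6 array

    usable_tb = (enclosure_drives + server_drives - parity_drives) * drive_tb
    print(f"{usable_tb} TB usable")         # 57 TB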
