Introduction

Sitting in the corner of the office is a Thecus N5200, dusty, filled to capacity, and more than five years old. It was one of the first five-disk consumer NASes and very quick for its time, as Tim’s review put it eloquently in late 2006:

My bottom line is that if you're looking for the fastest NAS for file serving and backup and don't mind a funky user interface, sparse documentation, and immature firmware then you might take a chance on the 5200…

Thecus has succeeded in raising the feature bar for "prosumer" NASes to include five bays, RAID 6 and 10 and speeds that make good use of a gigabit LAN connection. But its reach has exceeded its grasp in producing an all-around polished product that anyone could feel comfortable buying.

When it was first unboxed and loaded up with five shiny new drives, a RAID 5 NAS on a home network was an uncommon sight. Flash forward to now: the dusty Thecus is slow, it stores little more than a single 2 TB drive, and, with the rise of home theater automation, a NAS in someone's home is no longer unusual. In other words, it is ripe for replacement.

Home media requirements have also started to exceed what most NASes can provide. Storage guru Tom Coughlin of Coughlin Associates predicts that by 2015 you’ll have a terabyte in your pocket and a petabyte at home. And there are now folks with multiple NASes at home. My own home network has well over 11 TB and growing, much of it unprotected by either redundancy or backups. Do you have the means to back up that much data?

To solve this, to quote Roy Scheider in Jaws, “We are going to need a bigger boat.”

A Fibre Channel SAN/DAS/NAS

The plan is to first build a disk array RAID NAS, then configure it as a SAN node and use fibre channel to connect it directly to a DAS server, which will attach the newly available storage to the network. A terabyte version of Tinkers to Evers to Chance – a perfect double play.

Figure 1: Block diagram of the Fibre Channel SAN/DAS/NAS

So what is the difference between a SAN and a NAS and what are the advantages of a SAN over a NAS? The primary difference is that in a SAN, the disks in the RAID array are shared at a block level, but with NAS they are shared at the filesystem level.

A SAN takes SCSI commands off the wire and talks directly to the disk. In contrast, a NAS utilizes file level protocols such as CIFS and NFS. This means that with a SAN, filesystem maintenance is offloaded to the client. In addition, SANs generally uses a wire protocol designed for disk manipulation, whereas NAS uses TCP/IP, a general purpose protocol.

Comparing Ethernet to its fibre equivalent, i.e. 10GbE to 10 Gbps Fibre Channel, significantly more payload gets through with fibre because there is no inherent TCP/IP overhead. Combine that with the bandwidth of a fiber link and you have one lean, mean transport layer.

Additionally, if you are running your NAS in a strictly Windows environment, you have the added overhead of translating the SMB protocol (via Samba) to the native filesystem, typically EXT3/4, XFS, etc. On a SAN, no such translation takes place because the client talks directly to the RAID controller's block storage. Less overhead, better performance.

The downside is that while TCP/IP hardware is ubiquitous and cheap, fibre channel is expensive, requiring special hardware and software: Fibre Channel host bus adapters (HBAs), fiber switches and cables, plus software such as a SANmelody management suite. SAN infrastructure can run into the tens of thousands of dollars; a single new 10 Gbps Fibre Channel HBA can be as much as $1500, and the software is often twice that.

That is unless you are willing to go open source, buy used equipment, and work with outdated interfaces. If you are, you can get the necessary hardware and software for a song.

Right now there is a glut of used fibre channel hardware on the market. You can pick up a 2 Gbps fibre channel HBA for $10 on eBay, or a 4 Gbps model for $50. The catch is that it is PCI-X hardware. PCI-X was an extended version of PCI for servers, providing a 64-bit interface commonly running at up to 133 MHz.

The half-duplex PCI-X bus has been superseded by the faster full-duplex PCIe bus. This, plus the economy, and the move away from fibre, means that what was an exclusively enterprise technology is now available for your home network at reasonable prices; one-tenth its original cost.

For software, we are going with Openfiler, which dates back to 2004 and is based on rPath Linux, a Red Hat derivative. The compelling feature of Openfiler is its support for SAN duty and fibre channel Host Bus Adapters (HBAs). Openfiler is also a complete NAS solution.

The Plan

We are going to take a low risk approach to achieve our goals, which are:

1) Build a Fibre Connect SAN for less than $1K
On the cheap means one thousand dollars, excluding disks and embellishments. Think of the build as sort of a wedding: something old, something new, something borrowed, and something black. We’ll buy older generation used PCI-X server hardware, including motherboard, CPUs, memory and interface cards, mostly from eBay. The new components will be the power supply and our disks. The borrowed pieces come from previous builds: the system drive, fans, and cables. Black, of course, is the hot swap system chassis.

2) Exceed the capacity of any consumer grade NAS
The largest consumer grade NAS is currently the eight-bay QNAP TS-809 Pro, which will support 3 TB drives. In RAID 5, that is about 18-19 TB of usable space. We are shooting for a maximum RAID 5 capacity of 35 TB.

3) Beat the Charts
Our goal is to exceed the current SNB performance chart leader, both as fibre channel and as a logical NAS, as measured by Intel’s NAS Performance Toolkit for RAID 5.

Low risk means putting the pieces together, ensuring they work, and then bringing up a NAS array. Once we have an operational RAID array, we’ll look at NAS performance. We'll then add the fiber HBAs and configure the array as a SAN. Even if we can’t get fibre connect to light, we’ll still have a respectable NAS.

Part of the fun in these builds is in the naming. We’re calling our SAN/NAS array Old Shuck, after the wickedly fast ghostly demon dog of English and Irish legend (with eyes in the 6500 angstrom range). We did toy with calling our build Balto, after the famous Alaskan hero sled dog, but that was not nearly as cool.

The Components

Building this from pretty much scratch, we need to buy most major components. For each component I’ll provide our criteria, our selection and the price we ended up with. Price was always a foremost criterion throughout and hitting our $1K target does require some compromises. We have budgeted $700 for the NAS array portion of this project.

Motherboard

The motherboard was probably our most important buy, as it determined what other components, and the specs of those components, would be needed. We wanted as much functionality built into the motherboard as possible, particularly networking and SATA support; the more that is integrated, the less we’d have to buy or worry about for compatibility. The baseline requirements were:

  • Three PCI-X slots, preferably all running at 133 MHz: one slot for the fiber card and two slots for RAID controllers. Very high capacity RAID controllers (those supporting more than 12 drives) are often more than twice as expensive as those supporting fewer; generally we can get two 12 port controllers for less than a single 24 port controller.
  • PCI-e slots for future expansion. Good PCI-X hardware will not be around forever, and though unlikely, we might find cheap PCIe controller cards.
  • For CPU, the latest possible generation 64-bit Xeon processor, preferably two processors. Ideally, Paxville (first dual core Xeons) or beyond.
  • Memory, easy: as much as possible, running as fast as possible. At least 8 GB.
  • Network, if possible integrated Intel Gigabit support, dual would be nice.
  • SATA, preferably support for three drives, two system drives and a DVD drive.
  • Budget: $200

What we got:  SuperMicro X6DH8-G2 Dual Xeon S604 Motherboard, with:

  • Three PCI-X slots, two at 100 MHz and one at 133 MHz
  • Three PCIe slots, two x8 and one x4
  • Nocona and Irwindale 64-bit Xeon processor support
  • Up to 16 GB of DDR2 400 MHz memory
  • Dual integrated Gigabit LAN, Intel chipset
  • Two integrated SATA ports

Price: Total $170 from Ebay (offer accepted), Motherboard, Processors & Memory

We got very lucky here: the motherboard hit most of our requirements and came with two 3.8 GHz Xeon Irwindale processors already installed. Additionally, the same helpful folks had 8 x 1 GB of DDR2 400 MHz memory for $90.

What we learned is that you should ask eBay sellers of this sort of equipment what else they have that fits your shopping list. Since they are often asset liquidators, they’ll have the parts you are looking for, with less worry about compatibility. And you can save on shipping.

The RAID Controller

A high capacity RAID controller was key to performance. The criteria were:

  • Support for at least nine drives, to exceed the eight-bay consumer NAS contender. This may require more than one RAID controller.
  • At least SATA II, 3.0Gbps
  • Support for RAID 5, or even better RAID 6.
  • Support for at least 2TB drives, and not be finicky about drive criteria so we can buy inexpensive 2TB drives
  • Have at least 128 MB cache, preferably upgradable
  • Industrial grade, preferably Areca, though 3Ware/LSI or Adaptec were acceptable. It needs Linux support and, because the checksum format is proprietary, should be easy to find a replacement for if the controller goes belly up.
  • PCI-X 133 MHz or PCIe Bus Support
  • If Multilane (group cables), come with the needed cables
  • Budget: $150

What we got:  3WARE 9550SX-12 PCI-X SATA II RAID Controller Card, with

  • Support for 12 SATA II drives
  • RAID 5 support
  • 256 MB cache
  • 133 MHz PCI-X
  • Single ports, no cables
  • Support for up to just under 20 TB in RAID 5 (11 x 1.8 TB) with inexpensive 2 TB drives

Price: Total $101 from Ebay (winning bid)

This was a nail biter. Attempts at getting two 8 port Areca cards exceeded our budget, so we settled on the 3ware card and were quite pleased with the price.

The 3Ware 9550SX(U) controllers appear to be everywhere at a reasonable price, and one should be obtainable for around $125 (avoid the earlier, much more limited, 9500S cards). Here is a nice review, including benchmarking, of this generation of SATA RAID cards.

Drive Array Chassis

The case was a bit of a search, with an interesting reveal. Our criteria:

  • Support for at least 16 drives
  • Good aesthetics, not cheap or ugly
  • Handles an EATX motherboard
  • Budget: $180

The initial supposition was that to hit the budget, Ebay was the only option. Looking for a used chassis, we encountered multiple folks selling Arrowmax cases for less than $75. Checking with the source, Arrowmax.com, we found two cold swap cases, a 4U server case that handled twenty drives, and a beer fridge-like case that handled 24 drives. Both met our criteria and were dramatically under our budget, the larger case being $80.

We were all set to push the button on the 4U case when we encountered one of those weird internet tidal eddies, an entire subculture (mostly based over on AVS Forums for HTPC builders) arguing the ins and outs of builds using the Norco 4020 hot swap SATA case.

Take a look at The Norco rackmountable RPC-4020: a pictorial odyssey. On another site, I swear I found a guy who had carved a full size teak replica of the 4020 and another who had his encrusted with jewels – a religious fervor surrounds this case.

Figure 2: Norco RPC-4020 case

Well, that was the one for us, how could a case have better mojo? It is inexpensive by hot swap case standards and supports twenty drives with a built-in backplane and drive caddies. Bonus: a community more than willing to help and opine about various optimizations.

What we got: Norco RPC-4020 4U Rackmount Server Hot Swap SATA Chassis, with:

  • Support for 20 SATA drives, hot swappable
  • Clean Design, acceptable appearance
  • Ability to handle numerous Motherboard form factors
  • Drive caddies and activity lamps for each drive
  • Low cost, high utility

Price: Total $270 from NewEgg (Retail)

I recommend this case. If you read the various write-ups, its true virtues are its design and price. Every corner has been shaved to make it affordable: the card slots are punch-outs, the rear fans sound like the revving engines of a passenger jet, and the Molex connectors are not solid. But for what you get, it is a great deal. Because less expensive options exist, the $90 budget overage is considered an embellishment.

Power Supply

We went with a 750 W single-rail, modular power supply with an exceptional seven year warranty, one that could handle the system's power needs plus those of 20 drives. This was the Corsair HX750, which happened to be on sale. Less expensive PSUs at the same power rating are available.

Price: Total $120 from NewEgg (with Rebate)

Hard Drives

Though not included in our target cost of $1,000, we need nine drives both for performance testing and to reach our goal of exceeding the number of drives supported by the largest consumer NAS. We went with the least expensive Hitachi 2 TB drives we could find, across a couple of vendors. They get high ratings and have a three year warranty. There were WD drives for a few dollars less, but slower and with only a one year warranty: a penny-wise, pound-foolish alternative. Warning: WD Green drives have reported RAID card compatibility issues.

What we got:

6 x 2TB HITACHI Deskstar 5K3000 32MB Cache 6.0Gb/s and

3 x 2TB HITACHI Deskstar 7K3000 64MB Cache 6.0Gb/s

Price: Total $670 from Various sources

You may ask why SATA III drives, why not SATA II 3.0 Gbps drives? They were the cheapest. Given the age of the 3Ware card, we also couldn’t verify our exact models against the last published compatibility list from 2009, although Hitachi and this family of drives were on it.

Incidentals

When buying new pristine components, you’ll find all the parts you need already included in the sealed boxes. But this is not the case when buying used. Luckily, most of the same parts pile up after several builds, or you end up with perfectly good parts that can be reused. You will probably still need to buy a few bits and bobs to pull everything together. Here is a table of other needed parts:

Slim SATA DVD Drive + Cable New, Media Drive $33, NewEgg
A-Data 32Gb SATA SSD Reused, System Drive $0
20 SATA Data Cables New, Wiring Backplane, Data $24, Local Store
Norco Molex 7-to-1 Cable New, Wiring Backplane, Power $8, NewEgg
Molex to MB P4 Reused, Supermicro additional connector $0
IO Shield New, Supermicro MB specific $8, EBay
Table 1: Odds and Ends

Price: Total $73 from Various sources

Total Cost NAS RAID Array

Let’s add up our purchases and see how much the NAS portion of our project cost. Are we still under our goal of less than $1000?

Component Paid
Motherboard, CPUs, Memory $170
RAID Controller $101
NAS/SAN Array Chassis $270
PSU $120
9 x 2TB HDD $670
Incidentals $73
Subtotal, Expended $1404
Adjustments, not against goal $760
Total $644

The total of $644 is an excellent price for a 14 TB+ RAID 5 high performance NAS. But I’m getting ahead of myself.

Putting it All Together

Figure 3: Old Shuck with refreshment

Now that we have our components, we only have three steps to convert a bunch of brown boxes into a full-fledged NAS:

  • Assemble: Build out the server, putting the hardware pieces together
  • Install: Set up the RAID controller and install Openfiler
  • Configure: Define our volumes, set up the network and shares

Assembly

The cobbling part of this went really smoothly, with only a couple of unpredictable gotchas.

Following the advice over in AVSForum for the Norco 4020, the fan brace was removed first, the motherboard installed and power supply screwed into place. While wiring the power supply to the motherboard, we ran into our first gotcha.

Supermicro server boards of this generation require the 24-pin ATX connector, an 8-pin P8 connector, and a 4-pin ATX12v P4 Connector. Our new Corsair power supply, which came in a weird black velveteen bag, could only handle two of the three—there is no modular connector specifically for the P4 jack. So we had to scramble to find an obscure Molex to P4 adaptor. Advice to Corsair: Drop the bag and add a P4 connector (Want to keep the bag? Drop the two Molex-to-floppy connectors.)

Figure 4: Rear view of drive bay

There is no documentation for the Norco chassis, which would have helped explain the extra power connectors on the SATA backplane. You only need to fill five; the rest are redundant.

Figure 5: SATA backplane ready for wiring

The other gotcha was the CMOS battery, which I discovered was limping on its last legs and required replacement. This is a common problem if a motherboard has been sitting fallow for a number of months, like many swapped out servers. With a dead CMOS battery, the system wouldn’t even blink the power LED.

Additionally, care should be taken to exit the BIOS before powering down. It is tempting to do a power-up test of your just arrived parts, even though you don’t have everything you need to do the build. But powering off while in the BIOS may require a CMOS reset.

Figure 6: Inside view with Supermicro motherboard mounted

To avoid this, take my advice, wait for all the components to arrive before embarking on the build. I’m sure there is some suitable coital metaphor for the anxiety that ensues when you are left waiting for that last critical thing to come together. Just put the boxes aside and listen to some Blues.

Fair warning about the SATA cables: buy simple flat drive cables. Odd angles, catches, and head sizes often are not compatible with connector-dense RAID cards. When connecting your cables, threading them through the fan brace to your RAID card before mounting the card or the brace eases things along.

Figure 7: Inside view of full assembly

You are going to want to put the RAID card in slot #1, which runs at 100 MHz, leaving slot #3 for the fiber card. If you put the card in the faster slot, you are probably going to see a blast from the past: an IRQ conflict error. Disabling the onboard SCSI will resolve this, and is probably a good idea anyway, since it is an extra driver layer we won’t be using.

On the case, care should be taken with the Molex connectors; they are a bit flimsy. When connecting them, treat them gingerly and maybe tie them down. I had a near meltdown when a loose connection took out the fan brace.

The drive caddies are not the tension type, and putting in the four eyeglass screws needed to mount each drive is a little time consuming.

Figure 8: Old Shuck with nine drives

Important! Even if you have a fiber card already, do not install it yet. It will cause conflicts at this point.

RAID Installation

First we are going to configure our RAID-5 array, and then install Openfiler on our system disk. The install process is long; it took about eight hours, largely waiting and more waiting. Have that music handy.

With Openfiler, you can go with software RAID, which requires processor muscle, especially with potentially twenty drives. The catch here is that as far as I know, there are no 20 port SATA motherboards. So you’d have to buy several SATA HBAs. Given the price of our 3Ware board, you’d end up paying more for less performance.
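For reference, Openfiler’s software RAID option is standard Linux md under the hood, so a nine-drive RAID 5 build done by hand would look roughly like the following. This is a sketch only; the /dev/sd[c-k] device names are examples and will differ on your system.

# Create a nine-drive RAID 5 md array from plain SATA/HBA-attached disks
mdadm --create /dev/md0 --level=5 --raid-devices=9 /dev/sd[c-k]
# Watch the initial build/resync progress
cat /proc/mdstat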

Stick with a quality used, processor-based RAID card (unlike High Point or Promise, which are more like RAID hardware assist cards) and you’ll get a highly reliable and performant solution. The downside is that you are tightly coupling your data to a hardware vendor. While RAID is standard, the checksum mechanism is proprietary to each vendor, such as 3Ware in our case.

The setup of the 3Ware card held no gotchas and required just a few decisions, documented in Figure 9. I created a single unit of all nine drives as RAID 5. The biggest decision is the stripe size, and I went with the largest, 256K. I also selected write caching. Even without battery backup, power is not an issue here and my use patterns mean critical data loss is very unlikely. The Hitachi drives support NCQ, so drive queuing is enabled. We are going for performance, so I used the Performance profile, natch.

Figure 9: 3Ware RAID array configuration

There are long running discussions and mysteries associated with the selection of stripe size and its impact on performance. There is a complex calculus around drive speed, number of drives, average file size, bus speed, usage patterns and the like. Too much for me - I used the simplistic rule of thumb, “Big files, big stripes; little files, little stripes.” I plan on using the array for large media files. This stripe size decision does impact our performance, as you'll see later.

Once you’ve configured the array, you’ll be asked to initialize the unit. Initialization can take place in the foreground or the background, and the first time through I recommend doing it front and center. This allows you to see if there are any issues with your drives. But it does require patience - it took about six hours to initialize all the drives, and waiting for 0% to tick over to 1% is a slow drag. If you lose patience, you can cancel and let initialization finish in the background, but it will take even longer there.

Have your Openfiler Version 2.3 install CD (2.99 isn’t quite ready for primetime), downloaded from SourceForge, sitting in the server’s DVD drive so that once the RAID initialization is complete, it will boot.

Given the age of the 3ware card, and LSI’s acquisition of the company, docs can take a while to find. There are several useful PDF files available, linked below for your convenience.

Openfiler Install

Openfiler is based on rPath Linux, a Red Hat derivative, and is a breeze to install. Boot from CD. You have the choice of Graphical Installation or Text Installation. Installation directions are at Openfiler.com.

The simplest is the Graphical install (Figure 10) and you’ll be stepped through a set of configuration screens: Disk Set-up; Network Configuration; and Time Zone & Root Password.

Unless you have unique requirements, such as mirroring of your boot disk, select automatic partitioning of your system disk (usually /dev/sda). Be careful to not select your data array; the size difference makes this easy.

Automatic partitioning will slice your system disk into two small partitions, /boot and swap, allocating the remaining space to your root directory (“/”). You should probably go no smaller than a 10 GB disk for your system disk.

Figure 10: Openfiler disk setup

For network setup (Figure 11), you can choose either DHCP or manual. With DHCP you’re done. Manual is no more difficult than basic router configuration. You’ll need an IP address, netmask, DNS server IPs and a gateway IP.

Figure 11: Openfiler network setup

Unless you are in one of those time zones that are oddball (for example Arizona), setting the time zone is just a matter of point and click on a large map. Zeroing in on a tighter location can be a pain.

Your root password, not that of the Web GUI, should be an oddball secure password. Once set you’ll see the commit screen shown in Figure 12.

Figure 12: Openfiler install confirmation screen

Once you hit the Next button it is time to go out for a sandwich—it took more than an hour to format the system disk, and download and install all the needed packages.

When complete, you will get a reboot screen. Pop the media out of the drive and reboot; we have a couple more steps.

Once Openfiler is up, you’ll see the Web GUI address; note it, as we’ll need it in the next section. Log in as root with your newly set password; we are going to do a full update of the system software and change the label of the RAID array.

rPath Linux uses a package management tool called Conary. To bring everything up to date, we will do an updateall by entering the following line at the command prompt:

conary updateall

This update for Old Shuck took about twenty minutes.

Once complete, we just need to change the disk label of the RAID array, which for some mysterious reason Openfiler defaults to msdos, limiting partitions to no more than 2 TB. The label needs to be changed to gpt via this command:

parted /dev/sdb mklabel gpt

Make sure you change the label on the correct volume! mklabel will wipe a disk, which can be useful for starting over, but catastrophic for your system disk.
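If you want to double-check which device is which before relabeling, the sizes give it away. A quick sketch (device names may differ on your system):

# List block devices and their sizes; the RAID array dwarfs the system disk
cat /proc/partitions
# Or ask parted directly and note the reported size and current label type
parted /dev/sdb print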

After rebooting our prepped and updated Old Shuck, we are ready to do the NAS configuration, which means making a journey through Openfiler’s Web GUI.


Openfiler Configuration

We are going to share our entire array and make it visible to all machines running Windows on our network.

First let’s get to the Web GUI. If you look at your console, you’ll see just above the User prompt an IP address based https URL. Your browser might complain about certificates, but approve the link and sign in. The default user and password are openfiler and password.

The NAS configuration requires setting up three categories of parameters:

Network: Make the server a member of your windows network

Volumes: Set up the disk you are sharing, in our case half the array

Sharing: Create a network shared directory

Once you’ve logged in, you’ll have the Status screen (Figure 13):

Figure 13: Openfiler status screen

We need to set the network access parameters to those of your Windows network, then set the Windows workgroup, and finally configure your SMB settings.

Go to the System tab and verify your network configuration is correct. At the bottom are the network access parameters you need to configure (Figure 14). These set the IP range you’ll be sharing your disk with. Enter a name, the network IP range, and a netmask; finally, set the type to Share. The name is only significant to Openfiler; it isn’t the name of your Windows workgroup or domain.

Figure 14: Openfiler network access setting

A bit of a warning: in the middle of the page you’ll see your network interfaces. Since our motherboard has dual Gigabit network ports, there is a temptation to set up a bonded interface, which is not a bad idea. The problem is that if you start the bonding process, you have to finish it properly. Backing out or making a bad guess at the settings will kill your network, and you’ll have to resort to the command line to reset your configuration.
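If you do talk yourself into bonding and knock the box off the network, you can usually limp back from the console with plain ifconfig and route while you sort out the configuration files or redo the settings in the GUI. A hedged sketch; substitute your own addresses and interface name:

# Bring eth0 back up with a temporary static address
ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
# Restore a default gateway so the Web GUI is reachable again
route add default gw 192.168.1.1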

We now have to set the workgroup so our server can join the network neighborhood on each machine. Select the Accounts tab; the second set of options is for your Windows domain (Figure 15). Check the box to use Windows domains, select NT4-style Domain (RPC), set your workgroup name, and select Join domain. Submit at the bottom of the page.

Figure 15: Openfiler Workgroup setting

The final step in completing the network set up is to configure the SMB service. Select the Services tab and click SMB/CIFS Setup on the right-hand menu. Once there (Figure 16), all you need to do is set your Netbios name and select Use Default Domain as a Winbind policy. Don’t worry about any of the other settings.

Figure 16: Openfiler SMB service setting

Apply and return to the Services tab. Go ahead and enable the SMB/CIFS service (Figure 17). You should now be able to see your server from your Windows machines.

Figure 17: Openfiler enable SMB server

Of course there is nothing there to share yet; we have to define some shares first. And to do that, we need to define a volume that we can share.

Volume Setup

Under Openfiler, volume groups are composed of physical volumes and contain user defined logical volumes that can have shared directories on them.

We are interested in setting up a single volume group called nas, made up of all 14.55 TB of our RAID array, containing a single volume called NetArray. NetArray will have one shared folder called Shares.
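A quick sanity check on that 14.55 TB figure: nine 2 TB drives in RAID 5 leave eight drives’ worth of data space, or 16 TB in decimal terms (8 x 2,000,000,000,000 bytes). Divide by 2^40 bytes per binary terabyte and you get roughly 14.55 TB as Openfiler reports it.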

First we create the Volume Group by creating a new physical volume which will house our volume. Go to the Volumes tab.

Figure 18: Openfiler create volume group

Selecting Create new physical volumes leads us to our block devices (Figure 19). We want our RAID array, /dev/sdb. Telling the difference is pretty easy: look at the sizes.

Figure 19: Two very different size block devices

Clicking our array will bring us to the partition manager (Figure 20), where we can create a single primary partition composed of all available space. This is anachronistically done by cylinders. The defaults cover the entire array; just click Create.

Figure 20: Partition creation

This will spin a bit. Once done, go back to the Volumes tab so we can give a name to our volume group (Figure 21).

Figure 21: Naming volume group

Here we need to enter nas as a name and select our only partition. Commit by clicking Add volume group. Couldn’t be more straightforward.

OK, we have a volume group; now we need a volume. On the right side menu, select Add Volume. You’ll see an odd, abbreviated page which allows you to select your volume group. Just click Change.

You’ll now get the Volume Creation page. To create our NetArray volume, enter the name, push that slider all the way to the right (…way past eleven), and leave the filesystem type as XFS. Punch Create and the cursor will spin.

Figure 22: Logical volume creation

Share Setup

With our volume in hand, we can now create a network share. Go to the Shares tab. Clicking on our volume, NetArray, will bring up a dialog (Figure 23) that allows us to create a directory. Name the directory Shares.

Figure 23: Share creation

Clicking the Shares directory will give us a different dialog (Figure 24); this is the doorway to the Edit Shares page, where we can set a NetBIOS name and set up permissions for our directory.

Figure 24: Share edit gateway

Just click Make Share and the edit page (Figure 25) will come up.

Each item is updated separately on this page. First we want to override the default name, which would otherwise look like a path specifier (nas.netarray.shares), with just Shares. Once entered, click Change.

We want Public guest access, i.e. we don’t have any kind of authentication set up at this point. Check it and click Update. You’ll notice that the bottom of the page has changed; you can now configure the permissions for our WoofPack network.

Figure 25: Share edit page

We want to grant read and write permissions to everyone on the WoofPack network, and by extension everyone in our Windows workgroup. Under SMB/CIFS, push the radio button for RW. Click Update, which will restart Samba with the new permissions. Our job here is finished.
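If you want to sanity-check the share before walking over to a Windows machine, you can do it from the Openfiler console with the standard Samba tools (a sketch; it assumes smbclient is present and the Shares name set up above):

# Dump the generated Samba configuration and look for the Shares section
testparm -s
# List the shares the server is exporting, without a password (guest access)
smbclient -N -L localhost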

As you can see we’ve taken the path of least resistance: one volume group, the whole singular array, one volume, one share, and read/write permissions for everyone. This makes sense because in the next installment of this article we are going to tear all this up. We took a low risk approach of creating our SAN, and this NAS is the first step. Creating the NAS allowed us to set up our initial disk array, familiarize ourselves with Openfiler, and create a performance baseline.

Now travel over to Windows, and you’ll be able to mount Shares as a drive (Figure 26).

Figure 26: Share mounted from Windows

Our build, Old Shuck, delivers 14 TB. All that is left is to take a measure of the performance strictly as a NAS and sum up, so we can get to the cool part, the SAN configuration.

Performance

Using Intel’s NAS Performance Toolkit (NASPT), we are running the same tests that new entrants to SNB’s NAS chart go through. This will let us determine if we hit our performance goals. We are not going for the gold here; we are just getting a feel for the kind of performance we can expect in our future NAS-to-SAN tests, so we can see the kind of improvement that moving to a SAN offers.

Figure 27: NAS Performance test configuration

This is the first of three tests, NAS performance. The second will be SAN performance, and the last, SAN as NAS performance. All NAS performance tests are going to be done on a dual core 3GHz Pentium with 3 GB of memory running Windows 7, over a Gigabit Ethernet backbone. The SAN performance will be done from the DAS Server.

Each test will be run three times and the best of the three presented. That gives us a slight advantage, but you’ll get to see a capture of the actual results instead of a calculated average. Figure 28 shows the results of the plain old NAS test:

Figure 28: NAS Performance test results

You may remember that when we formatted the 3Ware RAID array we selected a stripe size of 256K, largely because the array is going to store compressed backups and media files, in other words, large files. You can see the hit we take in performance here. Media performance is outstanding, but the benchmarks around small files (Content Creation, Office Productivity) suffered. The oddest result was 'File Copy To NAS', which varied from 59 to 38 MB/s across our test runs.

Let’s see how these numbers compare to some current SNB chart leaders in Figure 29.

Figure 29: NAS Performance comparison

Other than the odd File Copy To NAS result and poor performance in the small-file-centric Office Productivity test, we were with the pack throughout, besting the charts in directory copy from our NAS and in the media benchmarks, which is wholly expected.

Summary

Other than some disappointments with Openfiler, which we’ll cover in our conclusion, the NAS build was very straightforward. All of our components went together without a hitch and performance was more than acceptable – especially given the fact that the next closest consumer NAS in capacity is $1700 and delivers less performance. I don’t know about you, but I’m looking forward to the next set of tests.

In the next part of our series, we’ll buy and install our fiber HBAs, and configure Old Shuck as a SAN. Will it work? Will we be able to hit our price and performance goals? Stay tuned…


Introduction

So Verizon has this whole pitch about bringing fiber to the home. Well, we are going to one-up that by doing fiber in the home. Thanks to the economy, demand for SAN hardware has slowed at the same time that less expensive PCIe components have made a whole swath of perfectly good hardware obsolete. Combine that with marketplaces like eBay, which force what were local equipment liquidators to compete globally on price, and we can score data center gear for less than a tenth of its original cost.

In Part 1, we built Old Shuck, an inexpensive nine-drive, twenty-bay NAS, for less than $700: a beast that was able to pretty much hold its own against equipment that costs twice as much and holds less. This time, we are going to blow the top off that by converting our NAS to a 4 Gbps fibre channel SAN, connected to a DAS server running Windows and out to the network from there (Figure 1).

Figure 1: Block diagram of the Fibre Channel SAN/DAS/NAS

We need to purchase the required components, fiber HBAs and the cable to connect them, then install and configure the boards. We will then reconfigure our Old Shuck NAS array to support SCSI and fibre connect – Old Shuck will morph into a shining SAN. Once we get it up and running, we will do benchmarking, just to see how hardcore the performance really is.

Inevitably with these sorts of articles, someone will point out that you just don’t need this kind of equipment, performance or capacity in your home. While that may be true, I’d argue that, quoting an old Richard Pryor joke, the same folks would never drop acid and try to watch The Exorcist, just to see what it is like. There is a thrill to accomplishing something like this from the ground up, it is a cool bit of kit, the geek’s version of the old quote – “You can never be too thin or too rich” – you can never have too much storage or too much performance. Double that if you can do it on the cheap.

Brief Fibre Channel Intro

Fibre Channel (FC) has been around since the early nineties, as a replacement for the awkward supercomputer based HIPPI protocol, and has become the de facto standard for connecting high speed storage arrays to host servers.

Fibre channel (which does not require an actual fiber cable; it can run over copper) accepts SCSI block commands, allowing direct read/write access to the served storage. Each node requires an FC Host Bus Adapter (HBA), and nodes can be connected in three different topologies: looped like token ring, switched like modern Ethernet, and point to point. We are going to focus on the least expensive, point to point, which requires no switch or multiport cards, hence fewer dollars.

Each HBA has a unique identifier (like an Ethernet MAC address) called a WWN (World Wide Name), and HBAs come in speeds ranging from 1 Gbps to 20 Gbps. We are going to focus on 4 Gbps (800 MB/s), which offers the best bang for the buck and broad compatibility – the current sweet spot.

In FC terminology, which iSCSI later adopted, the array is the target node and the DAS server is the initiator. Since there is no inherent routing as with TCP/IP (part of why there is less overhead), each node needs to be configured separately.

The Parts

As we said, to accomplish a fiber connected SAN, we need two fibre channel HBAs, one for our SAN node and the other for our DAS server. And we’ll need a cable to connect the two. Our budget for these pieces is $300: two hundred for the PCIe card, $75 for the PCI-X card, leaving $25 for the cable. A tight fit.

Linux best supports QLogic 2xxx cards right now, so our SAN node needs a 4 Gbps QLogic 2xxx PCI-X card (not 2xx, since the popular 220 cards have issues). Our DAS node, running Windows 7, doesn’t have the same vendor restrictions. For probably superstitious compatibility reasons, it would be ideal to get the same family of card for our DAS. But the PCIe card promises to be our most expensive item, and the most likely to push us over budget.

Hitting eBay, we immediately found a QLogic QLE2460 4 Gb PCI-X HBA within our budget. Score! Now we need to find a QLE2460 PCIe card. All of the “Buy it Now” 2460s are outside our budget, so we join several auctions only to get sniped at a little over the $200 mark (Arrgh!). The second time around we get lucky with a price that falls just under our limit, if you include shipping.

The cable is easy: we need an LC/LC (the newer type of fiber cable connector) patch cable. We figured on roughly six feet, but a quick search turned up the 10 meter cable in the table below for less than we thought possible.

What we got:

QLogic QLE2460 4Gb PCI-X HBA $54, EBay Buy it Now
QLogic QLE2460 4Gb PCIe HBA $197, EBay Auction
Fiber Optic Patch Cable Cord 50/125 LC-LC 10M 33FT $10, FiberCables.com
Total $261
Table 1: Fibre Channel components

Great, we end up about $100 under budget: $56 under on the NAS parts and $39 here. Now I can sit back and wait for the sweet sound of the UPS van’s horn.

With the parts in hand, I start building out the array, and everything goes great until I get to the performance and stress testing phase. I’m seeing an odd message, something like “CPU #2 now running within proper temperature range.” There was no previous indication that it wasn’t in range, but it appears as though we’re running hot. Darn.

Turns out that under stress our Norco case can be a little less than adequate at cooling, especially with passive coolers on the CPUs. This might be real trouble; it isn’t like there’s a high demand for Socket 604 CPU coolers. But lo, the builder’s favorite vendor, NewEgg, has active coolers for Nocona CPUs which work fine on my 3.8 GHz Irwindales. For what they are, they’re not cheap: $72 for two, plus a stall in performance testing.

QLogic QLE2460 4Gb PCI-X HBA $54, EBay Buy it Now
QLogic QLE2460 4Gb PCIe HBA $197, EBay Auction
Fiber Optic Patch Cable Cord 50/125 LC-LC 10M 33FT $10, FiberCables.com
Previous Total $261
Dynatron H6HG 60mm 2 Ball CPU Cooler (X2) $72, NewEgg
Total $333
Table 2: Fibre Channel components, revised

Over budget on Part 2 by $33. But we now have all the components we need – done with the shopping.

Putting it All Together

We now have our FC HBAs, one for each end of the connection, and the cable to connect them. Bringing up the SAN is straightforward, but will require working at the shell level, and familiarity with a Unix editor.

We have four main steps needed to complete our big conversion:

  • Install – Install our two HBAs, one under Windows, the other under Openfiler.
  • Configure – Configure Openfiler to run as a SAN FC Target
  • Convert – Change over our RAID array to run as a SCSI array.
  • Start-up – Connect Windows DAS server as an FC Initiator

Installing Fibre Connect HBA on DAS Server

We are going to start with the easiest part, configuring the PCIe QLogic FC HBA in our DAS server. We are starting here because we are going to need the Port Name / WWN of the adapter to configure Openfiler as a SAN. We’ll then install our second HBA in Old Shuck.

Remember, these cards are used; you have no guarantee that the settings are in a known state, so it’s probably a good idea to reset both of them to their factory settings. You can do this at boot-up: once you see the banner for the QLogic card, hit Ctrl-Q, and there you’ll find a BIOS menu selection for restoring the card to the default factory settings.

There are no real hard requirements for the DAS server; dedicating a machine is not necessary. But it goes without saying that the more capable the box, the better a gateway it’ll make performance-wise. A Gigabit network connection and at least 2 GB of memory with a multicore processor would be a good start.

We are using BlackDog, our mainline Windows desktop, a homebrew AMD Phenom II Black X3 720 with the fourth core unlocked and overclocked to the point that it emits a small subsonic tortured whine that every electric company loves. Effectively an X4 965, with 8 GB of memory and a SSD for paging, running Windows 7 x64.

You will need to get QLogic’s SANSurfer from their site. Select your model HBA to get a list of downloads. Don’t worry about the driver, since Windows will automatically install it for you; all you need is the FC Manager. If, like us, you are on Windows 7, there isn’t a version for it; just grab the version for Vista, it’ll do the job.

Before installing the software, go ahead and shut down, then gently insert the HBA into your PCIe bus.

After bringing your server back up, Windows should recognize the card and install the driver. You now need to install SANSurfer, choose the manager GUI and Windows Agent (Figure 2).

Figure 2: SANSurfer Install

The Windows Agent acts as the initiator, and the manager allows us to configure the card. Once installed, go ahead and run it.

First, you’ll need to connect to the HBA. From the toolbar, hit Connect and you’ll get a pop-up offering to connect to localhost (Figure 3). Check the auto-connect box and hit Connect.

Figure 3: SANSurfer connect

You should now see your HBA listed in the left hand pane (Figure 4), with everything marked Good. If not Good, drop to the QLogic BIOS, available at boot-up, and reset everything to factory settings, then try again.

Figure 4: HBA Status

From the manager GUI, select “Port 1” and from the top bar select Wizards -> General Configuration Wizard (figure 5).

Figure 5: HBA Configuration Wizard

Write down the Port Name field value; we are going to need it when we configure Old Shuck. Select the HBA and hit Next, skip past the informational page, and go to the connection settings page.

On this page, set Connection Option to 1-Point to Point Only and the Data Rate to 4 Gbps on the associated pull-downs (Figure 6).

Figure 6: Connection Settings Pulldown

Now, skip to the end by clicking Next, and confirm the configuration by clicking Finish on the last window. You’ll be prompted to save the configuration; this requires a password, and the default password is config.

You should now see the status screen with all the details of your card (Figure 7).

Figure 7: Completed Configuration

Leave this up and connected for now, we’ll be coming back to it.

Installing Fibre Connect HBA On Old Shuck

I next powered down Old Shuck and installed the HBA in a free slot. Comparative benchmarking showed that the best performance is had with the HBA in Slot #3, the 133 MHz slot. This indicates that the HBA, not our 3Ware card, is the bottleneck.

Installing the 3Ware card in the faster slot, as we mentioned, does require disabling the onboard SCSI; otherwise you’ll see a resource conflict. What was surprising was not the problem, but Supermicro’s excellent support. We contacted them by email with a problem on a five year old motherboard, and within two hours they had responded with the correct solution. How many other vendors support even their current products so well? In comparison, on previous builds, real problems with both AMD and Asus saw no resolution, or more often no response at all. Impressive, and kudos to Supermicro!

After installing the card, boot the machine with one hand on the keyboard; when the QLogic BIOS banner appears, hit Ctrl-Q to enter its BIOS. We need to verify the Adapter Settings (Figure 8), since we want our two cards to agree on settings. The three settings are:

Frame Size: 2048 (default)
Connection Option: 1 (Point to Point)
Data Rate: 3 (4 Gbps)

Figure 8: QLogic BIOS, Adapter Settings

If there are odd values, or you have problems later, the QLogic card can be reset to factory defaults from the main BIOS screen. Once the values are set, go ahead and complete the boot up of the SAN server.

Take a moment now, once the card is in, to make the cable run between the DAS server and the SAN server. The cable has a nice snap to it, doesn’t it?

The QLogic card is now ready to be paired with our initiator. We just have to configure Openfiler, first for our SAN storage, and then associate that storage with our new HBA.

Undoing The Previous NAS settings

We need to reconfigure our NAS array as a SAN array. Remember, a NAS shares filesystem-based storage, whereas a SAN shares blocks directly. To set up SAN storage, we need to reformat our array, ripping up the XFS filesystem and recreating it as a block-level (iSCSI-type) volume.

We are going to delete our NAS volume group and create a new SAN volume group, which requires unwinding our shares and volume configuration from the bottom up.

In the Web GUI, from the Shares tab, delete the Shares entry that you created in Part 1. Then from the Volumes tab (Figure 9), delete the netarray volume, and finally delete the nas volume group using the Volume Groups selection in the right hand menu. I’ve seen Openfiler silently have issues removing a volume group, so if the nas group doesn’t disappear, you’ll need to zap it from the command line. To do this, log in as root and execute the vgremove command:

vgremove nas
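Either way, it’s worth confirming the old group is really gone, and seeing what physical volumes are left to build on, using the standard LVM query commands (a quick sketch):

# Show remaining volume groups; nas should no longer be listed
vgs
# Show physical volumes and the groups they belong to
pvs
# Show any logical volumes still hanging around
lvs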

We are now back to a blank slate. Following the same procedures in Part 1 for creating a volume group, create one called san, using the same physical volume that the NAS volume group used.

Figure 9: Openfiler Create Volume Group

Once the san volume group has been added, we need an iSCSI volume. Under the right hand menu, select Add Volume (Figure 10) and create an iSCSI volume called FiberArray. Pushing that slider over to 14 TB is so cool.

Figure 10: Create Volume

Done. Now to the more difficult part, configuring and bringing up the SAN services on Old Shuck.

Configuring Openfiler as FC SAN

Openfiler uses the open-source Linux SCSI target subsystem (SCST), which handles iSCSI and our QLogic HBA. To make our array available by fiber, we need to tell SCST three things: 

  • That we have an FC HBA.
  • The volume it should serve.
  • Who’s going to be asking for our hunk of storage.

Configuring these details is straightforward, but regrettably Openfiler doesn’t provide a GUI to do so. We have to do it by hand, at the shell level.

You can work from the console or, probably more conveniently, install either PuTTY or Cygwin and connect via ssh (secure shell); both let you cut and paste. By default, ssh is enabled under Openfiler. Either way, log in as root.

Initially, we have some groundwork to do; we have to change when SCST will start, and enable it to run.

First, change the service’s start priority to ensure volumes are mounted and required services are up before sharing begins. Towards the top of the SCST initialization script in /etc/init.d you’ll find a chkconfig directive. Doing a grep chkconfig /etc/init.d/scst should bring you to where you need to be. It should look something like this:

[root@OldShuck ~]# grep chkconfig /etc/init.d/scst
#chkconfig: - 14 87

Using your favorite editor, change the first number, in this case 14, to 99. This will move the startup of the scst service to the end of the line, ensuring everything it needs has already been started. If you grep the file again, it should now read:

#chkconfig: - 99 87
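If you’d rather not open an editor, a one-line sed does the same job. A sketch, assuming the stock priority of 14 shown above; back the file up first:

cp /etc/init.d/scst /etc/init.d/scst.bak
# Bump the scst start priority from 14 to 99 in place
sed -i 's/chkconfig: - 14 87/chkconfig: - 99 87/' /etc/init.d/scst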

We can now tell Old Shuck that it needs to start the scst service at boot-up, and we’ll go ahead and start the service now so it can be configured.

This should look like:

[root@OldShuck ~]# chkconfig scst on
[root@OldShuck ~]# service scst start
Loading and configuring the mid-level SCSI target SCST.
[root@OldShuck ~]#

One last thing, we need the name of the demon if we are to summon him. In this case, that name is the unique identifier for your HBA. The WWN is in a file called port_name, in a directory assigned by the system. To get it:

cat  /sys/class/fc_host/host*/port_name

Navigating directories, it looks like this:

[root@OldShuck ~]# cd /sys/class/fc_host
[root@OldShuck ~]# ls
host5
[root@OldShuck ~]# cd host5
[root@OldShuck ~]# cat port_name
0x210000e08b9d63ae

When working with your WWN, it is written like a MAC address, byte by byte in hex, so Old Shuck’s WWN is 21:00:00:e0:8b:9d:63:ae.
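If you don’t feel like inserting the colons by hand, a little sed will reformat the raw value for you (just a convenience sketch):

# Turn 0x210000e08b9d63ae into 21:00:00:e0:8b:9d:63:ae
cat /sys/class/fc_host/host*/port_name | sed 's/^0x//; s/\(..\)/\1:/g; s/:$//'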

We have everything set up and are now ready to start configuring SCST. SCST is configured through the scstadmin command; we will use scstadmin to set the SCST parameters, then write the configuration to /etc/scst.conf so it persists across reboots.

First, to enable the FC HBA as a target host, you’ll need the WWN from above, i.e. scstadmin -enable <your wwn>

[root@OldShuck ~]# scstadmin -enable 21:00:00:e0:8b:9d:63:ae
Collecting current configuration: done.
--> Enabling target mode for SCST host '0x210000e08b9d63ae'.
All done.
[root@OldShuck ~]#

Now add the array to SCST. We’ll assign the array a device name, tell it to use the virtual disk handler, provide the device path to the volume we set up above, and, for now, use the WRITE_THROUGH cache policy, i.e. scstadmin -adddev <your device name> -handler vdisk -path /dev/<volume group>/<volume> -options WRITE_THROUGH

[root@OldShuck ~]# ls /dev/san
fiberarray
[root@OldShuck ~]# scstadmin -adddev SAN_LUN0 -handler vdisk -path /dev/san/fiberarray -options WRITE_THROUGH
Collecting current configuration: done.
--> Opening virtual device 'SAN_LUN0' at path '/dev/san/fiberarray' using handler 'vdisk'.
All done.
[root@OldShuck ~]#

The WRITE_THROUGH option means that we acknowledge the write once it is flushed from the cache to the disk. We could use the WRITE_BACK policy, which acks the write before it is flushed; the problem is that if the power fails or something else goes wrong, you could lose data. The tradeoff is speed, since it takes time to flush the cache.

We now assign our device to the default group and give it a LUN. SCST provides for groups to ease administration of who can access what. SCST comes with a Default group (note the capital D) which is just that. We’ll assign our device to that group and give it a unique logical unit number (LUN), in our case zero, i.e. scstadmin -assigndev <your device name> -group Default -lun 0

[root@OldShuck ~]# scstadmin -assigndev SAN_LUN0 -group Default -lun 0
Collecting current configuration: done.
--> Assign virtual device 'SAN_LUN0' to group 'Default' at LUN '0'.
All done.
[root@OldShuck ~]#

If you were to have multiple volumes, you’d increment the LUN for each subsequent volume.
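For example, if you later carved out a second Openfiler volume, say /dev/san/fiberarray2 (a hypothetical name), adding and assigning it would look much like the first, just with the next LUN:

scstadmin -adddev SAN_LUN1 -handler vdisk -path /dev/san/fiberarray2 -options WRITE_THROUGH
scstadmin -assigndev SAN_LUN1 -group Default -lun 1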

Openfiler SAN Configuration - more

We want our DAS server to access all the storage in the Default group. In the terminology of SCST, the DAS server’s HBA, identified by its WWN, is a user (Figure 11). You’ll need the port name (the WWN) that we wrote down when we configured the Windows box using SANSurfer above.

Add DAS server HBA to default group:

scstadmin -adduser <DAS WWN> -group Default

Figure 11: Add DAS HBA as a User

The configuration is almost complete; we just have to write it out to the config file /etc/scst.conf, so SCST can use it whenever we restart our SAN.

scstadmin -writeconfig /etc/scst.conf

By default, our HBA is disabled in the SCST configuration file; we need to enable it by moving the host entry for our SAN HBA to the list of enabled targets. Do this by editing /etc/scst.conf and cutting and pasting the entry (Figure 12).
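In practice this just means the HOST line moves from the [TARGETS disable] section to [TARGETS enable]. After the edit, that part of the file should read something like this, with your own WWN of course:

[TARGETS enable]
#HOST <wwn identifier>
HOST 21:00:00:e0:8b:9d:63:ae

[TARGETS disable]
#HOST <wwn identifier>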

Figure 12: Enabling HBA as Target

We are done configuring SCST. Your scst.conf file should look like the listing below.

# Automatically generated by SCST Configurator v1.0.11.
# NOTE: Options are pipe (|) seperated.

[OPTIONS]
#OPTION <1|0|YES|NO|TRUE|FALSE|VALUE>

# Copy configuration options during a -writeconfig
KEEP_CONFIG FALSE

# For FC targets, issue a LIP after every assignment change
ISSUE_LIP FALSE

[HANDLER vdisk]
#DEVICE <vdisk name>,<device path>,<options>,<block size>,<t10 device id>
DEVICE SAN_LUN0,/dev/san/fiberarray,WRITE_THROUGH,512,SAN_LUN0 df0abcfc

[HANDLER vcdrom]
#DEVICE <vdisk name>,<device path>

[GROUP Default]
#USER <user wwn>
USER 21:00:00:1B:32:0E:5D:91

[ASSIGNMENT Default]
#DEVICE <device name>,<lun>,<options>
DEVICE SAN_LUN0,0

[TARGETS enable]
#HOST <wwn identifier>
HOST 21:00:00:e0:8b:9d:63:ae

[TARGETS disable]
#HOST <wwn identifier>

We have one last change to make before we are done. We have to swap out the default QLogic driver kernel module, qla2xxx, so that the QLogic target driver module, qla2x00tgt, is loaded instead.

To do this you need to edit the module configuration file, /etc/modprobe.conf, changing qla2xxx to qla2x00tgt. When you are done editing, the file should look like the listing below.

alias eth0 e1000
alias eth1 e1000
alias scsi_hostadapter1 3w-9xxx
alias scsi_hostadapter2 ata_piix
alias usb-controller uhci-hcd
alias usb-controller1 ehci-hcd
alias scsi_hostadapter3 qla2x00tgt
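If you prefer a one-liner to an editor here as well, something along these lines should do it (a sketch; back up the file and check the result before rebooting):

cp /etc/modprobe.conf /etc/modprobe.conf.bak
# Swap the initiator driver alias for the target driver
sed -i 's/qla2xxx/qla2x00tgt/' /etc/modprobe.conf
# Confirm the change
grep qla /etc/modprobe.conf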

That’s it, all done. We are ready to fire up Old Shuck as a SAN for the first time. Go ahead and reboot your SAN server; once it is back up, it should be visible to your Windows DAS box.

After the reboot, SANSurfer should be able to see the disks – take a look in the left hand pane, select LUN 0. It should look like Figure 15.

Figure 15: SANSurfer LUN 0

If you cannot see your disk as LUN 0, go back through your configuration, verify both scst.conf and modprobe.conf, especially the WWNs, which are easy to mistype.

Configuring The SAN On The DAS Server

We can now configure our disk. Highlight 'Port 1' in the left pane of SANSurfer; in the right pane you should see a tab called Target Persistent Binding (Figure 16). Select it, check the Bind All box, and click Save. We are done with SANSurfer now, so quit.

Figure 16: Target Persistent Binding Tab

The rest of the configuration is standard Windows, and your SAN is just a very large disk. Under Control Panel -> System and Security -> Administrative Tools bring up the Disk Management tool (Figure 17).

Figure 17: Windows Disk Management Tool

Go ahead and initialize and format the disk. You can then go to its properties to share the disk to your network, making your DAS server a logical NAS.

See Figure 18. Cool, right?

Figure 18: Shared SAN Disk

With everything set, let’s buckle in and take a look at performance.


SAN Performance

We are looking for Old Shuck’s SAN node performance figures, that is the speed over our 4Gb fiber link to BlackDog, our DAS node. These numbers should be much better than those of a comparable native drive, let alone our initial NAS speeds. Figure 19 shows our test topology.

Figure 19: DAS to SAN Test Topology

When we initially started testing our SAN performance using Intel’s NASPT test suite, we got some outrageous numbers, like 2010 MB/s for the 'File Copy from NAS' test and 1633 MB/s for '4x HD playback'. These indicated either that something was wrong or that we had inadvertently overclocked our PCIe x4 slot by a factor of two. Since there wasn’t any smoke pouring out of BlackDog, we guessed something was wrong.

After some brief research, we realized that we were boneheads: NASPT recommends a maximum of 2 GB of memory (being a Java application, it leaves memory management to the JVM). At 8 GB, caching was giving us speeds that were not in the realm of possibility (as much as we wanted to believe them…).

When we stripped BlackDog back down to 2 GB and retested, results emerged that made more sense (Figure 20).

Figure 20: Old Shuck SAN Performance

Very sweet numbers, with two exceptions: the Content Creation figure of 13.7 MB/s, which was most likely due to our stripe size, and the File Copy to Old Shuck result of 65.4 MB/s, still mysteriously low. Hopefully our planned tuning of BlackDog/Old Shuck as a logical NAS, in the next part, will give us a clue why this number is so much lower than our commercial competitors'. It has us scratching our heads…

In Figure 21, we can see how we measure up against the SNB top performers. In all but the anomalous File Copy to NAS category, we trounce ‘em. But that is not a surprise, we’ve unfettered the traffic from the constraints of our Gigabit network.

Figure 21: Performance Comparison

Interestingly, if we compare the test figures of our SAN against those of the same kind of hard disk (compressed) connected on BlackDog’s SATA bus, Old Shuck still shines – putting CPU and fiber muscle behind the same model of disk makes a big difference (figure 22).

Figure 22: Hard Drive plus Standings

Wrap Up

Though the conversion/configuration process has a fair number of steps, it is straightforward and took about 30 minutes; the most difficult part is making sure the SCST configuration file is correct.

And as you can see by the benchmarks, the performance is outstanding.

Let’s say you make your HTPC the DAS server, running a long fiber cable from your living room (the WAF of the Norco case is right up there with the dogs-playing-poker painting), converting the NAS you built to a SAN for less than $300. Now you can stream the densest Blu-ray resolution known to man directly to your TV without a hitch, dramatically reduce the time it takes to rip that same Blu-ray disc, and store, if fully loaded, about fifty more terabytes beyond that. You can also do backups and update music and pictures from any machine in the house via that shared drive.

You won’t have to worry about performance or storage for quite some time (or until 2015, when storage guru Tom Coughlin comes knocking at your door with his Petabyte.)


In figuring out how to convert our NAS to a SAN, I’d be remiss in not thanking the sources used to solve the problem of configuring fibre channel, a problem that has no official documentation. First, there is Snick’s thread in the Openfiler forums. The other source we used was Brian’s How To. Without their generosity I’d probably be bald and launching expensive equipment through windows (the kind you see through walls with).