Qnap NAS

Pete Allen

Looking for some advice.

I have been using direct-attached storage until now, but recently bought a QNAP NAS for backups: 10TB set up as RAID 5.

I have it working OK, but I'm wondering if I'm using it in the best way for both performance and security.

I am on BT Infinity and have two PCs connected via HomePlugs; the NAS is connected to the BT router through its GigE port.

I know I'm not getting the best from the HomePlugs, but I prefer them to trailing wires all over the house.

I've been reading a bit about network switches, but I'm not sure how these would work with the HomePlugs, as there's not a lot of info about.

Any advice would be appreciated.
 
Any advice would be appreciated.
Invest in some decent network infrastructure.

Firstly, ditch the Home Hub (HH) as a hub of any description. You can use it as the bridge to the modem if you want, but remove it from day-to-day transactions on the network.

The HomePlugs will be running at way below their rated speed, so for anything where you want to move serious amounts of data, I'd remove those too. You essentially have two options:

  • Run cables everywhere (or at least run 1 cable and add a switch at the other end and distribute from there).
  • Buy two decent wireless routers and bridge them (by decent, I mean dual-band ones that can be bridged on the 5GHz band and that are triple-stream at 450Mbps)

The first option is cheapest - probably less than £100 to do it properly with a couple of switches and using the HH as the bridge. The second will be ~£230 for a pair of decent routers and some cables (I'd recommend the Asus RT-N66U here). Clearly, there are intermediate points: you could do the first with a decent router replacing the HH (again the Asus RT-N66U) and a switch at the other end, which would cost £150-£170ish.
 
Thanks Andy,

What do you mean by day-to-day transactions?

I take it the speed benefit is going to be with the NAS, as I get 50Mb/s and 17Mb/s internet through the HomePlugs.

Happy to splash out on better hardware, but running anything wired is not an option.

Thanks for your time. :thumbs:
 
What do you mean by day-to-day transactions?
Anything where you want to move large amounts of data around the house. Clearly, you are limited by your Infinity connection for the Internet, but around the house you should try to be as fast as possible.

I take it the speed benefit is going to be with the NAS, as I get 50Mb/s and 17Mb/s internet through the HomePlugs.
Ouch. You have a gigabit connection on the NAS and are connecting to it at 17Mbit/s... that's nearly 60x slower. To my server, I get write speeds approaching 100MBytes per second (gigabit everywhere here). On your 17Mbit link, you'll be lucky to reach 2MBytes/sec.
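The arithmetic behind those numbers is just dividing the link speed in Mbit/s by 8 to get MBytes/sec - a rough best-case sketch, using the 1000/450/17 figures from this thread (real transfers will be lower still once protocol overhead kicks in):

```shell
# Best-case throughput for each link speed mentioned above, in MBytes/sec.
# Integer division by 8 (bits per byte); real-world figures will be lower
# once protocol overhead and HomePlug line conditions are factored in.
for mbits in 1000 450 17; do
    echo "${mbits} Mbit/s link -> ~$((mbits / 8)) MByte/s best case"
done
```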

Happy to splash out on better hardware, but running anything wired is not an option.
In which case, you'll need to be looking at a couple of decent routers, or a single router and a couple of dual-band Wi-Fi cards in the back of the PCs. Are the two computers close enough together that you could cable both to a single router, or is wireless the best option for those too? Can you fit PCI/PCI-e cards in them, or are they closed boxes à la Apple iMac?
 
No problem fitting cards in both PCs, and they are only 6ft apart, so they don't need to be wireless at that end.

So, is there any benefit from having two routers, as opposed to one router and cards in each PC?

Sorry if I sound a bit thick, but I've never done this before.
 
The advantage of back-to-back routers is that you only have one "thing" talking on the wireless channel. With two cards in the PCs, when both want to talk they compete for the channel, so throughput will be lower when both are on.

It's difficult to predict what sort of throughput you will get without trying, but the further apart the routers are and the more walls the signal has to pass through, the lower the performance will be.

Are you really, really sure you can't get a single Ethernet cable to the two PCs? The speed will be at least 2x what you can achieve wirelessly (and that's with a good signal).
 
Yeah, definitely no chance of a cable; I'm going to go with the two-router option.

Thanks for the advice. :thumbs:
 
Yeah, definitely no chance of a cable; I'm going to go with the two-router option.
Depending on your needs/costs/house topology, a pair of RT-AC66Us MAY be a better option. AC can transfer data up to 3x faster than wireless N. Just set one up as a media bridge and connect it to the other as shown here: http://support.asus.com/Search/KDetail.aspx?SLanguage=en&no=42392BBD-29F7-D0C7-6E04-BA444E44B750&t=2

I've never used AC before (it's a new standard), but it does boast higher speeds than N (the router claims up to 1.3Gbps - but as usual, actual throughput will be lower). Clearly, it will depend on placement and distance apart, but you should be able to use the 5GHz channels (I'm the only person around here with a 5GHz-enabled router, so no congestion :)).

You can then replace the HH with the router at the NAS end (which is, I assume, by your Infinity modem).
 
Yep, the NAS is by the modem downstairs; the PCs are in an upstairs converted bedroom.

One more question: looking at the specs of the Asus RT-N66U, it says there is no built-in modem. Would I be better off getting a router with a modem built in?

Thanks
 
One more question: looking at the specs of the Asus RT-N66U, it says there is no built-in modem. Would I be better off getting a router with a modem built in?
On Infinity, the modem is separate - it's the (white) box that connects to the phone socket.

I'd definitely look into the AC66U before splashing my cash though.
 
Don't forget that while RAID 5 performs faster, it is the least reliable of the RAID configurations.
 
Also neither are "less reliable"
Ah, you've never had to recover data from a crashed RAID array - it's not a pretty sight.

The hard drives work harder in a RAID 5 array than in any other disk configuration. For every piece of data written to a RAID disk, it has to write parity information to all the other drives in the RAID 5 array.
 
Ah, you've never had to recover data from a crashed RAID array - it's not a pretty sight.

Please don't make assumptions; I work with RAID every day :)

The hard drives work harder in a RAID 5 array than in any other disk configuration. For every piece of data written to a RAID disk, it has to write parity information to all the other drives in the RAID 5 array.

I'd like to see some reliable articles on that.

As I see it, RAID 1 for example also writes data to every drive - is that any less "hard working"? What about RAID 2, 3, 4, 6 and 53, which all write parity? Then 10 and 01 (the same as 1: writes to each disk)...

Personally, I've never seen any pattern in failed disks at a particular RAID level.
 
Given it looks like you are about to have an argument about RAID configurations - at least get your facts right. RAID 0 is the least reliable, as it stripes across all the disks with NO redundancy. At least RAID 5 has one disk's worth of redundancy.

The smart money, however, is on RAIDZ configurations, which are available through ZFS ;)
 
Ah, you've never had to recover data from a crashed RAID array - it's not a pretty sight.

The hard drives work harder in a RAID 5 array than in any other disk configuration. For every piece of data written to a RAID disk, it has to write parity information to all the other drives in the RAID 5 array.

Not strictly true.

Out of RAID 0, 1 and 5, disks work hardest in RAID 1, as all data is written to each disk in the mirror.

Next would come RAID 5. RAID 5 stripes the data across all disks in the array and distributes only the parity information (a checksum) across those disks. Note that the data is not duplicated to each disk - it is spread relatively evenly across them - so a disk in a five-disk RAID 5 array works far less than a RAID 1 disk.

Finally, RAID 0 (striping) is basically RAID 5 without the protection of redundancy. If you are after pure read and write speed, RAID 0 is your answer, and of the three RAID levels discussed here (0, 1 and 5), RAID 0 puts the least strain on its disks - but of course it is the least reliable, as the chance of array failure increases with each disk added.
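The per-disk write load argument above can be put into numbers - a rough sketch only, assuming 100GB written to a 5-disk set (2-way mirror for RAID 1) with an even stripe distribution, and ignoring the cost of computing parity:

```shell
# Data landing on EACH disk when 100GB is written, by RAID level.
# Illustrative figures: 5-disk array, even distribution assumed.
awk 'BEGIN {
    data = 100; n = 5   # GB written, disks in the array
    printf "RAID 0: %g GB per disk\n", data / n        # pure striping
    printf "RAID 5: %g GB per disk\n", data / (n - 1)  # data + one disk worth of parity
    printf "RAID 1: %g GB per disk\n", data            # full copy on each mirror member
}'
```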
 
Given it looks like you are about to have an argument about RAID configurations - at least get your facts right. RAID 0 is the least reliable, as it stripes across all the disks with NO redundancy. At least RAID 5 has one disk's worth of redundancy.

Technically it doesn't make it any less "reliable" per se; it's just that the odds of a disk failing and forcing a rebuild of the array are increased :thumbs:
 
Technically it doesn't make it any less "reliable" per se; it's just that the odds of a disk failing and forcing a rebuild of the array are increased :thumbs:
Noo... What I mean is: one disk failing in RAID 0 = all data gone. One disk failing in RAID 5 = replace the disk and rebuild (and you should still be able to use the array whilst it is "broken").

This all assumes no backup - just talking about the RAID array itself....
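The difference can be put into back-of-envelope numbers - a rough illustration only, assuming a made-up 5% annual failure rate per disk, independent failures, and ignoring the rebuild window (which makes RAID 5 worse in practice than this suggests):

```shell
# Odds of losing the array's data within a year for a 4-disk set.
# RAID 0 dies on any single failure; RAID 5 needs 2+ failures.
# p = 5% annual per-disk failure rate (an assumed figure).
awk 'BEGIN {
    p = 0.05; n = 4
    raid0 = 1 - (1 - p)^n                      # P(at least one disk fails)
    raid5 = raid0 - n * p * (1 - p)^(n - 1)    # P(two or more disks fail)
    printf "RAID 0 data loss: %.1f%%\n", raid0 * 100
    printf "RAID 5 data loss: %.1f%%\n", raid5 * 100
}'
```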
 
Noo... What I mean is: one disk failing in RAID 0 = all data gone. One disk failing in RAID 5 = replace the disk and rebuild (and you should still be able to use the array whilst it is "broken").

This all assumes no backup - just talking about the RAID array itself....

Regarding RAID, your best bet is to go for something like a StorageWorks array with multiple disks in the spare set. One disk fails and is instantly replaced by one from the spare set and rebuilt; another disk fails and is replaced instantly, and so on. Just replace the disks in the failed set and job's a good'un. You could probably pick up an old HSG80 on fleabay cheap too. Just a tiny little bit OTT for home use though :);)
 
Regarding RAID, your best bet is to go for something like a StorageWorks array with multiple disks in the spare set. One disk fails and is instantly replaced by one from the spare set and rebuilt; another disk fails and is replaced instantly, and so on.
My fileserver is based on FreeBSD and I run RAIDZ (ZFS equivalent of RAID5). There is also RAIDZ2 (RAID6 equivalent) and RAIDZ3 should you want 2 or 3 disks worth of parity. You can attach any number of hot spares (currently I have one) and I have replaced the disk and rebuilt a failing array without rebooting the server (although it wasn't automatic). In fact I was also able to take the failed disk offline, replace it physically and then add the new disk as the new spare without taking the machine down. About 6 commands in total...
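For the curious, a disk swap like the one described above looks roughly like this from the command line (the pool name and GPT labels here are illustrative, not the actual ones from my setup):

```shell
# Take the failing member out of service; the pool keeps running degraded
zpool offline storage gpt/disk5
# ...physically swap the drive and GPT-label the replacement, e.g. gpt/disk6...
# Resilver onto the new disk; this runs in the background
zpool replace storage gpt/disk5 gpt/disk6
# Watch resilver progress and pool health
zpool status storage
# Re-attach a hot spare if you keep one around
zpool add storage spare gpt/disk7
```

All of this happens with the pool online - no reboot required.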

Just a tiny little bit ott for home use though :);)
Yes, mine is virtually silent (two low-speed fans blowing air over the disks just to keep them from overheating). The rest of the system (PSU and CPU cooler) is passive. (Un?)fortunately, no server-room environment here ;)
 
My fileserver is based on FreeBSD and I run RAIDZ (ZFS equivalent of RAID5). There is also RAIDZ2 (RAID6 equivalent) and RAIDZ3 should you want 2 or 3 disks worth of parity. You can attach any number of hot spares (currently I have one) and I have replaced the disk and rebuilt a failing array without rebooting the server (although it wasn't automatic). In fact I was also able to take the failed disk offline, replace it physically and then add the new disk as the new spare without taking the machine down. About 6 commands in total...

Yes, mine is virtually silent (two low-speed fans blowing air over the disks just to keep them from overheating). The rest of the system (PSU and CPU cooler) is passive. (Un?)fortunately, no server-room environment here ;)

Must try the ZFS stuff at some point - I may be doing a Solaris course this year through work. Taking failed disks offline is pretty standard Unix, really. Server rooms are great in the summer when you need to cool down a little, and IBM blade chassis are great if you want to warm up - just stand behind one :D
 
Must try the ZFS stuff at some point - I may be doing a Solaris course this year through work. Taking failed disks offline is pretty standard Unix, really. Server rooms are great in the summer when you need to cool down a little, and IBM blade chassis are great if you want to warm up - just stand behind one :D
Gotta love a system that allows you to do something like this (all from the command line):

Code:
[andy@MAINSERVER ~]$ zpool status
  pool: backup
 state: ONLINE
  scan: scrub repaired 0 in 1h20m with 0 errors on Sun Mar 31 03:20:41 2013
config:

        NAME           STATE     READ WRITE CKSUM
        backup         ONLINE       0     0     0
          label/back1  ONLINE       0     0     0
          label/back2  ONLINE       0     0     0

errors: No known data errors

  pool: storage
 state: ONLINE
  scan: scrub repaired 0 in 12h36m with 0 errors on Sat Mar 30 13:36:54 2013
config:

        NAME           STATE     READ WRITE CKSUM
        storage        ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0
            gpt/disk5  ONLINE       0     0     0
            gpt/disk4  ONLINE       0     0     0
        spares
          gpt/disk3    AVAIL

errors: No known data errors

  pool: tank1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Mar 31 02:00:54 2013
config:

        NAME          STATE     READ WRITE CKSUM
        tank1         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/sys0  ONLINE       0     0     0
            gpt/sys1  ONLINE       0     0     0

errors: No known data errors

So, three pools:
  • A mirrored backup pool
  • A striped RAIDZ array with a hot spare
  • Mirrored boot drives (actually SSDs)

EDIT: the scrubs refer to the online data-integrity check - I know if my drives are failing, as they are checked cover-to-cover on a weekly basis.
 
Regarding RAID, your best bet is to go for something like a StorageWorks array with multiple disks in the spare set. One disk fails and is instantly replaced by one from the spare set and rebuilt; another disk fails and is replaced instantly, and so on. Just replace the disks in the failed set and job's a good'un. You could probably pick up an old HSG80 on fleabay cheap too. Just a tiny little bit OTT for home use though :);)

Hot spare.

Most QNAPs allow you to configure this. They really are excellent devices - so much so that the two firms signed a deal allowing them to be rebadged as Cisco.

QNAPs also support iSCSI, which, if set up correctly, can make for a very flexible SAN-like environment.
 
Hot spare.

Most QNAPs allow you to configure this. They really are excellent devices - so much so that the two firms signed a deal allowing them to be rebadged as Cisco.

QNAPs also support iSCSI, which, if set up correctly, can make for a very flexible SAN-like environment.

Yeah, I know - not knocking QNAP in any way; I have a four-bay one myself. It'll never be as robust or offer as much redundancy as a data-centre solution, but the discussion led away from home use a bit!
My QNAP does occasionally 'fail' a perfectly good disk; to its credit, I can move the disks around and it recovers fine. Just a bit of a pain...
 
Hot spare.

Most QNAPs allow you to configure this. They really are excellent devices - so much so that the two firms signed a deal allowing them to be rebadged as Cisco.

QNAPs also support iSCSI, which, if set up correctly, can make for a very flexible SAN-like environment.

Ha ha, I must start using the TP website instead of the mobile app so that I can write my posts properly!
 
Hot spare.

Most QNAPs allow you to configure this. They really are excellent devices - so much so that the two firms signed a deal allowing them to be rebadged as Cisco.

QNAPs also support iSCSI, which, if set up correctly, can make for a very flexible SAN-like environment.

Same with Synology (both hot spares and iSCSI). Sometimes I think they come out of the same factory or something, because they're very similarly spec'd.
 