Solved

RAID 5 disk speed impact on gigabit NAS

Posted on 2008-10-13
14
783 Views
Last Modified: 2013-11-14
Hello there,

I have purchased a NAS device which is connected to my network via a gigabit network connection.

I have 4 * 7.5k rpm disks in this device in a RAID 5 configuration.

My supplier recommended not getting 10k drives as he said 'the network will be the bottleneck'... however, two of these disks went wrong recently, resulting in data corruption. I am wondering if there is an advantage to be gained in having faster drives, due to the fact that the RAID needs the best speed possible to do its striping etc.?

Would having faster drives be better for such a setup?

Many thanks,

BH
Question by:butterhook
14 Comments
 
LVL 55

Expert Comment

by:andyalder
ID: 22701843
>My supplier recommended not getting 10k drives as he said 'the network will be the bottleneck'

LOL, I suggest getting a new supplier. Let's assume you get 150 IOPS from those 10K disks that you didn't buy, and also let's assume you read 64K in a single I/O (quite unlikely, since directory reads etc. are much smaller). That's 9.6MB/s, or about 77Mbps, per disk -- and the network that's supposed to be a bottleneck is more than ten times that fast.
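The arithmetic above is easy to sanity-check in a couple of lines (the 150 IOPS and 64K-per-I/O figures are the assumptions from this comment, not measured values):

```python
# Back-of-envelope check: random-I/O throughput of one 10k rpm disk
# versus a gigabit link. 150 IOPS and 64 KB per I/O are assumptions,
# not measured values for any particular drive.
iops = 150                      # assumed random IOPS for a 10k rpm disk
io_kb = 64                      # assumed data moved per I/O, in KB
mb_per_s = iops * io_kb / 1000  # throughput in MB/s per disk
mbit_per_s = mb_per_s * 8       # same figure in Mbit/s
gig_link = 1000                 # gigabit LAN capacity, Mbit/s
print(f"{mb_per_s:.1f} MB/s = {mbit_per_s:.1f} Mbit/s; "
      f"link is {gig_link / mbit_per_s:.0f}x faster")
# -> 9.6 MB/s = 76.8 Mbit/s; link is 13x faster
```

So under these assumptions the disk, not the network, limits a random-I/O workload.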
 
LVL 63

Expert Comment

by:SysExpert
ID: 22702557
Faster drives are always better.
I would look at 15k RPM drives if available, especially if you need the speed.


I hope this helps !
 
LVL 70

Expert Comment

by:garycase
ID: 22704884
Let's not confuse apples and oranges :-)

Your vendor is correct with regard to sustained transfer speeds.  A 4-drive RAID-5 will easily saturate a GB network connection, so there would be no advantage to faster drives in terms of how long the data will take to transfer.   A single modern high-capacity 7200rpm drive typically has sustained rates in the 80MB/s area -- the RAID array will have close to triple this ... certainly above 200MB/s = 1600 Mbps => well above the GB network's capabilities.   So once a transfer is started from the array, it makes NO difference if the array uses 7200rpm drives or faster.

However ... the access times for higher-speed drives are appreciably better than for a 7200rpm drive.   For example, a 10,000rpm Raptor has an average access time ~ 4.5ms vs. a typical 8ms (or longer) for a 7200rpm drive.   So you can START the transfers quicker.

How much of an advantage this may be depends on several things => if the drives have large buffers and support native command queuing, there's probably no advantage (except for the first command in a string of commands) ... since the "next" seek will overlap with the transfer of the last command's data from the buffer.  It also depends on what kind of applications you're using the drives for -- transaction-oriented applications with a large number of small I/O's per second;  or data-oriented applications where a typical I/O transfers a relatively large amount of data.

But the bottom line is simple:  Higher speed disks will give you an access time advantage -- which will almost always make a system "feel" faster ==> but the data won't FLOW any faster from these disks than it would from a 7200 rpm based array.
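The "feels faster but doesn't flow faster" distinction above can be put into a toy model: time to service one request = access time + data transfer time. (The 8ms/4.5ms access times and the 80MB/s sustained rate are the illustrative figures from this comment, not specs for any particular drive.)

```python
# Toy model: request service time = access time + data transfer time.
# Access times (8 ms vs 4.5 ms) and sustained rate (80 MB/s) are
# illustrative figures, not drive specifications.
def service_ms(access_ms, size_mb, sustained_mb_s):
    return access_ms + 1000.0 * size_mb / sustained_mb_s

# Large streaming read (64 MB): access time is noise.
big_7200 = service_ms(8.0, 64, 80)        # ~808.0 ms
big_10k  = service_ms(4.5, 64, 80)        # ~804.5 ms

# Small random read (4 KB = 0.004 MB): access time dominates.
small_7200 = service_ms(8.0, 0.004, 80)   # ~8.05 ms
small_10k  = service_ms(4.5, 0.004, 80)   # ~4.55 ms
print(big_7200, big_10k, small_7200, small_10k)
```

For the big read the faster drive saves under half a percent; for the small read it nearly halves the service time.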
 
LVL 4

Expert Comment

by:jsbush
ID: 22706382
Just a side note:  Keep in mind that you would NOT want to just replace the faulty drives with higher RPM drives -- you would want to replace all of them; otherwise your array will operate at the speed of the slowest drive.  All or nothing, essentially.

Also, there are 10K SATA drives but many higher RPM drives (10K and 15K) are SAS drives.  Make sure your NAS will support SAS if you go that route - you never mentioned the model so I have to throw that out there.
 
LVL 1

Author Comment

by:butterhook
ID: 22709407
So the speed of the connection into and out of the NAS may be limited by the disk speed - but how about the health of the RAID array?
 
LVL 70

Accepted Solution

by:
garycase earned 500 total points
ID: 22709440
No, the speed of the connection into and out of the NAS is NOT limited by the disk speed ==> it's limited by the GB LAN, which is easily saturated by 7200 rpm disks (so faster disks make no difference in the speed the data is transferred).

The difference is in how quickly the transfers start => higher rpm disks have better access times (as I noted before).

The "health" of the array isn't affected by the speed of the disks.
 
LVL 1

Author Comment

by:butterhook
ID: 22709457
So how could 2 disks fail when they have a 5 day guarantee?
LVL 70

Assisted Solution

by:garycase
garycase earned 500 total points
ID: 22709481
I presume you mean a 5 year guarantee :-)

... Disks fail.   There's no way to predict when that will happen ... the guarantee eases the financial pain (you can get the disks replaced) but does nothing to help with your data (that's why backups are so important).    Be sure you have good airflow around your NAS device -- if the disks aren't getting good ventilation and are running hot, that can accelerate their failure.
 
LVL 55

Expert Comment

by:andyalder
ID: 22709609
I completely disagree with Gary. The sustained transfer rate that you measure with a benchmark tool might come in at 50MB/s, but you're not going to get anywhere near this on a NAS box serving several users, since the I/O takes a more random pattern. Add to that the fact that high-capacity SATA disks go into write/verify mode when they get hot, plus the RAID 5 write penalty of 4 physical I/Os per logical I/O, and the LAN is not the bottleneck -- it's the disks.
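The RAID 5 write penalty is easy to quantify. (The 80 random IOPS per 7200rpm SATA disk and 64 KB per I/O below are assumed figures for illustration, not measurements.)

```python
# RAID 5 small-write penalty: each logical write costs 4 physical I/Os
# (read old data, read old parity, write new data, write new parity).
# 80 random IOPS per 7200 rpm SATA disk is an assumption, not a spec.
disks = 4
iops_per_disk = 80
write_penalty = 4

random_read_iops = disks * iops_per_disk                     # 320 logical reads/s
random_write_iops = disks * iops_per_disk // write_penalty   # 80 logical writes/s

# At 64 KB per write that is only ~5 MB/s of random-write throughput,
# nowhere near filling a gigabit link.
write_mb_s = random_write_iops * 64 / 1000
print(random_read_iops, random_write_iops, write_mb_s)
```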
 
LVL 70

Expert Comment

by:garycase
ID: 22712701
I don't disagree that the # of operations/second is a function of the access speed of the disks in a transaction-oriented access mode with many small I/O's [I noted before that it "... depends on what kind of applications you're using the drives for ..."].   BUT the speed "into and out of the NAS" is the same no matter what the speed of the disks ... since ANY of the reasonable options (7200rpm, 10000rpm, or possibly 15000rpm) will easily saturate a GB LAN.

I'm building a 15TB array to stream video around the house, and my initial tests with only 4 disks easily saturate a GB LAN ==> the added speed when I grow it to 12 disks won't make any difference at all.   These are all 7200 rpm drives (1.5TB Seagates) ... but the performance wouldn't be any different if they were 15,000rpm drives (except for a few ms advantage in starting the transfers).

Bottom Line:  As I noted earlier, the real answer depends on the access mode and on the mix of read/write operations.   The LAN will be saturated with data by ANY disks (7200/10000/15000rpm) whenever data is "flowing" ... but the total performance will depend on just how often that is happening.
The RAID-5 write penalty is "within the box" on a NAS where the RAID controller is internal -- at least up to the point where the buffer is saturated.   Agree that if that happens often (i.e. if the access mode is a lot of transaction-oriented writes), the disk speed would be a notable factor;  but this again is a function of how the box is used and, in this case, also depends on how much of a buffer the controller and disks have.

 
LVL 1

Author Comment

by:butterhook
ID: 22767866
So it shouldn't matter that I've got 7500 rather than 10k disks in there? i.e. this shouldn't be why they failed?

 
LVL 55

Expert Comment

by:andyalder
ID: 22768027
>BUT the speed "into and out of the NAS" is the same no matter what the speed of the disks ... since ANY of the reasonable options (7200rpm, 10000rpm, or possibly 15000rpm) will easily saturate a GB LAN.

No it won't. Not unless you're using it for something like video streaming. In a multi-user environment the I/O profile becomes random and it takes several disks to saturate a 1 Gb LAN.
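A quick way to see why "several disks" are needed: under a purely random workload, count how many disks it takes to fill the link. (Again, the 80 IOPS per 7200rpm disk and 64 KB per I/O are assumed figures, not measurements.)

```python
import math

# Under a purely random multi-user workload, how many 7200 rpm disks
# does it take to fill a 1 Gbit/s link? 80 random IOPS per disk and
# 64 KB per I/O are assumed figures, not measurements.
link_mbit = 1000
per_disk_mbit = 80 * 64 * 8 / 1000   # ~41 Mbit/s of random I/O per disk
disks_needed = math.ceil(link_mbit / per_disk_mbit)
print(f"~{per_disk_mbit:.0f} Mbit/s per disk -> {disks_needed} disks to saturate")
# -> ~41 Mbit/s per disk -> 25 disks to saturate
```

With a sequential streaming workload the same disks manage ~80 MB/s each, which is why a 4-disk array can still fill the link in that case.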
 
LVL 1

Author Comment

by:butterhook
ID: 22777084
When I say they failed I mean that 2 disks became physically corrupted in the RAID array, rather than having any data loss for specific file transfers only. Could this have been because of disk speed?
 
LVL 70

Assisted Solution

by:garycase
garycase earned 500 total points
ID: 22777452
"... Could this have been because of disk speed? " ==>  No.  Disks at any rpm can and do fail.   Corruption could have been due to factors other than the disks, however --> memory problems; a controller failure; etc.   But it's likely that the disks simply failed.  

One thing to consider if you're getting excessive indications of failure in an array:   Use enterprise-class drives, which are optimized for RAID array timings.   All modern drives will occasionally need to do recalibrations, which take a few milliseconds --> if these cause too long a delay a RAID controller may mark the drive as failed.   Enterprise class drives are designed to keep those delays small so they won't cause inadvertent (and incorrect) drive failures in RAID arrays.   Virtually all 10,000rpm or faster drives are enterprise class; but with 7200 rpm drives the drive manufacturers sell both desktop and enterprise models.   The enterprise models also have a lower bit error rate, so are more reliable drives overall [The price, of course, is commensurately higher :-) ].
