Server Drive Speeds, 7.2k to 15k makes no difference?

Our main file server has been running on 7.2k SAS drives for the last 2 years. We have been close to running out of space, so we bought 4 new 15k drives and built a 2nd array on the same system (HP DL185 G5, 12 slots) to move one DFSRoot over to the faster drives. When testing, however, I see no improvement in seek/save/copy times between the old 7.2k drives and the new 15k drives.

Is there something I am missing? When testing other servers with 10k drives there is a noticeable difference. Do I need to move the OS over to the faster drives to see a difference?

It's the controller that's limiting the I/O, IMHO.
Aaron TomoskyDirector of Solutions ConsultingCommented:
15k drives have roughly double the theoretical sustained read/write transfer rate. Random access isn't affected as much, since that's mostly the heads moving. The limiting factor is now going to be your controller and where the data is coming from/going to. I have RAID 5 setups with 5400 rpm drives that max out the PCI Express 1.0 x1 slot (250 MB/s) that the RAID card is plugged into.
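Aaron's controller point is easy to sanity-check with back-of-envelope numbers. A minimal sketch, assuming a typical per-drive sequential rate of ~60 MB/s for a 5400 rpm disk (my assumption, not a figure measured on this hardware):

```python
# Rough check of when the RAID card's slot, not the disks, becomes the limit.
# The per-drive rate is an assumed typical figure, not measured on this box.
PCIE_1_0_X1_MBPS = 250          # usable bandwidth of a PCI Express 1.0 x1 slot
drives = 5
per_drive_seq_mbps = 60         # assumed sequential MB/s for a 5400 rpm drive

aggregate = drives * per_drive_seq_mbps
print(f"disks can offer ~{aggregate} MB/s vs {PCIE_1_0_X1_MBPS} MB/s slot limit")
# The disks together offer more than the slot can carry, so the controller's
# slot caps throughput and faster spindles change nothing.
```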
canuseeitAuthor Commented:
Hmm... it uses the P400 array controller; surely it shouldn't limit the speeds this much?
canuseeitAuthor Commented:
Sorry, I'm still not 100% on these things; maybe it's my understanding. All of our servers are HP ProLiants, but this is our newest one. I would think the 15k drives would transfer/save faster than the 7.2k ones... if not, it's kind of wasted money.
Aaron TomoskyDirector of Solutions ConsultingCommented:
Saving from where? Memory is the only thing that will supply data faster than a 7.2k rpm RAID array. Or are you copying from the old array to the new one?
canuseeitAuthor Commented:
No, I was copying from another computer on the network to the old array and then to the new array and checking the times it took to transfer (upload and download). There seems to be no difference.
Dr. KlahnPrincipal Software EngineerCommented:
You stated "... to move one DFSRoot over to the faster drives."

Are you testing drive throughput across the network?  If so, it's very unlikely that any improvement will be seen.  The network is the bottleneck in that case.
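Dr. Klahn's bottleneck argument can be quantified. A sketch with assumed drive rates (the 80 and 150 MB/s sequential figures are illustrative guesses, not numbers from this thread):

```python
# A file copy runs at the slowest link in the chain. On Fast Ethernet the
# wire, not either drive array, sets the pace.
wire_mbps = 100_000_000 / 8 / 1_000_000   # 100 Mbit/s link = 12.5 MB/s raw
old_array_mbps = 80    # assumed sequential rate of the 7.2k array
new_array_mbps = 150   # assumed sequential rate of the 15k array

copy_old = min(wire_mbps, old_array_mbps)
copy_new = min(wire_mbps, new_array_mbps)
print(copy_old, copy_new)  # both capped at 12.5 -> identical copy times
```

SMB and TCP overhead typically eat some of that 12.5 MB/s as well, which only widens the gap between wire speed and either array.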
Dr. KlahnPrincipal Software EngineerCommented:
Sorry, canuseeit, didn't see your last posting.  Yes, you're seeing a network bottleneck and not the drive performance.  There's an enormous amount of network overhead involved in file transfers.

canuseeitAuthor Commented:
Hmm, that's what is weird then.

Our old file server, which has less RAM and a slower CPU but 10k drives, has always transferred about 25% faster. Our only clue was the drive speed.
Aaron TomoskyDirector of Solutions ConsultingCommented:
It probably has 2 NICs teamed.
canuseeitAuthor Commented:
Actually, our old one has just the one NIC in use; our new one has 2 NICs teamed. That was the first thing we checked when we noticed the speed difference.

My only guess is that since the OS is on the slow drives, and it controls what data goes where, that is slowing things down. We may have to move the OS over to the new faster drives and see if that speeds it up.
Aaron TomoskyDirector of Solutions ConsultingCommented:
I'd start with the cables going to the switch, and the switches themselves, before worrying about the speed of the drive the OS is on affecting network performance.
Copying files isn't a good measure of drive performance; too many other things get in the way. Have you compared the PerfMon statistics for the different arrays (PhysicalDisk → Avg. Disk sec/Transfer) to see how long an I/O is actually taking? That should give a true reading of drive performance.

I suppose the other question is: are the RAID arrays you are comparing of the same type, and do they have the same number of disks?
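As a sketch of how those PerfMon readings might be judged (the millisecond thresholds below are common rules of thumb I'm assuming, not values from this thread):

```python
# Classify a PerfMon "Avg. Disk sec/Transfer" sample. Per-I/O latency is what
# actually distinguishes one array from another, unlike a network file copy.
def rate_disk_latency(avg_sec_per_io: float) -> str:
    ms = avg_sec_per_io * 1000
    if ms <= 10:
        return "healthy"
    if ms <= 20:
        return "borderline"
    return "slow - investigate the array"

print(rate_disk_latency(0.004))   # a 4 ms average reads as healthy
print(rate_disk_latency(0.025))   # a 25 ms average flags the array as slow
```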
andyalderSaggar maker's framemakerCommented:
If copying from one computer to another doesn't change with the faster drives, then it's probably the computer you're copying from/to that's the problem, rather than the fileserver.

Although it's not a backup server, you could always download HP's Library and Tape Tools and use that to do a performance test on the disk subsystem; of course, you could also use any other benchmark tool you choose.

The 15K disks should be about twice as fast at seeks as well as having roughly half the rotational latency, but there are other things that matter. As already mentioned, you haven't told us the RAID levels involved or the number of disks in each array.
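The rotational-latency half of that point is simple physics and can be worked out directly (no thread-specific assumptions here):

```python
# Average rotational latency: on a random read the platter must, on average,
# spin half a revolution before the target sector arrives under the head.
def avg_rotational_latency_ms(rpm: int) -> float:
    ms_per_revolution = 60_000 / rpm   # 60,000 ms per minute / revolutions
    return ms_per_revolution / 2       # average wait is half a revolution

for rpm in (7200, 10_000, 15_000):
    print(f"{rpm:>6} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms")
# A 7.2k drive waits ~4.17 ms per random I/O on average; a 15k drive ~2.00 ms.
# That only shows up in random-I/O workloads, never in a 100 Mbit network copy.
```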
Yes, and what is the NIC in the server? The problem could be in the NIC and nothing to do with the RAID array; the other server may be running a newer BIOS, which could mean its NIC is working better.
canuseeitAuthor Commented:

The arrays on all servers involved are 4-drive RAID 5. The server which is quicker is an older DL380 G2 with 4 x 72.8GB 10k drives, running Windows 2000 Server with a Smart Array 5i controller. It has only one working NIC (the 2 default ones no longer work), a Netgear 10/100 FA311. The new (but slower) one is a DL185 G5 running Windows Server 2008, with teamed Broadcom NetXtreme gigabit NICs (though our switches are old and only 10/100). It has 4 x 720GB 7.2k drives, and the new ones are 4 x 450GB 15k drives. Both servers plug into the same switch of our Cisco Catalyst 6500... in fact, they are 2 ports apart.

I've tried switching ports to see if that affected anything and it was a no go.

I'll try updating the BIOS and firmware (if there is an update) later tonight and see if that makes a difference.

canuseeitAuthor Commented:
g4ugm: are there any tools I can get to measure this for us?

The issue was that the old computer was our old file server and was much faster at saving data. We were trying to bridge that gap, and the only difference we could find was the drive speed. We thought this would fix it, but it obviously hasn't. Our only measure is how fast files open/save and the transfer speed.
Aaron TomoskyDirector of Solutions ConsultingCommented:
Heh, go buy a gigabit switch. That will do so much more for your network than 15k drives.
canuseeitAuthor Commented:
aarontomosky: working on that... but that's a much more expensive purchase. We needed to increase capacity, so we needed to build out another array either way. A gigabit switch is in the plans as well, but probably not for another 2-3 months.
andyalderSaggar maker's framemakerCommented:
As a matter of interest, how did you realise you only had a 100Mb switch? That's obviously the problem, but I haven't seen a production server connected to a 10/100 port for years.
Aaron TomoskyDirector of Solutions ConsultingCommented:
Without upgrading your core switch, you could get a Netgear gigabit smart switch for under $300 and plug the servers into that. Then link-aggregate a few 100Mbit ports back to the Cisco where the servers used to be plugged in. The effect will be much faster server-to-server speeds, with workstation-to-server speeds unchanged.
Aaron TomoskyDirector of Solutions ConsultingCommented:
As I stated in my comments, the user didn't realize that file sharing over a network would be unaffected by faster drives, as the network is the bottleneck.
Use HD_Speed or other tools to test the disk read/write performance.
The customer has had lots of advice. Several folks posted the correct answer ("it's the network, not the drives") or suggested testing locally rather than across the network. I would like to see a split of points among them.
Lots of good stuff in there. Several folks said it was the slow network; give them some points.
ModalotEE ModeratorCommented:
Since we have received no precise recommendations from the participants on how to split points, after two (!) calls for them, I will decide how to distribute points as I see fit.

In future, please post your recommendations in the form of
   http:#a«comment no.»
with a points recommendation, if applicable.

Community Support Moderator