Comparison of data transfer rates between SATA and SCSI over a network

crcsupport
I'm planning to expand the disk storage in one of our servers. I'm thinking of just attaching a new mirrored disk set so that the server can have more disk space. I ran a small test to compare the data transfer rates of the two different disk setups below. The result shows not much difference between them, but the transfer rate over the network was much lower than I expected.

1.
HP SCSI Ultra 320 (2560 Mbit/s)
HP Smart Array 532 (1028 Mbit/s)
Configured as RAID 5.

2.
WD SATA 3.0 (3 Gbit/s)
Norcor SATA controller (3 Gbit/s)
PCI-X slot (~1064 Mbit/s)

Both run over a 100 Mbit/s network.

Test: transfer 170 MB of data over the network to a host connected via 100 Mbit/s Ethernet.

The two setups were close: 55 seconds for one and 58 seconds for the other, which works out to around 23-25 Mbit/s in both cases.

As the test shows, the data transfer rates were not much different. The controllers between the disk sets and the system board are roughly comparable (Smart Array at 1028 Mbit/s vs. PCI-X at ~1064 Mbit/s), so this is understandable. However, the transfer rate over the network was only around 23 Mbit/s on 100 Mbit/s Ethernet.
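
For reference, here is the arithmetic behind those timings as a quick Python check (this assumes the 170MB is decimal megabytes; if it is really 170 MiB the figures come out about 5% higher):

# Sanity-check the observed throughput: 170 MB transferred in 55-58 s.
size_bits = 170 * 1_000_000 * 8      # 170 MB expressed in bits

for seconds in (55, 58):
    mbit_per_s = size_bits / seconds / 1_000_000
    print(f"{seconds} s -> {mbit_per_s:.1f} Mbit/s")

# Output:
# 55 s -> 24.7 Mbit/s
# 58 s -> 23.4 Mbit/s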

Here are my questions.

1. Can anyone explain why the data transfer over the network drops to around 23 Mbit/s?

2. Is 23 Mbit/s a normal real-world speed for a 100 Mbit/s network? What is your experience?

3. Does this mean that, to improve overall data transfer from server to end user, network speed is more important than disk block-level access speed?

Thanks very much in advance.



Disk speed will depend on how many transactions etc. are happening locally on the server.
Also note that RAID 10 is considerably faster at writing data - RAID 5 is slow to write because every write also involves updating parity.

Network speed will depend on a number of factors, such as other network traffic and the fact that the data is broken down into packets with additional overheads; you also need to take into account the speed at which the data can be written on the receiving system.
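
To put a number on the packet overhead, here is a rough best-case calculation for a single TCP stream on 100 Mbit/s Ethernet (this assumes the standard 1500-byte MTU and ignores retransmissions and ACK traffic):

# Best-case efficiency of TCP over Ethernet with a 1500-byte MTU.
# Each frame carries 1500 B of IP data, of which 20 B is the IP header
# and 20 B the TCP header; on the wire the frame also costs a 14 B
# Ethernet header, 4 B FCS, 8 B preamble and a 12 B inter-frame gap.
mtu = 1500
payload = mtu - 20 - 20                  # TCP payload per frame
wire = mtu + 14 + 4 + 8 + 12             # bytes occupying the wire

efficiency = payload / wire
print(f"efficiency: {efficiency:.1%}")               # ~94.9%
print(f"ceiling:    {100 * efficiency:.1f} Mbit/s")  # ~94.9 Mbit/s

So roughly 95 Mbit/s is the protocol ceiling on this link; getting only 23 Mbit/s points at something other than raw link capacity, such as the disks, small TCP windows, or competing traffic.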
1. Can anyone explain why the data transfer over the network drops to around 23 Mbit/s?
Yes - it is most likely down to the way your disks are configured and the number of disks in the RAID sets. Can you provide more information about this, please?

2. Is 23 Mbit/s a normal real-world speed for a 100 Mbit/s network? What is your experience?
It is unlikely that the network itself is the problem - unless you have cheap and nasty NICs and cheap and nasty switches that simply can't cope with the workload.

3. Does this mean that, to improve overall data transfer from server to end user, network speed is more important than disk block-level access speed?
You'll see this question (or a variation of it) asked a lot at EE - "SATA II is 3 Gb/s, so why can't I get 3 Gb/s?" - that sort of thing. Disk performance is *always* slower than the performance of the connections to the disk. You get good disk performance by aggregating multiple drives into a RAID set, which also protects the data. If you want to improve overall performance, the first place to look is the disk arrays.
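
To put rough numbers on the RAID write penalty mentioned earlier, here is a back-of-the-envelope sketch. The 4-disk count and the 150 IOPS per disk are assumed figures for illustration, not measurements of this hardware:

# Back-of-the-envelope small-write IOPS for RAID 10 vs RAID 5.
# A RAID 5 small write costs 4 back-end I/Os (read data, read parity,
# write data, write parity); a RAID 10 write costs 2 (one per mirror).
disks = 4
per_disk_iops = 150                  # assumed per-disk figure

raid10_write_iops = disks * per_disk_iops / 2
raid5_write_iops = disks * per_disk_iops / 4

print(f"RAID 10: ~{raid10_write_iops:.0f} write IOPS")   # ~300
print(f"RAID 5:  ~{raid5_write_iops:.0f} write IOPS")    # ~150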
Top Expert 2014
Commented:
In both cases what you're really testing is probably the write speed of the disks in the host you're transferring the data to.
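
One way to check that is to time a plain local write of a similar amount of data on the receiving host, which takes the network out of the picture entirely. A minimal sketch (the file name is just an example; run it on the destination host):

# Time a local 170 MB write to estimate the destination's disk
# write speed independently of the network.
import os
import time

size = 170 * 1_000_000
chunk = b"\0" * 1_000_000            # 1 MB write buffer

start = time.time()
with open("testfile.bin", "wb") as f:     # example path
    for _ in range(size // len(chunk)):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())             # force data to disk, not just cache
elapsed = time.time() - start

print(f"{size * 8 / elapsed / 1_000_000:.1f} Mbit/s to local disk")
os.remove("testfile.bin")

If this local write is also slow, the bottleneck is the receiving disk rather than the network.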

Author

Commented:
I was a little surprised that most of you here agree it's disk performance, not the network or anything in between the server and the host. After reading the posts, I'll pay more attention to disk performance than to the network when tracking down the slowness.


Thanks again.
Thanks! Glad I could help.
