External Drive Very Slow

Hello experts:
   We needed more space on our Windows 2003 Standard server, so we added an external drive array (MicroNet) with four 500 GB drives that connects via a single eSATA cable.  We purchased a PCI eSATA card that does RAID (Sil 3124r5) and configured two mirrored arrays (RAID1).  We moved our Exchange database onto one of the RAID1 arrays, which seemed to work OK at first but then would occasionally hang or slow down.  So we thought we would move all the graphics over and bring the Exchange database back.  This was worse, especially for the Macs: it took several minutes just to get a folder listing.  I ran a disk benchmark on the drives and found that the access time was much slower for the external drive, but the read and cached speeds weren't out of line.  Any thoughts on why this external is slow?  I thought by using eSATA we wouldn't really notice a difference.  Is it our RAID card, just the single eSATA cable, or do we need SCSI?

MrMintanet Commented:

I'd read this and make sure you have followed all the simple procedures.  I would also call the manufacturer and talk to them about this.  Beyond that, I would consider getting rid of this thing and getting a decent NAS that handles everything internally.  It really sounds to me like you need to rethink what is going on inside that server of yours; perhaps you should get better internal drives with higher capacity and better cache.  Some consumer reviews may also be worth reading.  I will keep monitoring this and answer any questions you may have, but for now I am leaning toward you returning that array and rethinking your strategy.  I'm not giving up; I'm simply ready to see the ugly array get out of this equation :)   Good luck, brotha!
Mohammed HamadaSenior IT ConsultantCommented:
There's no big speed difference between SCSI and eSATA; SCSI is only about 20 MB/s faster.
Try a different cable, and disable your antivirus real-time monitoring for a couple of minutes, then test whether the transfer speed is any better.

Why did you opt to use an external hard drive rather than an internal one?  Another suggestion would be to read consumer reviews on hardware components that perform tasks like housing your Exchange database.

Honestly, it's somewhat hard to find anything useful on either product you purchased by using Google.  I see a great deal of vendor websites for the enclosure, and I see next to nothing regarding the controller.  I'm not sure who you use as a parts vendor, but I would highly suggest hitting up NewEgg.com to search for your products.  If anything, just use their review system to help you decide, and purchase the product elsewhere if you are committed to another vendor.

Here's what I would have done if I was in your situation:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816102062 - SATAII controller (4 ports) - $59.99
http://www.newegg.com/Product/Product.aspx?Item=N82E16822148373 - 1TB HD x 4 - $359.96

Total Cost:  $419.95
Perks:  Double the size of your configuration, better review ratings, less likely to fail due to excessive heat buildup in an enclosure, costs only half of what your external array costs, far easier to run diagnostics if necessary
Cons:  Server case may not fit this many drives, configuration not set for scalability, and your server's PSU may be too weak, as well.

So, my answer:  Ditch an external hard disk to host this much data that is constantly accessed, and more importantly, read the manual when you get the new products.

The problem is the PCI card; you're maxed out at 133 MB/s, shared with anything else on the PCI bus.
What company made this PCI card?  Most no-name PCI RAID cards are slow, and they are NOT true hardware RAID: they use your CPU to do the processing.  Not that RAID 1 has a lot of overhead, but I am going to say it's likely a combination of being a PCI card and being a cheap RAID card = poor performance.
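To put some rough numbers behind that (a back-of-the-envelope sketch, not from anything in this thread -- real throughput will be lower still due to protocol overhead and contention):

```python
# Classic 32-bit / 33 MHz PCI peaks at bus width times clock, and every
# card on the bus shares that one ceiling. Illustrative arithmetic only.

def pci_peak_mb_s(bus_width_bits=32, clock_mhz=33):
    """Theoretical peak bandwidth of a legacy PCI bus, in MB/s."""
    return bus_width_bits / 8 * clock_mhz

print(pci_peak_mb_s())        # 132.0 MB/s for plain 32-bit/33 MHz PCI
print(pci_peak_mb_s(64, 66))  # 528.0 MB/s for 64-bit/66 MHz PCI-X
```

Two mirrored pairs of modern drives plus anything else on the bus can easily want more than that first number, which is why a PCI-X slot (second number) would give the card far more headroom.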
"There's no big speed difference between SCSI and the eSATA , The SCSI is only 20 mbs faster.... "

The difference comes in on seek times and in multi-user environments, where SCSI would demolish your standard SATA drives once you have more than one person accessing files.

I completely agree with you.  "No difference between SCSI and SATA"?!?!

SATA:  7,200 - 10,000 RPM (10k drives produced by only one company, targeting gamers)
SCSI:  15,000 RPM (targeting servers)
SCSI - 320 MB/sec  <- Note the caps
SATA - 300 MB/sec

SCSI - It's what the pros used over a decade ago... and they're still spinning!
SATA - You can buy 1 TB for less than $100 = SMART FAIL & DOA
jhuntiiAuthor Commented:
The reason we are using external drives is that our internal bays are full.  We have a Dell 2600 with 6 bays - 4 x 72 GB SCSI in RAID5, plus 2 x 246 GB SCSI in RAID1.  I guess we could increase the size of the 4 drives, but they hold drives C:, D:, and F: (I didn't set this up, I inherited it...), so we'd have to reinstall the OS, apps, etc., plus downtime.  The enclosure is nice, has a fan, etc., but only one eSATA cable between the enclosure and the controller to service all drives.  I think the card was a little cheap.  The server has no PCI slots open - only PCI-X slots, so we used one of those.

I believe Silicon Image made the controller card.  It came with very little documentation.  The enclosure originally came with a PCIe card, but this server does not have any PCIe slots, so we bought this one from NewEgg.  If we're maxing out the 133 MB/s on the bus, then perhaps a PCI-X card would help.  If we have to dump everything and go with SCSI drives, that will be a lot more expensive with less space, and a much harder sell.  This server has to last for two more years, they tell me.  :)
Is the external enclosure of 4 disks doing RAID itself?
Poor ventilation and heat buildup can slow down drives, and thus whole arrays.  If you power down the external drive for twenty minutes, then power up and test immediately, do you measure different performance on startup?
SCSI drives are expensive.  I'd try a good SATA RAID controller first; I like Adaptec.

Is the external drive connected to the RAID controller too?  Can you connect it directly to the motherboard (to take some load off that card)?

jhuntiiAuthor Commented:
  No, the external array doesn't really have any RAID capabilities.  It's a box from MicroNet that basically houses 4 SATA drives.  It has its own power supply and ventilation fan.  It has one eSATA connection on the back of the box that is used to connect to a RAID controller card.  The RAID controller card is from Addonics and is a 4-port eSATA controller card with RAID.  It is a dedicated card controlling only the external MicroNet box (via the single eSATA cable).  Even though it only has one cable, it sees all four drives.  I set up two RAID1 arrays.  It doesn't seem to matter which of those arrays I use; the performance seems to slow or stop every few minutes.  Performance is much worse under a load.  If users come in early, performance seems just fine.  Once all 25 users are in, Exchange activity hangs every few minutes.  I also get occasional errors in the event log (event ID 509 and others) about a request from the Exchange database that succeeded, but took a very long time - like a minute and a half.
   I think the card is software-based.  There aren't very many chips on the thing, and it cost less than $100 if I remember correctly.  I'm considering a higher-quality card from LSI, Promise, or Adaptec if they have one.
"so we'd have to reinstall the OS, Apps, etc., plus down time"

You could just image the main drive with something like Acronis Workstation :), IF it came down to having to do that.

SCSI is expensive, and so is SAS, but there is a reason for that: their MTBF is greater than SATA drives'.  I know Seagate and WD have enterprise-level SATA drives, and those also perform much better.

Buying SCSI now is not suggested IF you're starting new; SAS is the way to go.  But if you already have the SCSI equipment, there's no point in buying new toys if they are not needed.

All of Adaptec's low-end SATA cards are pretty much garbage once you get beyond RAID 0 and 1, and even then most of their entry-level cards offload the work to the CPU.  They also use old chips, limiting performance significantly.  I went through about 3 Adaptec SATA RAID cards last year because they would all peak out just before 100 MB/s, no matter what hard drives or RAID configuration I used.  I finally bit the bullet and got a nice 3ware 9690, and now 6 drives in RAID 5 fly!

To ask: if new items had to be purchased, what would a possible budget be?  I think we can all suggest what should be done, but in the end, management has the final say as to whether any money can be spent at all on new, better-performing parts.
Definitely sounds like a "software" RAID card if it cost less than $100; it is not until you get into the $300+ range that cards become true hardware RAID cards.  Areca has a good card in the $350 range that people seem to love!  LSI has always been well known, as has 3ware.  Adaptec is hit and miss: their higher-end cards just kick butt, while most things below that are not worth the money.
jhuntiiAuthor Commented:
Well, I hope the problem is with the card, but could the design of the connection from the box to the server be a problem?  It only has one eSATA cable that connects all four drives (or RAID arrays) to the controller card.  I hadn't seen that before I bought this external box.  Could I be maxing out the throughput of the cable?  If so, I may have to drill some holes and run the cables direct... :|
An eSATA cable can handle up to 3 Gb/s of bandwidth, so the cable is not likely the issue.  It is possible, though, that the chipset used in the external box is a really cheap one, not giving you the full performance you should have and causing the timeout issues.  A lot of external cases I have looked at over the years use no-name cheap parts.

Is it possible to connect the hard drives in the external enclosure directly to the RAID card, to see how they perform?
The cable itself could be bad, though.  The throughput is 3 Gb/s, not GB, just so you don't get confused, jhuntii.
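To make those units concrete (my own quick sketch, not from any source in this thread): SATA's "3 Gb/s" is gigabits on the wire, and 8b/10b encoding spends 10 line bits per data byte, so the usable payload rate is roughly 300 MB/s.

```python
# Units sanity check: 3 Gb/s is a line rate in BITS per second, and SATA's
# 8b/10b encoding carries 1 data byte per 10 line bits.
line_rate_bps = 3_000_000_000                 # "3 Gb/s" eSATA link
payload_bytes_per_sec = line_rate_bps // 10   # 8b/10b: 10 bits per byte

print(payload_bytes_per_sec // 1_000_000)     # ~300 MB/s usable, not 3 GB/s
```

So a single eSATA link still has plenty of headroom for two spinning drives' worth of sequential throughput, which again points away from the cable.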

Have you run diagnostics on the drives and checked for physical errors?  And you've yet to list the actual speeds your drive is transferring at.  What kind of speeds are you getting?

>>No, the external array doesn't really have any RAID capabilities.
If the drive has one eSATA cable coming out, the enclosure is using some sort of controller internally.

Which one do you have?  Please provide a link once you find it:

We need more information on this before you go any further.

>> If we're maxing the 133mb on the bus, then perhaps a PCI-X card would help.
Did you mean 133MB/s?  If so, there is more to be said.  Please tell us the model.
lol, my bad! :) Yes, the cable could be bad (this is why I shouldn't check EE early in the morning; my brain isn't quite awake yet!)
jhuntiiAuthor Commented:
I purchased the SR42000E http://www.micronet.com/products/sr4.htm - though I should have purchased the X version rather than the E, since all the open ports are PCI-X.  So I called the company to see if I could use a different card and they said I could (I forgot that I had done that), so I ordered a similar card from Addonics http://www.addonics.com/products/host_controller/ads3gx4r5-e.asp .  We bought a brand-new eSATA cable and tried that, but no change could be seen.  Exchange still makes users wait, and I think we're still getting the Event 509 - waiting on disk access.  I'm wondering if it's the "mini-controller" card inside the MicroNet box.  Maybe replacing the RAID card will not help anything...  :||

I did a speed test with Roadkil's utility.  Access to the physical drive: access time was 66.5 ms, max read speed 75600 KB/sec, cached speed 144.99 KB/sec.  It reads file sizes from 0.5k, doubling in size up to 1024k.  Linear reads started at 1530 KB/sec, then 2, 20, 7560, 17, 32, 138, 458, 238, 654, 1930, 5350.  Then a random read: 8, 3, 6, 9, 83, 85, 66, 123, 370, 480, 807, 4200.
I then tested drive H:, which is the same drive:
Access time was 98.10 ms, max read 11160 KB/sec, cached speed 5.55 MB/sec.
Again starting with 0.5k, 1k, 2k, 4k, etc., linear read was 497, 3, 1450, 689, 3060, 44, 164, 443, 491, 795, 11160, 2560.  Random read was 5 KB/sec, then 6, 4, 10, 24, 94, 57, 184, 424, 543, 1140, 2040.

I don't know why they're not more consistent as the file size increases.  Access time on drive H: is half again what the internal drives show.  It's the apparent "hanging" that I'm trying to solve.  I don't know why it sometimes reads fine and other times just seems to hang....
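Since the stalls are intermittent, it may help to log exactly when they happen rather than rely on one-shot benchmarks.  A minimal probe like this (my own sketch; the path, block size, and thresholds are placeholders, not from any tool mentioned here) times a small read every few seconds and records any read that stalls, so the timestamps can be lined up against the Event 509 entries:

```python
# Periodically time a small read from the suspect drive and collect any
# reads slower than a threshold, to correlate stalls with event-log errors.
import time

def probe(path, block_size=64 * 1024, interval_s=5, threshold_s=1.0, iterations=12):
    """Time repeated small reads of `path`; return (iteration, seconds)
    pairs for every read that exceeded threshold_s."""
    slow = []
    for i in range(iterations):
        start = time.perf_counter()
        with open(path, "rb") as f:
            f.read(block_size)                # one small read per interval
        elapsed = time.perf_counter() - start
        if elapsed > threshold_s:
            slow.append((i, elapsed))         # record the stall
        time.sleep(interval_s)
    return slow
```

Run over an hour against a file on the external array (e.g. `probe(r"H:\testfile.bin", interval_s=5, iterations=720)`), an empty result means no stalls; clustered entries would confirm the periodic hangs the users are seeing.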
Access time of 66.5 ms / 98.10 ms?

That is pretty darn bad!  And the other numbers don't seem very impressive either.

Are you able to try the drives connected directly to the RAID controller, bypassing the external enclosure?  (You just lay the drives out beside the case and plug them into the controller.)
jhuntiiAuthor Commented:
Yup, the next plan of action is to connect the drives directly to the controller.  I'm also going to contact the manufacturer about this - maybe there's a firmware update, or maybe it just isn't able to do the job...  I'll let you know how it goes.