SAN technologies: traditional vs virtual

We are preparing to bring a SAN into our environment. We are currently debating traditional vs. virtual technology. We have narrowed the field to the Hitachi AMS1000 (possibly the 2300 now that it has been released) vs. the HP EVA 8100.

I would like to hear from folks who have used both systems. Why did you choose that technology, and why have you either chosen to stay with it or dump it (or wish you could)?

One of the main areas we are struggling with is understanding the effect of disparate systems sharing a large disk group on the EVA. Is there really a noticeable performance penalty because of this, or is it a wash against the benefits of using all of your spindles?

From an administrative standpoint, we are all sold on the ease of management offered by the EVA.

Thanks for your help!

EVA -> management simplicity, automatic levelling when any disk is added to a disk group, etc.
Perhaps with Hitachi you can get better performance results, but it takes a skilled person to keep tuning that array for an ever-changing environment.
Duncan Meyers commented:
>One of the main areas we are struggling with is understanding the effect of disparate systems sharing a large disk group on the EVA. Is there really a noticeable performance penalty because of this, or is it a wash against the benefits of using all of your spindles?

Depends on what you want to do. As a rule, disparate loads on the same physical discs are a recipe for pain. Having said that, a virtualised data centre presents a highly random load, so as long as you have enough physical discs to absorb it, you should be OK. The trick is to size the array for the performance you need, not the disc space. If you get the performance right, the space usually takes care of itself. Both arrays work well, although I have to express my preference for HDS kit over HP.
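To make "size for performance, not space" concrete, here is a back-of-envelope sketch. The per-disc IOPS figures and RAID write penalties are common rules of thumb, not vendor specifications, and the workload numbers are made up for illustration:

```python
# Back-of-envelope array sizing. Per-disc IOPS figures and RAID write
# penalties are common rules of thumb, not vendor specifications.
import math

def spindles_needed(host_iops, read_fraction, raid_write_penalty, disk_iops):
    """Discs required to absorb a host workload at the back end.

    Writes hit the back end multiple times depending on RAID level
    (RAID 5 penalty ~4, RAID 1/10 penalty ~2), so back-end IOPS
    exceed what the hosts generate.
    """
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    backend_iops = reads + writes * raid_write_penalty
    # Round up: you can't buy a fraction of a disc.
    return math.ceil(backend_iops / disk_iops)

# 5000 host IOPS, 70% reads, RAID 5, 15k FC discs at ~180 IOPS each:
print(spindles_needed(5000, 0.7, 4, 180))  # → 53
```

Run the numbers first, then check that the resulting spindle count also gives you the capacity you need; if it does, the space has taken care of itself.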

As far as management goes, all SANs are pretty simple to drive once you're used to the management console. The difficulty or ease of management is purely sales FUD. If you have multiple SANs, a product like Symantec's Storage Foundation and CommandCentral starts to make sense. Speaking of which, Symantec's Storage Foundation path management software is free for servers with two or fewer processors and three or fewer attached LUNs. It may be worth your while evaluating Storage Foundation as an alternative to HP's or HDS's path management software.

Finally - don't be swayed by a free offer of HP Data Protector backup software - it sucks!

weatherman67 (Author) commented:
MeyersD, would you please give some detail on why you prefer the HDS kit over HP?  I am very interested in your opinion on this and the reasons behind it.  Thanks.

There's nothing to stop you from defining lots of disk groups on the EVA and dedicating one to each job/server if you want to, so the EVA can be used in non-virtual mode as well.

I would be interested in why meyersd prefers one over the other as well; maybe it's down to what kit you know? Certainly that's why I would go for the EVA.
Duncan Meyers commented:
Cache partitioning - HDS kit allows you to allocate write cache to LUNs so, unlike other arrays, you can stop slower drives (SATA/ATA) hogging write cache and affecting overall performance of the array.

As to the rest - it's all spinning brown, innit? One array is much like another. Each vendor has a unique benefit - three that really spring to mind are NetApp's on-array de-duplication, EMC's Quality of Service manager and HDS's cache partitioning.
weatherman67 (Author) commented:
Meyersd, do you really use the cache partitioning extensively? The reason I ask is that I have spoken to a few HDS users who say they just go with the default cache config. You are the first person, aside from Hitachi sales, who says they use it. I agree with you that it sounds compelling; I'm just wondering how practical it is in day-to-day use. Thanks for the answer, by the way. I do find it quite helpful.
Duncan Meyers commented:
On a well-configured array with plenty of Fibre Channel discs, it is of little value, as data will be written out to disc quickly. Things will change as your environment grows: if you specify plenty of disc now, you'll allocate it to all sorts of apps, and you may find yourself putting some load on SATA disc because it's sitting at low utilisation. The SATA discs will consume more write cache because they are roughly half to two-thirds slower than FC drives, and that will affect overall performance. The ability to limit the amount of write cache that SATA LUNs consume is something I wish all manufacturers would provide.

As a real-world example, I'm working on some performance analysis files from an EMC CLARiiON array where the customer (a large organisation) has used SATA disc for production VMware. The highly random nature of VMware VMFS means that the CLARiiON can't do its funky write optimisations, so write cache is filling up. Once write cache fills, the array stops accepting host I/O for a few milliseconds until it's made some space, but those few milliseconds affect all attached servers, not just the hosts causing write cache to fill. They fell into the trap of using any available space, whether or not it was suitable. The fix is simple: they need more Fibre Channel disc. If the EMC array had cache partitioning, they could stop the rogue hosts affecting the performance of important servers. As it is, I've advised them to turn off write cache on the SATA discs in the short term (which will hurt the VMware virtual machines on those discs), and to buy more discs.
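The write-cache exhaustion described above can be sketched with a toy model. The cache size and throughput numbers are hypothetical, not taken from any real array:

```python
# Toy model of write-cache exhaustion. Sizes and rates are hypothetical,
# not taken from any real array's internals.
def seconds_until_cache_full(cache_mb, incoming_mb_s, destage_mb_s):
    """Time until write cache fills, or None if the back end keeps up."""
    net_fill = incoming_mb_s - destage_mb_s
    if net_fill <= 0:
        return None  # destage rate covers the incoming writes
    return cache_mb / net_fill

# 4 GB of write cache, hosts writing 120 MB/s, and a slow SATA back end
# destaging only 80 MB/s:
print(seconds_until_cache_full(4096, 120, 80))  # → 102.4
```

Once that window closes, the array pauses host I/O until destaging frees space, and the pause hits every attached server: exactly the failure mode described above.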

By the way - beware of extravagant performance claims for SATA discs and SATA arrays. SATA discs will run a production load, no sweat, but you need so many of them to absorb the number of writes generated that you may as well have purchased the more expensive SAS or FC discs in the first place.
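A quick illustration of why the SATA spindle count balloons on a write-heavy load. The per-disc IOPS figures are rules of thumb, not measurements:

```python
# Illustrative spindle-count comparison for a write-heavy load.
# Per-disc IOPS figures are rules of thumb, not measurements.
import math

def discs_for(write_iops, raid_penalty, disk_iops):
    """Discs needed to absorb a pure-write load at a given RAID penalty."""
    return math.ceil(write_iops * raid_penalty / disk_iops)

# 2000 host write IOPS on RAID 5 (write penalty ~4):
fc = discs_for(2000, 4, 180)   # 15k FC disc, ~180 IOPS each
sata = discs_for(2000, 4, 80)  # 7.2k SATA disc, ~80 IOPS each
print(fc, sata)  # → 45 100
```

More than twice the spindles for the same write load, which is why the per-gigabyte saving on SATA evaporates for write-heavy production work.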

Question has a verified solution.
