weatherman67
asked on
SAN technologies: traditional vs virtual
We are preparing to bring a SAN into our environment. We are currently debating traditional vs. virtual technology. We have narrowed the field to the Hitachi AMS1000 (possibly the 2300 now that it has been released) vs the HP EVA 8100.
I would like to hear from folks who have used both systems. Why did you choose that technology, and why have you either chosen to stay with it or dump it (or wish you could)?
One of the main areas we are struggling with is understanding the effect of disparate systems sharing a large disk group on the EVA. Is there really a noticeable performance penalty because of this, or is it a wash against the benefits of using all of your spindles?
From an administrative standpoint, we are all sold on the ease of management offered by the EVA.
Thanks for your help!
ASKER
MeyersD, would you please give some detail on why you prefer the HDS kit over HP? I am very interested in your opinion on this and the reasons behind it. Thanks.
There's nothing to stop you from defining lots of disk groups on the EVA and dedicating one to each job/server if you want, so the EVA can be used in non-virtual mode as well.
I would be interested in why meyersd prefers one over the other as well, maybe it's down to what kit you know? Certainly that's why I would go for the EVA.
Cache partitioning - HDS kit allows you to allocate write cache to LUNs so, unlike other arrays, you can stop slower drives (SATA/ATA) from hogging write cache and affecting the overall performance of the array.
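To make the idea concrete, here is a minimal sketch of what per-group write-cache budgets buy you. This is an illustration of the concept only, not HDS's implementation or tooling; all class names and figures are made up.

```python
# Sketch of the cache-partitioning idea: each LUN group gets a fixed
# write-cache budget, so a slow tier filling its own budget cannot
# starve the cache available to faster tiers.
class CachePartition:
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb  # cache ceiling for this LUN group
        self.used_mb = 0            # cache currently holding dirty data

    def try_buffer_write(self, mb):
        """Accept a write into cache only if this partition has room."""
        if self.used_mb + mb > self.budget_mb:
            return False  # this group's writes stall; other groups carry on
        self.used_mb += mb
        return True

# Illustrative split: FC LUNs get most of the cache, SATA is capped.
partitions = {
    "fc_luns": CachePartition(budget_mb=3072),
    "sata_luns": CachePartition(budget_mb=1024),
}

print(partitions["sata_luns"].try_buffer_write(2048))  # False: SATA hits its cap
print(partitions["fc_luns"].try_buffer_write(2048))    # True: FC is unaffected
```

Without the partition, those 2048 MB of slow-to-destage SATA writes would have sat in the shared cache and squeezed everyone.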
As to the rest - it's all spinning brown, innit? One array is much like another. Each vendor has a unique benefit - three that really spring to mind are NetApp's on-array de-duplication, EMC's Quality of Service manager and HDS's cache partitioning.
ASKER
Meyersd, do you really use the cache partitioning extensively? The reason I ask is that I have spoken to a few HDS users who say they just go with the default cache config. You are the first person, aside from Hitachi sales, who says they use it. I agree with you that it sounds compelling; I'm just wondering how practical it is in day-to-day use. Thanks for the answer, by the way. I do find it quite helpful.
On a well-configured array with plenty of Fibre Channel discs, it is of little value as data will be written out to disk quickly. Things will change as your environment grows - if you specify plenty of disc now, you'll allocate it to all sorts of apps, and you may find yourself putting some load on SATA disc as it's going to be low utilisation. The SATA discs will consume more write cache as they are between a half and two thirds slower than FC drives, and that will affect overall performance. The ability to limit the amount of write cache that SATA LUNs consume is something I wish all manufacturers would provide.
As a real world example, I'm working on some performance analysis files from an EMC CLARiiON array where the customer (a large organisation) has used SATA disc for production VMware. The highly random nature of VMware VMFS means that the CLARiiON can't do its funky write optimisations, so write cache is filling up. Once write cache fills, the array stops accepting host I/O for a few milliseconds until it's made some space, but those few milliseconds affect all attached servers, not just the hosts causing write cache to fill. They fell into the trap of using any available space no matter whether it was suitable or not. The fix is simple: they need more Fibre Channel disc. If the EMC array had cache partitioning, they could stop the rogue hosts affecting performance of important servers. As it is, I've advised them to turn off write cache on the SATA discs in the short term (which will hurt the VMware virtual machines on the SATA discs), and to buy more discs.
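The mechanism above is just inflow versus destage rate: cache fills whenever writes arrive faster than the back-end discs can drain them. A back-of-envelope sketch (all figures are illustrative assumptions, not measurements from that CLARiiON):

```python
# If inflow exceeds the destage rate, shared write cache fills in
# cache_mb / (inflow - drain) seconds; otherwise it never fills.
def seconds_until_cache_full(cache_mb, inflow_mb_s, drain_mb_s):
    """Time until write cache is full, or None if destage keeps up."""
    net_fill_rate = inflow_mb_s - drain_mb_s
    if net_fill_rate <= 0:
        return None  # discs drain cache as fast as it arrives
    return cache_mb / net_fill_rate

# FC tier: random writes destage quickly, so cache holds steady.
print(seconds_until_cache_full(cache_mb=4096, inflow_mb_s=80, drain_mb_s=120))  # None

# SATA tier: same inflow, much slower random destage -> cache fills fast.
print(seconds_until_cache_full(cache_mb=4096, inflow_mb_s=80, drain_mb_s=30))   # 81.92
```

Once that counter hits zero, every host on the array feels the pause, which is exactly why capping the slow tier's cache share matters.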
By the way - beware of extravagant performance claims for SATA discs and SATA arrays. SATA discs will run a production load, no sweat, but you need so many of them to absorb the number of writes generated, you may as well have purchased the more expensive SAS or FC discs in the first place.
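The spindle-count trap is easy to show with rough arithmetic. The per-drive IOPS figures and the RAID 10 write penalty below are commonly quoted ballpark assumptions, not vendor specs:

```python
import math

# How many spindles must absorb a random write workload?
# Each host write costs raid_write_penalty back-end I/Os (2 for RAID 10).
def spindles_needed(workload_write_iops, iops_per_drive, raid_write_penalty=2):
    """Drives required so the back end keeps up with the write load."""
    backend_iops = workload_write_iops * raid_write_penalty
    return math.ceil(backend_iops / iops_per_drive)

# 5,000 random write IOPS, ballpark per-drive figures:
print(spindles_needed(5000, iops_per_drive=180))  # 15k FC drives: 56
print(spindles_needed(5000, iops_per_drive=80))   # 7.2k SATA drives: 125
```

More than twice the SATA spindle count for the same load, which is where the "cheap" tier stops being cheap.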
Perhaps with Hitachi you can get better performance results, but it takes a skilled person to keep tuning that array for an ever-changing environment...