VMware SAN or Local Storage


Currently, I have VMware ESXi 5.1 running on two Dell PowerEdge 2970 servers in my corporate office and one Dell PowerEdge 2970 in a remote location.  These servers run off hard disks that are inside each server; I believe the disks are 7.2K SAS.  There are 6 disks in each server.  Disks 0 and 1 are in a mirrored configuration, and I keep my backups on the mirror.  Disks 2, 3, 4, and 5 are in a RAID 5 configuration, and this is the main datastore in VMware where the virtual machines (Windows Server 2003, MySQL, XP, etc.) are stored and run from.

I am looking into the possibility of purchasing a SAN so that I could have some centralized storage, but I am not familiar with the technology and am researching all available options.  I have attached a report of some metrics that were gathered on the performance of my current servers.  It would help my decision if I fully understood what this collected data really means.  For instance, I know IOPS are important, but 755.4 at 95%, 1071.3 at 99% and 1315.0 at peak means nothing to me.  Is there anyone out there who can break down what this report is telling me?  I would also welcome advice on what people are using for storage in the real world and what hardware is giving good performance in a production environment.

Your advice and assistance in this matter are greatly appreciated!
johnkerry8652 Commented:
Can you speak to an independent SAN dealer?

There are a number of different SAN products. They can connect via Ethernet iSCSI (cheaper) or via Fibre Channel (in theory faster, but more expensive).

All of these SAN storage boxes have specific performance (IOPS) targets.

Most of these SANs are going to be quicker and easier to use than the local storage available on your existing Dell 29xx servers (or Dell Rxxx servers), and you would get better resilience and increased capacity by centralising the storage and making it available to all of your VMware servers.

A "relatively" cheap iSCSI SAN would give you an impressive amount of data storage - perhaps upwards of 6 TB (based on fast SAS disks) or 20 TB (based on slightly slower SATA disks). However, SANs will also give you different performance figures (IOPS) depending upon how much you want to pay, the type of hard disk drives you include inside the boxes, the total number of network connections involved, the number of RAID controllers in the box, and so on. There can be big differences in IOPS between SATA and SAS disks, and this affects the cost.

But there is also no point in purchasing additional storage if it is not fully compatible with the servers, network switches, and VMware licensing (to support HA, vMotion and DRS) that you have now. If you don't have those, you need to budget for them before you buy storage.

If you don't have them already, you may find it better to invest in 8 to 12 spare Ethernet ports across your servers and at least one or two 1 Gb network switches before you start, to prepare for the introduction of iSCSI SAN storage equipment.

If your existing servers are older than 3 years, additional network cards and memory for them could be expensive.
Additional network ports are required before you can add your storage. This is to ensure that you can connect each server, via multiple cards, to both the live side and the standby side of the controller in your storage box. Each of your servers could therefore have one set of 3 cables (one from each network card) running to one controller on your new storage box and another set of 3 cables running to the second (standby) controller. Another set of cables could be dedicated to vMotion traffic between the servers, and so on.

To begin with, it might be worth checking the cost of 3 additional quad-port 1 Gb network cards for each of your existing servers (and more memory too), as well as the cost of replacement servers with sufficient Ethernet ports and memory.

Performance - 1,500 IOPS might be quick enough for some SAN storage workloads and 3,000 IOPS might be needed for others. All of these options are possible, but the disks required to reach those speeds with iSCSI storage will push up the price - roughly $10k up to $30k, which is at the cheaper end of iSCSI storage kit.

If you already have a good relationship with Dell (as you are using their servers) or a Dell dealer, and you are happy with them, give them a call to see if they can give you the name of an independent storage partner in your area. Try to compare the costs of a few suppliers to see whether better value or better features are available to you within your budget.

I can say that the Dell EqualLogic brand is a good one for iSCSI storage equipment.
But Dell also has (more expensive) storage options available in iSCSI and in Fibre Channel, HP has storage options, and so do NetApp and many others.

Check that these are all VMware-certified storage devices, that they support all of the VMware features that you need and use (DRS, vMotion), and that they offer replication, so that when you purchase another one you can replicate your data from your 1st box over to your 2nd box.

Hope this is OK and you find all of this interesting... It's a very exciting area, but also one that could end up costing you a lot of money if you are not careful.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
What do you want to do with your SAN storage?

Are you also licensed for a VMware vSphere edition that can take advantage of the shared storage features?

e.g. VMware HA, VMware DRS and vMotion?
krhoades7601 (Author) Commented:
I currently have Essentials, not Essentials Plus.  I am considering purchasing the Standard version.  About a month ago I lost the RAID 5 in both servers and had to rebuild.  I had my backups on the mirrored partition, so I did not actually lose my data; it was just painful rebuilding.  I am looking to mitigate downtime if my RAID arrays fail again.
dipopo Commented:
A SAN, or even better a NAS, would be your choice. For most people SANs are an investment, as you will of course have to buy them and possibly upgrade your existing hosts with HBA cards [not cheap] and also an FC switch. A NAS, on the other hand, will make use of your existing Ethernet network using iSCSI, and if you have gigabit switches already ... :-)

I'd use a NAS in your situation: configure jumbo frame support on the switch ports and the vmkernel port, and increase the MTU to 9000.
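
Something like this (a minimal sketch) would make the vmkernel side of that change from the ESXi shell, which ships with a Python interpreter. I'm assuming the iSCSI traffic sits on a vSwitch named vSwitch1 with a VMkernel port vmk1 - substitute your own names - and remember the physical switch ports must have jumbo frames enabled as well, or you'll see drops:

# Minimal sketch: raise the MTU to 9000 on the vSwitch and the VMkernel
# port used for iSCSI. vSwitch1 / vmk1 are assumptions - use your own names.
# The esxcli calls are the stock ESXi 5.x ones; Python is only used here
# to keep the example in one runnable file.
import subprocess

commands = [
    # MTU on the standard vSwitch carrying iSCSI traffic
    ["esxcli", "network", "vswitch", "standard", "set",
     "--vswitch-name=vSwitch1", "--mtu=9000"],
    # MTU on the VMkernel port bound to iSCSI
    ["esxcli", "network", "ip", "interface", "set",
     "--interface-name=vmk1", "--mtu=9000"],
]

for cmd in commands:
    rc = subprocess.call(cmd)
    if rc != 0:
        print("command failed: %s" % " ".join(cmd))
        break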
krhoades7601 (Author) Commented:
Do you have any NAS recommendations?  I have tried hooking up a Netgear NAS and the performance was horrible.  It was really slow.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
Remember, SANs also fail - it's only RAID!

More disks means more chances of something failing....
Get a good backup, or two SANs.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
And with more disks, rebuilds and restores take longer.
And it is one very large single point of failure.
krhoades7601 (Author) Commented:
I have no experience with SANs, so I just want to make sure my thought process on the use of a SAN is correct.  I would have a SAN with multiple Ethernet connections, and the two PowerEdge servers would plug into that one SAN.  The SAN would then be added as a datastore in ESXi, I would create my virtual machines on that datastore, and the servers would run from the SAN.  If this is the case, is the performance of the servers good?
dipopo Commented:
Yes, SANs do have disk/controller failures, but at least you have better tolerance of failure.

More thought needs to go into things like your:

1: Workload
2: Bandwidth/Network
3: Wallet size

There are small, medium and large capacity solutions out there from someone like HP:

HP N54L - Small
HP P2000 MSA - Medium
HP StoreServ 7200/7400/7450 - Large

As I said before, you will need a Fibre Channel switch and HBA cards to use an FC SAN; over Ethernet, this is NAS. And yes, you can connect your 2 servers to the NAS box using iSCSI: you carve up the space on the disks (after RAID is applied) into LUNs and present these as disks to your servers to use as datastores, and you can present multiple LUNs. That makes it easy to scale out: if you buy a 3rd server you can connect it too, and the next, and the next... you get the idea.
Also, use IOMeter to benchmark your servers' disk IOPS; it gives better output.

krhoades7601 (Author) Commented:
Dipopo, you have provided great information that I can use.  Just one final question: can you tell me, in terms I can understand, what the document I have attached to this question is telling me?  It is the output from a DPACK run.  The person who went over it with me was very rushed, and I didn't really quite understand what he was talking about.  Does it make any sense to you?  If not, that's okay - I was just trying to understand!
I just want to keep a few things simple.

Each SATA disk (7,200 RPM) does 120 to 130 IOPS on average.
Each Fibre Channel disk (15K RPM) does about 180 IOPS on average.

So if you have a server with 4 such disks in a RAID set, it can do roughly 500 IO/sec; pushing beyond that will not give you optimal performance.
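
To put rough numbers on that, here is a back-of-the-envelope sketch. The per-disk IOPS, the 70/30 read/write split and the RAID write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) are the usual rules of thumb, not measurements from your servers:

# Rough estimate of the front-end IOPS a RAID group can sustain.
# Illustrative rule-of-thumb numbers only - not measured values.

def raid_iops(disks, iops_per_disk, read_fraction, write_penalty):
    raw = disks * iops_per_disk              # back-end IOPS the spindles can deliver
    write_fraction = 1.0 - read_fraction
    # Each front-end write costs 'write_penalty' back-end IOs; each read costs 1.
    return raw / (read_fraction + write_fraction * write_penalty)

# 4 x 7.2K disks at ~130 IOPS each in RAID 5, 70% reads / 30% writes:
print(raid_iops(4, 130, 0.70, 4))   # ~274 front-end IOPS
# The same 4 disks serving 100% reads get the full raw figure:
print(raid_iops(4, 130, 1.00, 4))   # ~520 IOPS, i.e. the ~500 IO/sec above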

Again, you get what you pay for. Another important issue to consider when investing in SAN storage is cache, which is a key measure of performance. There are storage boxes with anywhere from 512 MB of cache to 2 TB of cache.

Just my 2 cents to consider.
The IOPS percentiles are based on your workload; see below:

A percentile means that for that percentage of the time, the IOPS usage is below the figure provided.

95% of the time, IOPS are below 755.4
99% of the time, IOPS are below 1071.3
Peak usage is 1315.0
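
If it helps to see where those numbers come from, a percentile is just "sort all of the sampled values and take the one that N% of the samples sit at or below". DPACK samples your disk activity over the collection period, and the figures it reports are these sorted cut-offs. A tiny sketch with made-up sample values (not data from your report):

# What a percentile means: sort the sampled IOPS values and take the value
# that N% of the samples sit at or below. The samples here are invented
# purely to illustrate - they are not from the attached DPACK report.

def percentile(samples, pct):
    ordered = sorted(samples)
    # index of the value that 'pct' percent of samples sit at or below
    idx = int(round((pct / 100.0) * (len(ordered) - 1)))
    return ordered[idx]

iops_samples = [220, 340, 410, 480, 505, 530, 620, 660, 700, 740,
                760, 820, 900, 980, 1050, 1100, 1180, 1250, 1290, 1310]

print(percentile(iops_samples, 95))   # 95% of sampled intervals were at or below this
print(max(iops_samples))              # the single busiest sample = the peak

So "755.4 at 95%" means that for 95% of the sampled intervals your hosts needed about 755 IOPS or fewer; only the remaining 5% of intervals (up to the 1315 peak) pushed above that. Storage is normally sized to cover the 95th-99th percentile rather than the absolute peak.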

Also the figures for your CPU and Memory look OK to me.

Another thing: I prefer RDM disks for applications like SQL. Try not to use virtual disks for SQL, as the SCSI commands/activity go via the vmkernel and create overhead. I tend to use raw device mappings, presented directly from my SAN/NAS LUNs to the VM, for these.
krhoades7601 (Author) Commented:
Thank you for the response.  You have provided very valuable information!
Question has a verified solution.
