SAN and Setup


Firstly, I'd like to start by saying I am no storage expert when it comes to SANs.

What I am trying to achieve is some sort of redundant storage network, and I was wondering how this is done. We currently run Hyper-V on most of our servers using local storage, and I want to move to some sort of clustered storage. I am aware that we need back-end storage and then connect to it over iSCSI or Fibre Channel. I believe Fibre Channel will require HBA cards, so that would add too much to the cost/budget.

I have been looking at the new HP MSA P2000 G3 series and connecting to it via iSCSI.
What exactly is required? Do I need SAN switches? Are these different from our standard switches? What do I need to buy? Is the MSA P2000 even capable of serving as back-end storage for 10 virtual machines?

One of the virtual machines will run SQL Server and Microsoft Dynamics AX.

Thanks experts!
djcanter Commented:
I can't say yes or no as to whether that configuration can support that VM load.
I use 15k drives rather than 7.2k and have no issues. I would suggest talking to sales support; they can advise how many IOPS 7.2k drives in your desired RAID array can support.

When we shopped for SANs, Dell had a performance analyzer that runs on each server and reports how many IOPS that server requires. Take the sum of the per-server IOPS and size the SAN for that total plus 20-50% headroom.
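That sizing step is simple arithmetic; a minimal sketch, with hypothetical per-server figures standing in for numbers you would get from a tool like the Dell analyzer:

```python
# Hypothetical per-server IOPS figures -- real numbers should come from a
# monitoring/analysis tool run against your actual workload.
server_iops = {"sql": 1200, "dynamics-ax": 400, "file": 150}

def required_san_iops(per_server, headroom=0.3):
    """Sum the measured per-server IOPS and add 20-50% growth headroom."""
    total = sum(per_server.values())
    return total * (1 + headroom)

# 1750 measured IOPS plus 50% headroom
print(required_san_iops(server_iops, headroom=0.5))  # 1750 * 1.5 = 2625.0
```

The headroom percentage is the judgment call: the closer you size to your measured load, the sooner you outgrow the array.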
Page 14 of this doc states that using 15k 1.6 TB drives, you can get 9,400 mixed IOPS.

The same calculation for a SAN using 7.2k drives in RAID 6 comes out to only 912 IOPS.

Spindle speed and RAID level can have a huge impact. You really need to know your requirements, then size your SAN accordingly.
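The impact of spindle speed and RAID level can be sketched with the usual rule-of-thumb write-penalty formula. The per-spindle IOPS figure and the 70/30 read/write mix below are illustrative assumptions, not measurements:

```python
# Common rule-of-thumb write penalties per RAID level (writes cost extra
# back-end I/Os for mirroring or parity).
RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def effective_iops(spindles, iops_per_spindle, read_fraction, raid_level):
    """Front-end IOPS the array can deliver once the RAID write penalty
    is applied to the write portion of the workload."""
    raw = spindles * iops_per_spindle
    penalty = RAID_WRITE_PENALTY[raid_level]
    return raw / (read_fraction + (1 - read_fraction) * penalty)

# e.g. 12 x 7.2k drives (assume ~75 IOPS each) in RAID 6, 70% read workload:
# roughly 360 front-end IOPS -- far below what 15k spindles would give.
print(effective_iops(12, 75, 0.7, 6))
```

This is why the same shelf of disks can look fine in RAID 10 and fall badly short in RAID 6 under a write-heavy load.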

That said, I wouldn't deploy on 7.2k spindles. The SAN is the single most expensive piece in our rack, and I don't want to outgrow it before five years.
Any switch you want to use should support jumbo frames. We are using an EqualLogic SAN connected to a stack of Dell PowerConnect switches. In my config, each Hyper-V server has 6 NICs: 2 for SAN, 2 for management, and 2 for virtual machines. The Dell switches also use LLDP to identify the SAN and define quality-of-service rules on the switch automatically.

I did some testing on a Cisco SG300-series switch, but I did not test its stacking capability.

I roughly used this as a guide for my configuration.
dqnetAuthor Commented:
So just to confirm: any switch can be used provided it supports jumbo frames? I was under the impression you needed specific 'SAN switches'. We have Cisco 2960s as our core switches. My objective would be to use that HP storage device, present LUNs to our servers, and use CSV. Is this feasible? Or even correct? Thanks!

You would need a SAN switch if you choose to use Fibre Channel; I don't have experience with FC, however.

For iSCSI, a gigabit switch that supports jumbo frames will be fine. I connect the SAN and each Hyper-V host to multiple switches in the stack for redundancy.

For my Hyper-V implementation, I have 2 iSCSI targets connected to each Hyper-V host: one is the CSV, the other is the cluster witness. Each VM is configured to use a VHD on the CSV as its OS disk; in my case the data disk is also on the CSV, but you could create individual iSCSI targets on the SAN for the data disks.
To answer your earlier question about whether the HP MSA P2000 G3 can handle 10 VMs:
that depends entirely on the configuration of the SAN. The drives you install and the RAID level you choose determine how many IOPS the SAN can support.

For the servers: what types of VMs are they, and specifically how many IOPS do they need?
SQL and Exchange are heavy IOPS consumers. Exchange with 5,000 users at 500 emails per user per day is estimated to use about 3,000 IOPS, while a desktop OS workstation averages about 18 IOPS.
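Using the two figures above as planning numbers, totalling a mixed workload is a one-liner; the workload mix below is hypothetical:

```python
# Rule-of-thumb IOPS per workload, taken from the estimates quoted above.
# These are planning figures, not measurements from your environment.
WORKLOAD_IOPS = {"exchange_5000_users": 3000, "desktop_vm": 18}

def estimate_total_iops(counts):
    """Sum rule-of-thumb IOPS for a mix of workloads."""
    return sum(WORKLOAD_IOPS[name] * n for name, n in counts.items())

# Hypothetical mix: one large Exchange deployment plus ten desktop VMs
print(estimate_total_iops({"exchange_5000_users": 1, "desktop_vm": 10}))  # 3180
```

A real sizing exercise should replace these rule-of-thumb figures with measured per-VM numbers before adding headroom.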
dqnetAuthor Commented:
Excellent response, many thanks. In answer to the IOPS question:
The server will be running Windows Server 2012 with RemoteApp installed, hosting Microsoft Dynamics AX. Average user count is 50. It will also host approximately 300 mailboxes running on a flat-file architecture with MDaemon. We do have an SQL server, but that will probably be running on local storage on one of the other servers.

The disks would be 2 TB Western Digital enterprise LFF drives operating at 7,200 RPM.

Unless recommended otherwise? What would your expert opinion say about the above?
dqnetAuthor Commented:
Extremely sorry for the delay; I was stuck out of town on a virtualisation project.

OK, as I understand it, 15k disks are the way forward.
I also wanted to ask about your setup, where your data disks are also on the CSV.

Does that mean you have at least 2 VHDs per VM?
What would be the benefit? Why not just partition the VM's single disk into two separate drives (partitions) rather than have 2 VHDs (one for the OS and one for the data)?

Personally, I don't like to partition. I can spec a 40 GB OS drive and a suitably sized data drive, say 100 GB. If either drive needs additional space, I can add capacity to the VHD, then extend the partition without much effort. Dealing with multiple partitions on one disk complicates that process.
dqnetAuthor Commented:
Thanks a million!