vSphere 4 Essentials Plus SAN HA Setup Guidance

We want to migrate our existing traditional infrastructure to a virtualised one. Our five Dell PowerEdge servers are several years old and will fail at some point, plus we need the flexibility to deploy new servers to test the IT solutions we sell.

We have quite a basic setup: an AD domain controller / file server, an Exchange 2003 server, a SQL Server 2000 server, a web server, and an extra server running anti-virus management, network monitoring and some other minor apps. We have five employees who, on a daily basis, use Outlook email and a web-based CRM. Our customers use our web server. We plan to upgrade to the latest SQL Server and Exchange; our existing 32-bit servers have held us back to date.

We have a quote from Dell for:
- 2 x R510, each with 2 x Xeon E5507 processors, 24 GB RAM and a 250 GB SATA HD, to act as hosts
- 1 x R410 with 1 x Xeon E5507 processor, 16 GB RAM and a 250 GB SATA HD, for vCenter Server
- MD3200i iSCSI SAN with 12 x 450 GB SAS drives
- VMware vSphere 4 Essentials Plus

We have had some performance analysis conducted by Dell on our existing servers, and they assure us the above more than meets our requirements.

We also have, from our existing infrastructure, a Netgear GSM7224 24-port Layer 2 managed switch, a WatchGuard Firebox X550e security appliance and a Mitel CS5200 IP phone system.

Our current infrastructure has all servers on the same subnet, with some servers published to the internet. External DNS is provided by our ISP. Our employees connect via VPN to the WatchGuard firewall to access some of our internal resources, including the phone system and Exchange.

I am totally new to vSphere, though I have experience of VMware through Workstation and Server; I'm also new to SAN technology.

I've got a very rough idea how a virtualised setup might look, with our servers running as VMs connected to a virtual switch, and that connecting through a physical network port to the outside world for connection to the firewall.

I'd appreciate it if someone could provide some feedback on the equipment spec and also on the basics of networking up a two-host environment to a SAN to provide HA, plus exposing the virtual subnet to external hardware, i.e. connecting the IP phone system and firewall.

Our switch is VLAN capable, so can we have the storage traffic and that internal subnet on the same switch?

Also, I'd appreciate some best-practice guidance on setting up the storage for the different types of servers we have, i.e. SQL and Exchange.

Many thanks.
Danny McDaniel, Clinical Systems Analyst, commented:
Just a quick thought before I have to go to a meeting... virtualize your vCenter server and use the money you would have spent on the physical server to increase the amount of RAM on your other hosts and, if possible, upgrade the CPUs. You're more likely to run out of memory before using all of the CPU, but I think the E5507s are a little light on processing power compared to others out there at a similar price.
This is the totally bare minimum. Ideally you want N+1 to give you resilience, and that means a minimum of 3 hosts running ESX. Additionally, as danm66 says, running vCenter as a VM is definitely the preferred option, and they have WAY over-spec'd the server to achieve this. Pool your resources and get 3 identical hosts.

Not sure I'd trust a Dell SAN; there are some good SOHO products around. Any idea what your requirements are? It's difficult to give recommendations without knowing the capacity-planning details.
neburton (Author) commented:
Our current 5 servers are single-core dinosaurs from 2003. The storage in use across all our current boxes is probably less than 300 GB; the new storage will provide around 4 TB in RAID 5, which is more than 10x that. We want the new kit to consolidate/replace that hardware and give us a playground to test the software solutions we sell. We have a 10U rack that limits us, and a limited budget. I don't think we need to go overboard on resilience; we don't have to provide any SLAs to our customers, and we can tolerate a certain amount of downtime ourselves.
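As a sanity check on the "around 4 TB" figure, here's a quick back-of-the-envelope calculation. It assumes a single RAID 5 group losing one disk's worth of capacity to parity, with an optional hot spare; the actual usable space also depends on the MD3200i's own formatting overhead, which isn't modelled here.

```python
def raid5_usable_gb(disks: int, disk_gb: int, hot_spares: int = 0) -> int:
    """RAID 5 usable capacity: one disk-equivalent lost to parity,
    minus any disks reserved as hot spares."""
    data_disks = disks - hot_spares - 1
    return data_disks * disk_gb

# 12 x 450 GB SAS drives, as per the MD3200i quote
print(raid5_usable_gb(12, 450))                # 4950 GB, all 12 in one group
print(raid5_usable_gb(12, 450, hot_spares=1))  # 4500 GB, keeping one hot spare
```

With a hot spare, 4500 GB (decimal) works out to roughly 4.1 TiB, which matches the "around 4 TB" figure.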

Okay, if you don't need resilience, then cut the servers down to 2, but put in as much memory as you can afford.

Size up all your storage and tell Dell what you want; don't let them lead you into getting 4 TB of storage when you don't need it! There are a lot of good, cheap solutions out there. Have a look at Drobo, for example; they do some awesome small storage with some great features.
To add to "as much RAM as you can afford" :-) go for two six-core processors (56xx series). Six cores is the maximum you can go to without having to buy an Advanced or Enterprise Plus license.
Agreed re virtualizing vCenter server... virtualize, virtualize, virtualize!
You did not mention the number of NICs you are going for... I'd ensure you get at least 6 ports (2 onboard plus 2 x dual-port PCIe), ideally 8 for greater flexibility. This will allow you to isolate management and VMotion traffic from iSCSI and the production network. If you are planning to have a DMZ on the same host, I'd go for an extra 2. Check out the iSCSI SAN configuration guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf
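To make the suggested split concrete, here's one illustrative 8-port layout per host, expressed as a simple data structure. The vSwitch names, vmnic numbers and pairings are assumptions for the sketch, not a prescription; the point is that each traffic type gets a redundant pair of uplinks, ideally split across the onboard and PCIe NICs.

```python
# Hypothetical 8-NIC layout per host, following the advice above.
# Each traffic type gets two uplinks for redundancy.
nic_layout = {
    "vSwitch0": {"uplinks": ["vmnic0", "vmnic4"], "traffic": "Management + VMotion"},
    "vSwitch1": {"uplinks": ["vmnic1", "vmnic5"], "traffic": "iSCSI"},
    "vSwitch2": {"uplinks": ["vmnic2", "vmnic6"], "traffic": "VM production"},
    "vSwitch3": {"uplinks": ["vmnic3", "vmnic7"], "traffic": "DMZ"},
}

total_ports = sum(len(v["uplinks"]) for v in nic_layout.values())
print(total_ports)  # 8 ports in total, 2 per traffic type
```

Drop vSwitch3 if you don't need a DMZ on the hosts, and you're back to the 6-port minimum.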
Also, since you mentioned you're new to vSphere: I know you said you know Workstation and Server, but vSphere is quite a different beast ;-) I'd recommend searching Amazon for a few books. The first that comes to mind is Mastering VMware vSphere 4 by Scott Lowe, but there are plenty of others. Enjoy!

Just my 2p ;-)
neburton (Author) commented:
Thanks for all the advice so far. I've just noticed Dell have an offer on their processors, so I have requested a revised quote for the Intel Xeon X5670 6-core, 2.93 GHz, and 48 GB of RAM.

I've ditched the R410 for vCenter and am going with the advice to virtualise this.

I'm also getting 2 PowerConnect 5424 switches for the iSCSI storage traffic, as I've determined my current switch is too old.

I'd appreciate some advice on the other points I raised in my original question, about connecting VMs on a virtual subnet through to a physical switch and other physical devices (all to be on the same subnet). Also, some guidance on storage setup for different types of VMs.

Sorry I missed the other questions.

I'd look at creating VLANs for all your traffic isolation. This will allow you to present the separate networks through to VMware (which fully supports VLANs), and also keep that separation through to physical systems. You want to do this as much as possible, as things like storage and management traffic really need to be separated securely from public traffic. You'll probably need to mix VLAN-tagged ports (for the ESX servers) with untagged ports (physical servers and storage). VLANs are a big topic, but let me know if you want more detail on this.
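As a purely illustrative sketch of what that mix might look like for the kit described in this thread (the VLAN IDs and names here are made-up examples, not recommendations): ESX-facing switch ports carry tagged traffic for several VLANs, while ports to physical devices sit untagged on a single VLAN.

```python
# Hypothetical VLAN plan: IDs and names are examples only.
vlan_plan = {
    10: "Management + VMotion",
    20: "iSCSI storage",
    30: "Production (servers, phones, firewall)",
    40: "DMZ",
}

# ESX-facing switch ports are tagged (trunk) for all relevant VLANs;
# ports to physical devices are untagged (access) on a single VLAN.
port_config = {
    "esx-host-1":   {"mode": "tagged",   "vlans": [10, 20, 30, 40]},
    "esx-host-2":   {"mode": "tagged",   "vlans": [10, 20, 30, 40]},
    "mitel-phones": {"mode": "untagged", "vlans": [30]},
    "firewall":     {"mode": "untagged", "vlans": [30]},
    "md3200i":      {"mode": "untagged", "vlans": [20]},
}

# Sanity checks: every referenced VLAN exists, and untagged ports
# carry exactly one VLAN.
assert all(v in vlan_plan for cfg in port_config.values() for v in cfg["vlans"])
assert all(len(cfg["vlans"]) == 1
           for cfg in port_config.values() if cfg["mode"] == "untagged")
```

This keeps the phones and firewall on the same production subnet as the VMs, which was the requirement in the original question.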

Storage setup is maybe a bit tricky without knowing a lot more detail. For iSCSI you probably want to look at a maximum of around 10 VMs per datastore. I don't believe the PowerVault supports VAAI, in which case you don't get the advantages of storage offloading, so you need to be careful of VM contention for disk; the more VMs that share a datastore, the more risk of contention you run (SCSI locking issues, mainly).
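Following that rough rule of thumb (around 10 VMs per iSCSI datastore is a guideline, not a hard limit), the arithmetic for how many datastores to carve out of the SAN is simple:

```python
import math

def datastores_needed(num_vms: int, max_vms_per_datastore: int = 10) -> int:
    """Minimum number of datastores so that no datastore exceeds
    the per-datastore VM guideline."""
    return math.ceil(num_vms / max_vms_per_datastore)

print(datastores_needed(5))   # the current 5 servers -> 1 datastore
print(datastores_needed(25))  # with test/dev VM sprawl -> 3 datastores
```

So the consolidated production VMs fit comfortably in one datastore, and it's the test/dev "playground" VMs that will drive the need for more.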

Separate out things like ISO images and templates from production VMs. Don't bother separating swap files or pagefiles; it's more hassle than it's worth, to be honest. There isn't much more to consider for the storage layout beyond this, as you have a single SAN with one type of disk and not much difference between the VMs you'll be running. Keep it simple; it'll make life easier to manage!