thenos (Australia) asked:

Running Virtual Machines from dual Gigabit iSCSI Storage

I would like to virtualise our internal IT infrastructure to reduce downtime in the event of server failure and implement an offsite backup policy. I intend to ask several questions relating to this project but have decided to break them up to allow them to be answered easily.
Is there a way on EE to link them all together?

We have two sites with a 2 x 2/2 Mbps SHDSL connection and an IPSec VPN between them.
We have four different domains: three at one site and one at our main site.
Each site will be running 3-5 virtual servers, e.g. DC/DNS/WINS, SQL/IIS, network management, WSUS.
There are 60 users at our main site and 30 at our branch.

I was thinking of getting two of the QNap TS-809U-RP Turbo NAS, one for each site.
http://www.qnap.com/pro_detail_feature.asp?p_id=111
This would replace our FTP and file servers and act as the iSCSI storage for the virtual machine images.
I also plan to purchase another server to run Windows 200? and VMware Server 2 (free) (open to suggestions here).

The main question I have is: will the dual Gigabit Ethernet of the QNap be fast enough to mount a partition on the front-end server, so that it appears local, and run the virtual machine images (VHDs) from that location? Maybe dedicate one of the QNap's Ethernet ports to this physical server, or load balance across a Gigabit 802.3ad-compliant switch?

If this will work would it be a good idea to create a separate RAID volume for the SQL databases?
giltjr (United States) replied:

Well, it depends on how much file traffic you have.

Say you were going to get a SCSI-based SAN: you would end up with at least 2 Gbps, if not 4 or 8, and that is with only a single connection. If you go with dual connections, you double the speed.

The problem with going directly to the server is that you only have a single 1 Gbps data path which may not be fast enough.

Personally, I would put the QNap and the server on the same physical switch and use 802.3ad to give you 2 Gbps of performance (well, not exactly 2 Gbps, but close). I would also set up jumbo frames and make the MTU the largest common size: if the server supports 8992 and the QNap supports 8192, then set the MTU to 8192.
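Once jumbo frames are configured it is worth checking that the path really carries them end to end. A minimal sketch, assuming a Linux host; the QNap address and MTU values are examples only:

# probe_mtu.py - verify that a jumbo-frame path to the iSCSI target works
# Assumes Linux ping syntax; 192.168.10.20 is a made-up QNap address.
import subprocess

TARGET = "192.168.10.20"   # hypothetical QNap iSCSI address
MTU = 8192                 # smallest MTU common to server, switch and NAS
PAYLOAD = MTU - 28         # minus 20-byte IP header and 8-byte ICMP header

# -M do sets the don't-fragment bit, so the ping fails if any hop cannot carry the frame
result = subprocess.run(
    ["ping", "-c", "3", "-M", "do", "-s", str(PAYLOAD), TARGET],
    capture_output=True, text=True)

print("jumbo frames OK" if result.returncode == 0
      else "path cannot carry %d-byte frames" % MTU)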

If possible put the QNap, the server on the same IP subnet.  This allows the switch to do switching, no routing takes place.

If for some QNap and the server have to be on separate IP subnets, then make sure you have a L3 switch and let it do the routing, then at least all of the traffic will stay within the switch even if it is routed.

ryder0707 (Malaysia) replied:

Dual Gigabit is fine, but you need to understand that that is purely for the iSCSI traffic.

I recommend at least 4 NICs:
2 NICs dedicated to iSCSI traffic via a physical switch used only for iSCSI. You can also include management traffic here, but I recommend dedicated NICs for management/VMkernel traffic such as vMotion (if you plan to use it in future). This will be vSwitch0.

2 NICs dedicated to VM traffic. This will be vSwitch1.

FYI, ESX teaming at the vSwitch only manages outbound traffic, so it is better to configure an EtherChannel for each pair at the physical switch to manage inbound traffic, and don't forget to change the policy to "Route based on IP hash" if you decide to go with EtherChannel (there is a small sketch of the idea after the links below).

Refer to the links below for more info on EtherChannel:
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1001938&sliceId=1&docTypeID=DT_KB_1_1&dialogID=57447605&stateId=0 0 57451234
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1004048&sliceId=1&docTypeID=DT_KB_1_1&dialogID=57447605&stateId=0 0 57451234
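
For what it's worth, the idea behind "Route based on IP hash" is that each source/destination IP pair is hashed to pick one uplink, so traffic to different peers spreads across both NICs in the EtherChannel. A toy illustration of the concept only, not VMware's exact hash function:

# Toy illustration of IP-hash uplink selection (not the real ESX algorithm)
def pick_uplink(src_ip: str, dst_ip: str, n_uplinks: int = 2) -> int:
    # XOR the last octet of each address, then take it modulo the number of uplinks
    src_last = int(src_ip.split(".")[-1])
    dst_last = int(dst_ip.split(".")[-1])
    return (src_last ^ dst_last) % n_uplinks

# The same VM talking to two different clients can land on different physical NICs
print(pick_uplink("192.168.1.10", "192.168.1.50"))   # uplink 0
print(pick_uplink("192.168.1.10", "192.168.1.51"))   # uplink 1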

"If this will work would it be a good idea to create a separate RAID volume for the SQL databases?"
Yes, of course; the DB and log files should be located on separate LUNs.
thenos (Asker):

I was hoping to keep licensing costs down by using the free version, VMware Server 2.

The device that I planned on getting only has dual Gigabit Ethernet. I was hoping to get away with dedicating one port to iSCSI/VM traffic and one to FTP/file server/web server traffic via a switch.
I wanted to keep the file server as a simple NFS share, backed up every hour to a USB-attached device using StorageCraft's ShadowProtect Server, with this replicated to the remote site both ways each night.

So in your opinion this would be pushing it?
I found this regarding VMware and the device: "Supports VMware vSphere (ESX/ESXi 3.5, 4.x)"
from http://www.qnapsecurity.com/pro_detail_software.asp?p_id=111

What if I ran dual Gigabit Ethernet between the server running VMware and the switch, and the same between the QNap NAS and the switch, and then had two Ethernet links going to the switch that connects workstations to the network, with load balancing configured?

Maybe it would be better to run the VHDs on a disk local to the server running VMware, and have backup images of the live virtual machines made on the iSCSI volume (might have to spend some money here). These would then replicate to the other QNap device at the DR site each night, both ways, using rsync (a rough sketch of such a job follows the two scenarios below).
If a server goes down, the backup image of the virtual machine could be brought up:
* from the local iSCSI target backup on the local VMware server.
* from the remote iSCSI target backup on the remote VMware server, accessed via the IPSec VPN.

If the site is destroyed, the backup image of the virtual machine could be brought up:
* from the remote iSCSI target backup on the remote VMware server, accessed by staff remotely using an SSL VPN client with LDAP integration with ADS.
* ADS replication between the sites, while both were up, would allow authentication to work correctly.
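
A rough sketch of what the nightly rsync job between the two QNaps might look like; the share paths, host name and SSH user are placeholders:

# Nightly replication sketch; paths, host name and SSH user are placeholders only.
import subprocess

SRC = "/share/VMBackups/"                                  # local backup share on the QNap
DST = "backup@qnap-dr.example.local:/share/VMBackups/"     # DR-site QNap over SSH

# -a preserves permissions/times, -z compresses over the slow SHDSL link,
# --delete keeps the DR copy an exact mirror of the local backups
subprocess.run(["rsync", "-az", "--delete", "-e", "ssh", SRC, DST], check=True)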

The only problem I can think of would be DNS/routing, as each site has a different subnet and the recovered virtual servers would either obtain a new address on the DR subnet or keep their old one. I think I may have found a solution here https://www.experts-exchange.com/questions/24883999/Using-VMs-for-offsite-disaster-recovery-IP-range-problem.html but I am getting a little confused.
I suppose what I would need is some sort of DNS entry which obeyed the following logic and was the same at both sites for all servers.

If   {server1_IP_Address1 is pingable}
   Then server1 = IP_Address1
Else
   server1 = IP_Address2
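
In script form that check might look something like this; a minimal sketch, with placeholder addresses and Linux ping syntax assumed:

# Failover address selection sketch; the two addresses are placeholders.
import subprocess

PRIMARY = "10.0.1.10"     # server1 at the production site
SECONDARY = "10.0.2.10"   # recovered copy at the DR site

def pingable(ip: str) -> bool:
    # one ping with a 2-second timeout (Linux ping syntax)
    return subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                          capture_output=True).returncode == 0

server1_ip = PRIMARY if pingable(PRIMARY) else SECONDARY
print("server1 should resolve to", server1_ip)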

Any ideas?



thenos (Asker):

I had an idea for the DNS/subnet problem.

Could I set up a DHCP reservation on the DHCP server which gave the recovered server's NIC a dedicated IP address, or even assign it statically? Then put in two similar DNS entries, with the production entry above the backup version. When the production entry can't be reached, clients would continue down the list of entries until they find the failover one. I doubt this would work, though, as you can sort the list of DNS entries by clicking the column titles in the DNS MMC, and I don't believe MS would have made the ordering that easy to change, as it could have some dire consequences.
ryder0707:

Sorry for the confusion, I'm talking from the ESX host perspective; the device itself is fine with 2 Gigabit ports.
As I said, 2 Gigabit NICs on the NAS are fine; you just need dedicated NICs at the ESX host for users to access all the VMs.
In case you are not aware, ESXi in single-server mode is free as well; just register at the VMware website to get the free licence key.
I recommend ESXi for production as it is more reliable and will give you better performance.
giltjr:

Again, with regards to the QNap, I think you are going to have performance problems if you try to dedicate 1 NIC to the VM host. That is only 1 Gbps, which is about 125 MBps. Although that sounds fast, it really is not when you are doing VM hosting for multiple OSes. In fact, doubling to 250 MBps by doing the load balancing is not that fast either. On some of our VM hosts we have dual 2 Gbps HBAs going to our SANs (500 MBps total) and we still have performance issues.
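To put numbers on those figures, the conversion is just bits to bytes, ignoring protocol overhead:

# Back-of-the-envelope link throughput in MB/s, ignoring protocol overhead
def mbytes_per_sec(gigabits_per_sec: float) -> float:
    return gigabits_per_sec * 1000 / 8    # 1 Gbps = 1000 Mbit/s = 125 MB/s

print(mbytes_per_sec(1))        # single GigE NIC        -> 125.0
print(mbytes_per_sec(2))        # 2 x GigE via 802.3ad   -> 250.0
print(mbytes_per_sec(2 * 2))    # dual 2 Gbps FC HBAs    -> 500.0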

Now, I just assumed you would have at least two NICs on the ESX server, if not more (just as ryder0707 suggested), depending on your expected network traffic and security requirements. We have one set of VM hosts that use iSCSI; the hosts have 4 NICs, two dedicated to iSCSI traffic and two dedicated to "network traffic". We have two switches, one for network traffic and one for iSCSI traffic. The iSCSI devices have statically assigned IP addresses, all in the same subnet, with no routing whatsoever.

For IP addressing I would use DHCP and dynamic DNS. This way, when/if the server's IP address changes, the DNS entry is also changed. However, you do have to put a low TTL on the DNS entry.
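If the failover ever gets scripted, the dynamic DNS update itself is straightforward. A minimal sketch using the dnspython library; the zone name, DNS server address, record name and TTL are assumptions for illustration:

# Dynamic DNS update sketch using dnspython; zone, server and addresses are placeholders.
import dns.update
import dns.query

update = dns.update.Update("corp.example.local")
# a low TTL (60 s) means clients notice the change quickly after a failover
update.replace("server1", 60, "A", "10.0.2.10")
dns.query.tcp(update, "10.0.1.5")   # send the update to the DNS server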
thenos (Asker):

Sorry for the delay in replying. I have been away.

giltjr: "We have one set of VMHosts that use iSCSI and the hosts have 4 NIC's, two dedicated for iSCSI traffic and two dedicated for "network traffic".  We have two switches, one for network traffic and one for the iSCSI traffic"

Wouldn't that leave the iSCSI and network-traffic switches each as a single point of failure? I wanted to avoid single points of failure as much as possible and make everything redundant, though I understand budget constraints sometimes don't allow true redundancy.

I guess in order to do this I would need two switches, each attached via dual Gigabit Ethernet connections to the iSCSI target (QNap), the ESXi host, and another 48-port switch to which the workstations are connected.
QoS/load balancing would be used to prioritise iSCSI traffic. This would also allow the QNap to be used as a file server (NFS) and FTP server, taking CPU and network bandwidth load off the ESXi host, since requests could go directly to the NAS instead of through a VM running on the ESXi host.

What are your thoughts on this? Could someone point me in the right direction for configuring the switches, as I haven't had much experience with managed switches?
ASKER CERTIFIED SOLUTION by ryder0707 (Malaysia) [solution text available to Experts Exchange members only]

SOLUTION [solution text available to Experts Exchange members only]
thenos (Asker):

Thanks for the help guys