Running Virtual Machines from dual Gigabit iSCSI Storage

Posted on 2010-01-04
Last Modified: 2012-06-21
I would like to virtualise our internal IT infrastructure to reduce downtime in the event of server failure and implement an offsite backup policy. I intend to ask several questions relating to this project but have decided to break them up to allow them to be answered easily.
Is there a way on EE to link them all together?

We have two sites with a 2 x 2/2 Mbps SHDSL connection and an IPSec VPN between them.
There are four different domains, three at one site and one at our main site.
Each site will be running 3-5 virtual servers, e.g. DC/DNS/WINS, SQL/IIS, network management, WSUS.
There are 60 users at our main site and 30 at our branch.

I was thinking of getting two of the QNap TS-809U-RP Turbo NAS units, one for each site.
These would replace our FTP and file servers and act as the iSCSI storage for virtual machine images.
I also plan to purchase another server to run Windows 200? and VMware Server 2 (free) (open to suggestions here).

The main question I have is: will the dual Gigabit Ethernet of the QNap be fast enough to mount a partition on the front-end server, so it appears local, and run the virtual machine images (VHDs) from that location? Should I dedicate one of the QNap's Ethernet ports to this physical server, or load balance over a Gigabit 802.3ad-compliant switch?

If this works, would it be a good idea to create a separate RAID volume for the SQL databases?
Question by:thenos
    LVL 57

    Expert Comment

    Well, it depends on how much file traffic you have.

Say you were going to get a SCSI-based SAN: you would end up with at least 2 Gbps, if not 4 or 8, and that is with only a single connection. If you go with dual connections, you double the speed.

The problem with going directly to the server is that you only have a single 1 Gbps data path, which may not be fast enough.

Personally I would put the QNap and the server on the same physical switch and use 802.3ad to give you 2 Gbps performance (well, not exactly 2 Gbps, but close). I would also set up jumbo frames and make the MTU as large as the common supported size: if the server supports 8992 and the QNap supports 8192, set the MTU to 8192.
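The MTU-matching rule above boils down to taking the minimum of what every device in the path supports; a trivial sketch (the switch value is a hypothetical addition, since the switch must pass jumbo frames too):

```python
def common_mtu(max_mtus):
    """Largest MTU usable end to end = the smallest maximum
    supported by any device in the path."""
    return min(max_mtus.values())

# Values from the example above, plus a hypothetical switch limit
print(common_mtu({"server": 8992, "qnap": 8192, "switch": 9000}))  # -> 8192
```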

If possible, put the QNap and the server on the same IP subnet. That way the switch just switches; no routing takes place.

If for some reason the QNap and the server have to be on separate IP subnets, then make sure you have an L3 switch and let it do the routing; at least then all of the traffic will stay within the switch even though it is routed.

    LVL 24

    Expert Comment

Dual is fine, but you need to understand that it is purely for iSCSI traffic.

    I recommend at least 4 NICs:
2 NICs dedicated to iSCSI traffic via a dedicated physical switch used only for iSCSI. You can also include management traffic there, but I recommend dedicated NICs for mgmt/vmkernel traffic such as vMotion (if you plan to use it in the future). This will be vSwitch0.

2 NICs dedicated to VM traffic; this will be vSwitch1.

FYI, ESX teaming at the vSwitch only manages outbound traffic. It is better to configure EtherChannel for each pair at the physical switch to manage inbound traffic, and don't forget to change the load-balancing policy to "Route based on IP hash" if you decide to go with EtherChannel.

Refer to the links below for more info on EtherChannel.

"If this will work would it be a good idea to create a separate RAID volume for the SQL databases?"
Yes, of course; the DB and logs should be located on separate LUNs.

    Author Comment

I was hoping to keep licensing costs down by using the free version, VMware Server 2.

The device that I planned on getting only has dual Gigabit Ethernet. I was hoping to get away with dedicating one port to iSCSI/VM traffic and one to FTP/file server/web server traffic via a switch.
I wanted to keep the file server as a simple NFS share, backed up every hour to a USB-attached device using StorageCraft's ShadowProtect Server, with the backups replicated to the remote site both ways each night.

So in your opinion this would be pushing it?
I found this regarding VMware and the device: "Supports VMware vSphere (ESX/ESXi 3.5, 4.x)".

What if I ran dual Gigabit Ethernet between the server running VMware and the switch, and the same between the QNap NAS and the switch, then had two Ethernet links going from the switch to the workstation network, with load balancing configured?

Maybe it would be better to run the VHDs on a disk local to the server running VMware and have backup images of the live virtual machines made on the iSCSI volume (might have to spend some money here). These would then replicate to the other QNap device at the DR site each night, both ways, using rsync.
If a server goes down, a backed-up image of the virtual machine could be brought up:
* from the local iSCSI target backup on the local VMware server.
* from the remote iSCSI target backup on the remote VMware server, accessed via the IPSec VPN.

If the site is destroyed, a backed-up image of the virtual machine could be brought up:
* from the remote iSCSI target backup on the remote VMware server, accessed by staff remotely using an SSL VPN client with LDAP integration with ADS.
* ADS replication between the sites, while both were up, would allow authentication to work correctly.

The only problem I can think of would be DNS/routing, as each site has a different subnet and recovered virtual servers would obtain an address on the other site's subnet. I think I may have found a solution here, but I am getting a little confused.
I suppose what I would need is some sort of DNS entry that obeyed the following logic and was the same at both sites for all servers.

If   server1_IP_Address1 is pingable
Then server1 = IP_Address1
Else server1 = IP_Address2
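That logic can be sketched in a few lines of Python (the addresses are hypothetical; in practice this would live in a monitoring script or, better, be handled in DNS itself):

```python
import subprocess

def primary_is_up(ip):
    """Probe the primary address with a single ping.
    (-c/-W are the Linux ping flags for count and timeout.)"""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def choose_address(primary_up, primary_ip, backup_ip):
    """Return the production IP while it responds, else the failover IP."""
    return primary_ip if primary_up else backup_ip

# Hypothetical addresses for server1 at the two sites
ip = choose_address(primary_is_up("10.0.1.10"), "10.0.1.10", "10.0.2.10")
```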

    Any ideas?


    Author Comment

Had an idea about the DNS/subnet problem.

Could I set up a DHCP reservation on the DHCP server that gives the recovered server's NIC a dedicated IP address, or even assign it statically? Then put in two similar DNS entries, with the production entry above the backup version. When the production entry can't be resolved, resolution would continue down the list of entries until it finds the failover one. I doubt this would work, though, as you can sort the list of DNS entries just by clicking the column titles in the DNS MMC; I don't believe MS would have made the ordering that easy to change if it could have such dire consequences.
    LVL 24

    Expert Comment

Sorry for the confusion; I'm talking from the ESX host perspective. The device itself is fine with 2 Gigabit ports.
As I said, 2 Gigabit NICs are fine; you just need dedicated NICs at the ESX host for users to access all the VMs.
If you are not aware, ESXi in single-server mode is free as well; just register at the VMware website to get the free license key.
And I recommend ESXi for production, as it is more reliable and will give you better performance.
    LVL 57

    Expert Comment

Again, with regards to the QNap, I think you are going to have performance problems if you try to dedicate 1 NIC to the VM host. That is only 1 Gbps, which is about 125 MBps. Although that sounds fast, it really is not when you are doing VM hosting for multiple OSes. In fact, doubling to 250 MBps with load balancing is not that fast either. On some of our VM hosts we have dual 2 Gbps HBAs going to our SANs (500 MBps total) and we still have performance issues.
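For reference, the throughput figures above follow from dividing the line rate by 8 bits per byte (this ignores Ethernet/iSCSI protocol overhead, so real usable throughput is somewhat lower):

```python
def link_rate_mb_per_s(gbps):
    """Convert a link rate in gigabits/s to megabytes/s (decimal units)."""
    return gbps * 1000 / 8

print(link_rate_mb_per_s(1))  # single GbE link -> 125.0 MB/s
print(link_rate_mb_per_s(2))  # 802.3ad pair    -> 250.0 MB/s
print(link_rate_mb_per_s(4))  # dual 2 Gbps HBA -> 500.0 MB/s
```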

Now, I just assumed you would have at least two NICs on the ESX server, if not more (just as ryder0707 suggested), depending on your expected network traffic and security requirements. We have one set of VM hosts that use iSCSI; the hosts have 4 NICs, two dedicated to iSCSI traffic and two dedicated to "network traffic". We have two switches, one for network traffic and one for iSCSI traffic. The iSCSI gear has statically assigned IP addresses, all in the same subnet, with no routing whatsoever.

For IP addressing I would use DHCP and dynamic DNS. This way, when/if the server's IP address changes, the DNS entry is changed as well. However, you do have to put a low TTL on the DNS entry.
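On BIND-style servers the same effect can be scripted with nsupdate; a sketch that rewrites a host's A record with a short TTL so clients re-resolve quickly (the hostname, IP, and TTL here are hypothetical, and a Windows DHCP + dynamic DNS setup does the equivalent automatically):

```python
def nsupdate_commands(fqdn, ip, ttl=60):
    """Generate input for BIND's nsupdate tool: drop the old A record
    and add a replacement with a low TTL."""
    return "\n".join([
        f"update delete {fqdn} A",
        f"update add {fqdn} {ttl} A {ip}",
        "send",
    ])

# Feed this text to `nsupdate` (e.g. via stdin) against the zone's master
print(nsupdate_commands("server1.example.local.", "10.0.2.10"))
```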

    Author Comment

    Sorry for the delay in replying. I have been away.

    giltjr: "We have one set of VMHosts that use iSCSI and the hosts have 4 NIC's, two dedicated for iSCSI traffic and two dedicated for "network traffic".  We have two switches, one for network traffic and one for the iSCSI traffic"

Wouldn't that leave the iSCSI and network switches as single points of failure? I wanted to avoid those as much as possible and make everything redundant, though I understand budget constraints sometimes don't allow true redundancy.

I guess in order to do this I would need two switches, each attached via dual Gigabit Ethernet connections to the iSCSI target (QNap), the ESXi host, and another 48-port switch that the workstations are connected to.
QoS/load balancing would be used to prioritise iSCSI traffic. This would allow the QNap to be used as a file server (NFS) and FTP server, taking CPU/network bandwidth load off the ESXi host, as requests could go directly to the NAS instead of through a VM running on the ESXi host.

What are your thoughts on this? Could someone point me in the right direction for configuring the switches, as I haven't had much experience with managed switches?
    LVL 24

    Accepted Solution

    See Figure 1 on
That is for a different storage model, but the concept is still the same.
It should give you a high-level idea of how to set up the physical switches with redundancy/HA.
    LVL 57

    Assisted Solution

Yes, if you had one network switch and one iSCSI switch, then you would have a single point of failure for the network and another for the iSCSI side.

The document ryder0707 provided a link to is a good one to start with.

The main reason we have dual network and dual iSCSI switches is that the servers using the iSCSI device are blade servers: we have 3 blade centers with 10 servers each. Some of the servers run ESX with multiple VM guests, in some instances 20 virtual machines on a physical server, and some servers are standalone.

So we have quite a few servers (both physical and logical) accessing the iSCSI devices.

    Author Closing Comment

    Thanks for the help guys
