basilthompson (South Africa) asked on 22 Mar 12:

10GB NIC in VM

Hi All

I have a NIC that supports advanced VM functionality - it even has a "Virtualization Server" profile which sets all the advanced settings to values suited to Hyper-V. The problem is that I am not sure whether I should share the interface with the host/management OS. I have a 20TB array attached to the VM as a pass-through disk, plus a 20TB array on the host server, which I use to back up the array in the VM - so having the adapter shared is great because I get 10Gb between the machines.

The thing is, I would like to have two virtual machines, each with its own 20TB array - for example \\fserver1 and \\fserver2 - but I only have a single 10Gb NIC. Is it advisable to attach the same NIC to both VMs? I can't see any problem with it, but I also can't find any online documentation covering it. I imagine that if I had three 10Gb NICs in the host I could use one for the host and one for each VM - but then I have to pass all traffic through the physical 10Gb switch, whereas using the virtual switch makes more sense to me. Also, if I had only two 10Gb NICs and dedicated those to the two VMs, would I then share both interfaces with the host, so that when I want to back up either VM to the host array I get 10Gb connectivity without having to use the physical switch?
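For reference, here is the rough arithmetic I am working from (the ~10% protocol overhead allowance is just my assumption, not a measurement):

```python
# Rough capacity of a single 10Gb port (illustrative only; assumes 8 bits
# per byte and ~10% protocol overhead, which is an assumption).
link_gbps = 10
wire_rate_mb_s = link_gbps * 1000 / 8   # 1250 MB/s theoretical maximum
usable_mb_s = wire_rate_mb_s * 0.90     # ~1125 MB/s after assumed overhead

print(f"10Gb wire rate : {wire_rate_mb_s:.0f} MB/s")
print(f"Assumed usable : {usable_mb_s:.0f} MB/s")
```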
kevinhsieh (United States):

It is not at all clear how you are connecting to storage, or how many arrays you have. It is unlikely that your storage can get even close to filling a single 10 Gb connection, and I doubt that you would often fill even 1 Gb Ethernet if you are using iSCSI with MPIO.

Since you seem to be using pass-through disks for the VMs, the VMs are not using the network to connect to storage, so you could even put them on 1 Gb physical connections and that would not be a bottleneck to the storage.

My suggestion is that you can either dedicate a NIC to the VMs or share it with the host; the VMs should all use the same NIC either way. Perhaps you can better describe how many storage arrays you have, what is connected to them and how, and where you would like to optimize throughput. With 10 Gb Ethernet, the chances are that your throughput through the virtual switch or the physical switch will be limited by other factors, such as file systems and physical disks.
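As a rough sketch of that last point (the 400 MB/s storage figure below is only a placeholder, not your measurement - substitute whatever you actually benchmark):

```python
# Compare Ethernet wire rates with a measured storage throughput to see which
# side is the likely bottleneck. The storage figure is a placeholder.
LINKS_MB_S = {"1 GbE": 1 * 1000 / 8, "10 GbE": 10 * 1000 / 8}

def likely_limiter(storage_mb_s: float) -> None:
    for name, link_mb_s in LINKS_MB_S.items():
        limiter = "the network link" if link_mb_s < storage_mb_s else "disks/file system"
        print(f"{name}: {link_mb_s:.0f} MB/s link vs {storage_mb_s:.0f} MB/s storage "
              f"-> limited by {limiter}")

likely_limiter(400.0)   # placeholder storage throughput in MB/s
```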
basilthompson (Asker):

Yeah - I think I didn't describe it well enough - I guess that's because I am not exactly sure of the best path. OK, let's try to explain it in more detail. I have a 36-bay Supermicro chassis with redundant SAS2 expanders; in that system I have 24 disks set up as two RAID 50 arrays, so that's 12 in each array. They are 2TB disks - the Seagate Enterprise ES range. In the chassis I have an Intel server motherboard with a Xeon E5645 CPU and 12GB RAM. I then have a quad-port Intel server adapter, plus a 10Gb server adapter (fibre). I have Server 2008 R2 SP1 as the host OS. One array is taken offline so I can attach it to the VM. The VM also runs Server 2008 R2 SP1 (integration services are installed). So, in initial tests - before attaching the one array to the VM - I can copy from one array in the host to the other array at around 800MB/s, as both arrays can handle the read/write. Testing read speeds on either array I get about 1200MB/s.
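Putting those test numbers next to what a single 10Gb hop can carry (the overhead allowance is only an assumption on my part):

```python
# Measured local array speeds vs. what one 10Gb network hop could carry.
# The ~10% overhead allowance is an assumption, not a measurement.
usable_10gb_mb_s = 10 * 1000 / 8 * 0.90   # ~1125 MB/s
copy_mb_s = 800    # measured array-to-array copy (write-limited)
read_mb_s = 1200   # measured single-array sequential read

print(f"Copy {copy_mb_s} MB/s vs ~{usable_10gb_mb_s:.0f} MB/s usable -> the link has headroom")
print(f"Read {read_mb_s} MB/s vs ~{usable_10gb_mb_s:.0f} MB/s usable -> the link becomes the cap")
```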
So when I set the one disk offline so I can use it as a pass-through disk for the VM (as a second disk, not an OS/boot disk), I can't read that array in the host anymore - unless I access it through the VM as a share.
So my host is VMHOST and my VM is VMGUEST1. I use the host array to back up the VM array, and in an environment where I only have one VM on the host, I imagine the best approach would be to create an external virtual network on the 10Gb NIC and share it with the host - that way the host sits on the same virtual switch as the guest, both at 10Gb, and I can back up the VM (nightly sync) to the host array at the fastest possible speed. I don't use the quad adapter - it's disabled for the purpose of this explanation. So in that environment I am confident it's configured in the best possible way to get the fastest throughput between host and guest - am I correct? Or should I have two 10Gb NICs - one for the host and one for the guest (so I don't share the one interface with the host) and still get 10Gb between host and guest - except that way the data has to travel through a physical switch, whereas with one interface in a shared configuration I don't even need a physical switch. I obviously do have a physical switch for connectivity to the other users on the network - about 100 of them - so I think the 10Gb NIC is advantageous over the 4-port NIC - do you agree?
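To put the backup question in perspective, here is the worst-case time for a full pass over a 20TB array at different effective rates (only the 800MB/s copy rate is measured; the link rates are assumed, and a nightly sync would of course move far less than the full 20TB):

```python
# Worst-case duration of a full 20TB pass at different effective rates.
# Only the 800 MB/s figure is measured; the link rates are assumptions.
ARRAY_MB = 20 * 1000 * 1000   # 20 TB expressed in MB (decimal units)
RATES_MB_S = {
    "1 GbE wire rate": 125,
    "10 GbE usable (assumed ~90%)": 1125,
    "Measured array-to-array copy": 800,
}

for label, rate in RATES_MB_S.items():
    hours = ARRAY_MB / rate / 3600
    print(f"{label:30s}: ~{hours:.1f} hours")
```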
I am getting confused at the point where I want to set up another VM and use the other array as a pass-through disk for that VM. The thing is, if I want both VMs available to the (physical) network at 10Gb for the users, and I want the host to be able to access both VMs at 10Gb, I think I would have to create a single external virtual network using the 10Gb NIC and then enable it to be shared with the host - that way I have 10Gb connectivity to either VM from the host, and both VMs have 10Gb to the network. Would that be the best setup? Or would I need three 10Gb NICs? Or two?

With three I can work out the setup easily enough - I think each VM would get its own external NIC and the host would use the third one, no sharing needed - that way each interface is less utilised and should provide the optimal setup - but I don't have three and can't afford that route at the moment. I could get another 10Gb module, though, as the card I have is a dual port - then I would have two 10Gb ports - but how would I best set it up then? If I assign one to each VM, then both VMs have the best configuration to the network - but the host can't access the VMs without sharing one of them - and if I do that, will the host use the virtual switch when accessing the one VM and the physical switch when accessing the other? How can I set it up so the host can access either VM without having to use the physical switch? Is that even possible? Could I create another internal-only network for access between the host and guests?

I have made a few statements here, but that's just what I think - so please correct me wherever I am wrong and advise accordingly. I am basically after advice on the different setups possible and the advantages of each approach, so I can take the best option for the hardware I have.

Thanks in advance.

By the way - what's happened to Experts Exchange? When I posted this question there were no tags available for Hyper-V - only Virtual Server. Why are there fewer options now?
I forgot to add that by adding the second VM and using the second array, I would at the same stage install another 12 disks in the remaining 12 slots in the chassis, so I can then use that array for the host. The setup would then be HOST, GUEST1 and GUEST2 - so what would be the best possible network setup using one NIC? Then what would be best using two? And in the unlikely event that I could afford two dual cards and set up three ports, what would be the best setup with three 10Gb NICs? Obviously the servers are being accessed all the time by users, so I want the setup done in a way that the interfaces are never under too much load.
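Here is my rough worst-case arithmetic for the 1/2/3-NIC options, assuming the host and both guests all push traffic over the wire at the same time (and, as far as I understand it, traffic that stays on the same virtual switch between the host and a local guest is switched in software, so it doesn't consume physical-port bandwidth at all):

```python
# Worst-case per-stream share of physical 10Gb bandwidth if the host and both
# guests all send over the wire at once. Real sharing is dynamic rather than
# fixed slices, and host<->guest traffic on one virtual switch stays off the wire.
WIRE_MB_S = 1250    # one 10Gb port, theoretical
ACTIVE_STREAMS = 3  # HOST + GUEST1 + GUEST2

for nics in (1, 2, 3):
    streams_per_nic = -(-ACTIVE_STREAMS // nics)   # ceiling division
    print(f"{nics} x 10Gb port(s): worst case ~{WIRE_MB_S / streams_per_nic:.0f} MB/s per stream")
```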

Thanks again, in advance.
ASKER CERTIFIED SOLUTION
James Haywood (United Kingdom):
