I am trying to test two servers that each have 2x 1Gb and 2x 10Gb ports so I can move VMs from one host to the other using vMotion (no shared storage). The problem is that I don't have a 10Gb switch, so I've been managing the servers through the 1Gb ports connected to a switch, while the 10Gb ports are connected directly from one server to the other, with no switch in between.
On the 10Gb cards I set up an IP address and gateway (which can't really route anywhere, since the ports are connected directly to each other), and I manage the servers with vCenter, which reaches them through the switch over the 1Gb ports. I enabled vMotion on the 10Gb ports and tried to vMotion a VM. It got stuck.
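A quick way to double-check which vmkernel adapters actually have the vMotion service enabled could look something like this; it's only a sketch assuming pyVmomi is available, and the vCenter hostname and credentials are placeholders:

```python
# Sketch: list each host's vmkernel adapters and whether vMotion is enabled on them.
# Assumes pyVmomi; hostname/credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab setup with self-signed certs
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    net_cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
    selected = set(net_cfg.selectedVnic or [])
    for vnic in net_cfg.candidateVnic or []:
        enabled = "vMotion enabled" if vnic.key in selected else "-"
        print(host.name, vnic.device, vnic.spec.ip.ipAddress, enabled)
view.Destroy()
Disconnect(si)
```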
If I do the same on the 1Gb ports, enabling vMotion on them, it works fine. I don't know whether it's because vMotion can't reach a gateway, but since everything is in the same subnet it shouldn't need a gateway anyway.
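Just to illustrate the subnet point with example addresses (not my real ones): two vmkernel IPs in the same /24 are directly reachable on the local segment, so no gateway should be involved:

```python
# Example only: both 10Gb vmkernel IPs sit in the same subnet, so traffic between
# them stays on the directly connected link and never needs the gateway.
import ipaddress

host1_vmk = ipaddress.ip_interface("192.168.100.1/24")  # placeholder IP, host 1
host2_vmk = ipaddress.ip_interface("192.168.100.2/24")  # placeholder IP, host 2
print(host1_vmk.network == host2_vmk.network)  # True -> same L2 segment, no GW used
```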
I've heard that vMotion is actually a two-step process:
the cold migration will copy the VMDK files and will use the normal virtual network, not the vMotion network
vMotion itself only copies the memory state over the network to the machine on the new host; no VMDK files are involved in this step
Is this right?
Even so, I should be able to use the directly connected network to migrate the VMDKs or vMotion the VMs to a different host. Why am I encountering this problem only when connecting the NICs directly to each other?
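For reference, the operation I'm after is the combined compute + storage migration; a rough sketch of how it could be kicked off with pyVmomi (placeholder names, object lookups omitted) would be something like this:

```python
from pyVmomi import vim

def migrate_vm(vm, dest_host, dest_datastore, dest_pool):
    """Relocate a running VM's compute and its VMDKs to another host and datastore
    (what the vSphere client calls changing both compute resource and storage)."""
    spec = vim.vm.RelocateSpec()
    spec.host = dest_host            # target ESXi host (vim.HostSystem)
    spec.datastore = dest_datastore  # target datastore (vim.Datastore)
    spec.pool = dest_pool            # resource pool on the target host
    return vm.RelocateVM_Task(spec=spec,
                              priority=vim.VirtualMachine.MovePriority.highPriority)
```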
I was planning to get some InfiniBand cards, as they are a cheap way to get 10-40Gb without a switch, but if this is a limitation and I can't really interconnect the hosts properly and vMotion between them, it may not make much sense.