Garry Shape (United States) asked:
VMware 5.5 - can't migrate VM between two hosts

I'm trying to migrate a VM from one host to another, and both hosts are connected to the same Distributed Switch. However, the migration fails with the error: "The network interface 'Network adapter 1' uses network 'test-dvSwitch1-lacp', which is not accessible."
The port groups are named the same across the hosts, so I'm not sure what the cause could be.
It does this no matter which network I make the VM a part of.
At this point my workaround is to change the VM's network to "VM Network", migrate it to the other host, then set it back to the desired VLAN, power on, and it's good to go.
I just don't know why I can't keep it on the same VLAN during the migration if the port group exists on both hosts...
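One way to narrow this down (a sketch, assuming SSH/shell access to both ESXi hosts; the port group name comes from the error above) is to compare what each host actually knows about the distributed switch:

```shell
# Run on BOTH the source and destination ESXi host and compare output.
# Lists the distributed switches this host participates in, with the
# VDS ID (UUID), uplinks, and client port groups backing them:
esxcli network vswitch dvs vmware list

# The "not accessible" error typically means the destination host has
# no usable dvPort for that port group -- e.g. the host was never fully
# added to the VDS, or it has no uplink assigned on that switch.
```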
[Screenshot attached: HLSTxlj-1-.png]
SOLUTION (Zephyr ICT, Belgium)
[Solution text available to Experts Exchange members only.]
ASKER (Garry Shape):
Yeah I'm not sure. So go over the settings on the hosts?
ASKER CERTIFIED SOLUTION
[Solution text available to Experts Exchange members only.]
OK, after I removed it, I re-added the host to the switch, choosing the host and its only available NIC. Then it let me assign a port group, which I did, and I also moved the three VMs on the host to the destination port group. Now I think the error is resolved, and the VDS Status column shows "Up".
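After re-adding a host to the VDS like this, the attachment can be double-checked from that host's shell (a sketch; commands assume ESXi 5.x):

```shell
# The re-added uplink should now appear under the switch entry:
esxcli network vswitch dvs vmware list

# Confirm the physical NIC chosen as the uplink is actually link-up:
esxcli network nic list
```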
OK, sometimes these UUIDs can have issues, I've learned; if they don't match up, something somewhere may have gotten out of sync...

Test a few vMotions and make sure no more errors happen ... case closed ;-)
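The UUID comparison mentioned above can be done from each host's shell (a sketch; run it on every host attached to the switch and compare the values):

```shell
# Each host stores a local copy of the distributed switch's VDS ID;
# the value should be identical on every member host:
esxcli network vswitch dvs vmware list | grep -i "VDS ID"
```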
Ok I think it's good, still checking though.
I can migrate between both hosts.
What I had done after my last step was migrate the Management Network VMkernel out of the Distributed Switch; it was sitting in a specific port group. I migrated it back to vSwitch0, and I think that may have fixed the final thing.
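To confirm where each VMkernel interface ends up after a move like this, the host's interface list can be checked (a sketch, from the ESXi shell):

```shell
# Shows each vmk interface with the portset/port group it is attached
# to -- seeing vmk0 on vSwitch0 here would confirm the management
# interface is back on the standard switch:
esxcli network ip interface list
```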
The only issue I have now is that occasionally a VM won't power on, getting the same "network not accessible" error.
The workaround on these, apparently, is to change the network to "VM Network", power on, then change it back to the preferred network while the VM is on, and it picks up the new network setting. I can apparently reboot it from the console leaving that setting... so I'm not sure why powering on from a powered-off state would cause that error.
I'd have to take a look at the network setup, it's getting hard to follow :-)

You could start by leaving the vmkernels on standard vswitches for now and try to get it all working smoothly before moving them to distributed vswitches ... If feasible of course.
Well, I think at this point a support call with VMware should help me finish the last lap.


I think on the last issue it's a matter of removing the network card and then re-adding it.
That can sometimes help, yes. Make sure you're (almost always) using the vmxnet3-type vNIC (if you're not talking about the physical ones).
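One way to confirm a VM's vNIC type from the host shell (a sketch; `<Vmid>` is a placeholder resolved by the first command):

```shell
# Find the VM's numeric ID in the inventory:
vim-cmd vmsvc/getallvms

# Dump its virtual devices; a vmxnet3 adapter shows up as a
# VirtualVmxnet3 device, an Intel e1000 as VirtualE1000:
vim-cmd vmsvc/device.getdevices <Vmid>
```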
So vMotion VMkernels should be on the standard vSwitches?
No, I was just saying that for troubleshooting purposes; sometimes it's better to start anew with a simpler setup and evolve from there.
OK, I'm still learning the product, so I'm trying to figure out the differences and best practices, what VMkernels are, etc.
Pber:
No comment has been added to this question in more than 21 days, so it is now classified as abandoned.

I have recommended this question be closed as follows:

Split:
-- garryshape (https:#a40898444)
-- Zephyr ICT (https:#a40898241)


If you feel this question should be closed differently, post an objection and the moderators will review all objections and close it as they feel fit. If no one objects, this question will be closed automatically the way described above.

Pber
Experts-Exchange Cleanup Volunteer