88D1T
asked on
Hyper-V Live Migration failure 0x80041024

I have two nearly identical servers, a Dell R520 and an R530, both running Windows Server 2012 R2 and Hyper-V with a few guest VMs in a non-clustered environment.  Both hosts support replicas.  After a few intermittent lockups on HostA, I moved all machines to HostB using Live Migration.  It worked like a charm.  After a firmware update, HostA appears to be stable, so I want to move the guest VMs back to their original host using the same Live Migration (the Shared Nothing Live Migration option).

After running through the wizard, the process fails with 'There was an error during move operation.  Migration operation on vmS01 failed.'  Additionally, the System event log records Event ID 21024, 'Virtual machine migration operation for vmS01 failed at migration source HOSTB', and Event ID 16000, 'The Hyper-V Virtual Machine Management service encountered an unexpected error: Provider is not capable of the attempted operation (0x80041024)'.
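For reference, here is what I understand to be the PowerShell equivalent of the wizard's shared-nothing move, plus a way to pull the related events.  I ran the wizard, so this is a sketch using my VM/host names rather than the exact command I used; VMMS also writes to its own Admin channel, which is often more detailed than the System log:

# Attempt the shared-nothing live migration from the source host (HostB)
Move-VM -Name "vmS01" -DestinationHost "HOSTA" -IncludeStorage `
    -DestinationStoragePath "D:\hyper-v\vmS01"

# Pull the errors the wizard reported from the System log
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 21024, 16000 } |
    Format-List TimeCreated, Id, Message

# Check the Hyper-V VMMS Admin channel for more detail
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-Hyper-V-VMMS-Admin' } |
    Select-Object -First 10 TimeCreated, Id, Message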

My research took me to several support sites (Microsoft, TechNet, 4sysops, TechGenix, and here of course) and seemingly knowledgeable individuals with similar issues, but none of the suggestions resolved the issue.  Any assistance is greatly appreciated!

Here is what I have confirmed/done to try to fix the problem:
1) Confirmed Live Migrations are enabled and set to use any available network (see the PowerShell check after this list)
2) Confirmed both hosts use CredSSP with Compression; also tried Kerberos
3) Confirmed Processor compatibility mode is enabled
4) Confirmed resources are available (minimal disk, processor, RAM required, plenty available)
5) VM being moved is not a DC
6) Restarted VMMS on both hosts
7) Restarted both hosts
8) The Hyper-V and Hyper-V Management Clients firewall rule groups are all enabled and allowed
9) Firewall permits hosts to ping each other. No supplemental firewall enabled.
10) Computer delegation has been set on both hosts to use Kerberos for CIFS and the Microsoft Virtual System Migration Service (this may not be required as the hosts are both DCs and I am starting the migration on the source, but I added it anyway; see the delegation check after this list)
11) The migration is started from Hyper-V Manager while logged in to HostB, not from PowerShell or from HostA's Hyper-V Manager
12) The names of the virtual switches on the hosts are the same
13) Single domain, domain admin signed in to start the move
14) A replica of a different VM is running on HostA and Reverse Replication changes are being sent to HostB
15) The path to the VHDX on HostA will change from C:\hyper-v\vmS01\ to D:\hyper-v\vmS01\
16) Able to perform a Storage Migration move from HostB's C: drive to its D: drive
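The PowerShell check mentioned in item 1: a quick way to confirm the live migration settings on each host (run on both HostA and HostB).  These are standard Hyper-V module cmdlets; the values are what my configuration should report:

# Show the current live-migration configuration for this host
Get-VMHost | Select-Object VirtualMachineMigrationEnabled,
    VirtualMachineMigrationAuthenticationType,
    VirtualMachineMigrationPerformanceOption,
    UseAnyNetworkForMigration

# Enable migrations and change the authentication protocol if needed
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
Set-VMHost -UseAnyNetworkForMigration $true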
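And the delegation check from item 10: the constrained delegation entries can be read from AD with the ActiveDirectory module.  The domain name below is a placeholder; check the account of each host:

Import-Module ActiveDirectory

# List the services HostB is allowed to delegate to; for Kerberos live
# migration this should include cifs and the migration service on HostA
Get-ADComputer "HOSTB" -Properties msDS-AllowedToDelegateTo |
    Select-Object -ExpandProperty msDS-AllowedToDelegateTo
# Expected entries (domain.local is a placeholder):
#   cifs/HOSTA.domain.local
#   Microsoft Virtual System Migration Service/HOSTA.domain.local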
88D1T

ASKER
This might be fixed.  I restarted VMMS again on each host, and after the last step in the wizard I was prompted to select a different virtual switch.  The switches used to be named uniquely per host (vsHostA and vsHostB), but after some research I renamed them to match: vsHost (even though the switch names were mismatched when I moved from A to B, and I was simply prompted to select a new switch then).  It appears I had to restart VMMS after renaming them; even though the virtual switch names were now the same, the move process brought up an error screen saying vsHostB didn't exist, which allowed me to select a new virtual switch, vsHost.
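For anyone hitting the same thing, this is roughly what the rename-and-restart amounts to in PowerShell (the switch names here are from my setup; Rename-VMSwitch and Restart-Service are standard cmdlets):

# Check the virtual switch names on each host; mine were vsHostA/vsHostB
Get-VMSwitch | Select-Object Name, SwitchType

# Rename the switch to the common name (run the equivalent on each host)
Rename-VMSwitch -Name "vsHostB" -NewName "vsHost"

# Restart the Virtual Machine Management service so the rename is picked
# up; this restarts the management service only, not the running VMs
Restart-Service -Name vmms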
ASKER CERTIFIED SOLUTION
Philip Elder

This solution is only available to Experts Exchange members.
88D1T

ASKER
Thank you for your comments.  While there are plenty of valid arguments for the best practice of running only the applicable services on a DC, that doesn't prevent the Hyper-V role from being installed or running successfully on one.  If they were that incompatible, you wouldn't be allowed to install those roles together.  Best practice isn't the only practice.  Depending on one's environment (e.g., making do with what little hardware you have and using virtualization rather than having the budget to purchase another server), it was an acceptable risk.  As environments change and expand, DC roles and services can be migrated, which we have done: one to a VM and another to AWS, although AD services still remain on HostA.  I incorrectly stated that both hosts were DCs; it is just HostA.  I'm more leery of removing that role from HostA and breaking something else, so I will just wait for the next hardware refresh and the issue will resolve itself.

Live Migration, at least through the GUI, does not need the virtual switches to be named the same.  The wizard will prompt when the switch isn't found; otherwise I wouldn't have been able to migrate from A to B.  Why the wizard didn't prompt initially when moving from B to A is still a mystery, but at least it is prompting and working now.
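For completeness: when moving from PowerShell instead of the GUI there is no interactive prompt, but the same switch mismatch can be handled with Compare-VM before Move-VM.  A sketch, again using my VM and host names; the Message filter on 'switch' is an assumption about how the incompatibility is worded, so inspect the report on your own system first:

# Generate a compatibility report for the planned shared-nothing move
$report = Compare-VM -Name "vmS01" -DestinationHost "HOSTA" `
    -IncludeStorage -DestinationStoragePath "D:\hyper-v\vmS01"

# Inspect what would block the move (e.g., a missing virtual switch)
$report.Incompatibilities | Format-Table MessageId, Message -AutoSize

# Reconnect any adapter that references a missing switch to an existing
# one, then run the move using the fixed-up report
$report.Incompatibilities |
    Where-Object { $_.Message -match 'switch' } |
    ForEach-Object { $_.Source | Connect-VMNetworkAdapter -SwitchName "vsHost" }
Move-VM -CompatibilityReport $report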