We have an on-premises Hyper-V host running Windows Server 2019 Standard with a single Windows Server 2012 guest. We started Azure Site Recovery (ASR) in order to migrate from this on-premises host to Azure.
The initial replication seemed to go well, but we have not completed the migration because we are concerned about some error messages we are receiving.
Exactly one hour apart, we receive this error in Event Viewer on the host: "Reference point create operation failed. Error 33800."
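In case it helps anyone reproduce what we're seeing, the hourly failures can be pulled from the host's event log with something like the PowerShell below. We're assuming they land in the Hyper-V VMMS Admin channel; adjust the log name if yours show up elsewhere.

    # Pull recent "Reference point create operation failed" events from the host.
    # Assumption: the errors are logged to the Hyper-V-VMMS Admin channel.
    Get-WinEvent -LogName 'Microsoft-Windows-Hyper-V-VMMS-Admin' -MaxEvents 200 |
        Where-Object { $_.Message -match 'Reference point create operation failed' } |
        Select-Object TimeCreated, Id, LevelDisplayName, Message |
        Format-Table -Wrap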
Initially we had some corrupted (?) VSS writers; after we restarted the recommended services those errors disappeared, but we are still receiving Error 33800 every hour.
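For reference, this is roughly how we check writer state after restarting services (standard tooling, run elevated on the host; the note about vmms hosting the Hyper-V writer is our understanding, not something from the error itself):

    # Show every VSS writer, its state, and its last error on the host.
    vssadmin list writers

    # The Hyper-V VSS writer is hosted by the vmms service; confirm it is running.
    Get-Service -Name vmms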
When we review the Replication Health, we see this Warning: "Time duration since the last successful application checkpoint has exceeded the warning limit for the virtual machine."
A. This seems to be a stale message, but using Reset Statistics in the Replication Health screen doesn't make the warning go away.
B. Other statistics for the replication look very good (e.g., currently 55 out of 56 successful replication cycles).
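For completeness, the same health and statistics the Replication Health screen shows can be pulled (and reset) from PowerShell on the host. This is a sketch using the standard Hyper-V replication cmdlets, with the VM name as a placeholder; our assumption is that the ASR-protected guest reports through the same counters the GUI reads.

    # Replication state and health summary for the protected guest (name is a placeholder).
    Get-VMReplication -VMName 'GUEST-VM'

    # Detailed statistics (the counters behind the Replication Health screen).
    Measure-VMReplication -VMName 'GUEST-VM' | Format-List *

    # Equivalent of "Reset Statistics" in the Replication Health screen.
    Reset-VMReplicationStatistics -VMName 'GUEST-VM'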
And gosh, I hate to mention these other screwy things, because I don't THINK they're the cause of this specific problem, but here goes:
1. An application on the guest is constantly creating EDI transaction reports (several per minute). They are not needed, and we're researching how to disable the process but have not been able to yet. As a result, each 5-minute replication interval has roughly 400 MB of text files to replicate. We periodically (monthly) go in and compress them to save space (a rough sketch of that cleanup is after this list).
2. The guest OS is a domain controller. We don't really need it at this juncture, but it is what it is.
3. We've disabled our backup service to see whether that helps with the VSS-related errors. It hasn't so far.
4. This whole on-premises virtual machine is the result of a failed RAID array on the original physical server; fortunately, we had made virtual backups using Cloudberry. When we started the VM, the guest OS (Server 2012) indicated that it needed Windows activation. We have back-burnered that until we can resolve these replication errors and get the whole thing migrated to Azure.
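Regarding item 1 above, here is roughly what our monthly cleanup of the EDI report files looks like. The path and the age cutoff are placeholders, and it assumes PowerShell 5+ (Compress-Archive) is available on the guest, so treat it as a sketch rather than our exact job:

    # Compress EDI text reports older than 30 days into a monthly archive,
    # then delete the originals. Path and retention window are placeholders.
    $reportDir = 'D:\EDI\Reports'
    $cutoff    = (Get-Date).AddDays(-30)
    $old       = Get-ChildItem -Path $reportDir -Filter '*.txt' -File |
                 Where-Object { $_.LastWriteTime -lt $cutoff }

    if ($old) {
        $archive = Join-Path $reportDir ("EDI-{0:yyyy-MM}.zip" -f (Get-Date))
        Compress-Archive -Path $old.FullName -DestinationPath $archive -Update
        $old | Remove-Item -Force
    }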
We've researched this heavily online, but have not had any luck with the suggestions we found. We really need to get this replication cleaned up and the VM migrated to Azure.