I was reading the above-mentioned article, and it appears to suggest avoiding dynamic disks if at all possible. Specifically, it advises against dynamically expanding disks because of reduced performance caused by the added expansion overhead and by fragmentation.
I usually put the guest VM's OS volume on a fixed disk, and the same goes for virtual disks holding databases. However, for growing company file storage, I usually use a dynamically expanding disk with a cap imposed.
I will be installing a new Windows Server 2016 Hyper-V host with two VMs. On one of the two VMs I was going to assign the guest OS (80 GB) and a database virtual disk (150 GB) as fixed virtual disks. Their company data is currently at 330 GB and growing, and I was going to assign it as a dynamically expanding disk with a cap of 800 GB. As a rough estimate, the amount of data will probably double within 4 or 5 years.
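To sanity-check the 800 GB cap against that growth estimate, here is a quick back-of-the-envelope projection (the five-year doubling period is my own assumption based on the rough estimate above):

```python
# Rough capacity projection: assume the data compounds so that it
# doubles every `doubling_years` years (my assumption, not a measurement).
current_gb = 330       # company data today
doubling_years = 5     # assumed doubling period
cap_gb = 800           # proposed disk cap

for year in range(1, 8):
    projected = current_gb * 2 ** (year / doubling_years)
    status = "OK" if projected <= cap_gb else "over cap"
    print(f"year {year}: {projected:.0f} GB ({status})")
```

At that rate the data hits about 660 GB at year 5 and only exceeds the 800 GB cap around year 7, so the cap looks adequate for the stated horizon.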
After reading this article, I may want to consider assigning this virtual disk as fixed (800 GB) and be done with it. I understand a large fixed disk has drawbacks when it comes to moving or restoring the entire drive, should that ever be necessary. However, fragmentation is just as important an issue over the near and distant future. Also, I purchased this server with more than enough overall disk space to last many years, so disk space is not really a concern.
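If I do go fixed, creating the disk up front is a one-liner with the Hyper-V PowerShell module. A rough sketch (the path here is just a placeholder for my setup):

```powershell
# Create the company-data disk as a fixed VHDX (allocates the full 800 GB up front).
New-VHD -Path 'D:\VHDs\CompanyData.vhdx' -SizeBytes 800GB -Fixed

# For comparison, the dynamically expanding version with the same 800 GB cap:
# New-VHD -Path 'D:\VHDs\CompanyData.vhdx' -SizeBytes 800GB -Dynamic
```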
The article also mentioned the following referencing Dynamically Expanding VHDX:
"Use a defragmentation tool once a week (by the way, this can be done while VMs are running) and keep plenty of disk space (over 30%) free to provide plenty of free disk space for growth and as defrag work space."
I am not sure if it refers to the built-in Windows defrag utility or some sort of third-party option.
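If it does mean the built-in utility, I assume something like the following, run inside the guest, would cover the weekly defrag the article recommends (the drive letter, task name, and schedule are placeholders for my setup):

```powershell
# Run the built-in defragmenter against the data volume inside the guest.
Optimize-Volume -DriveLetter D -Defrag -Verbose

# Or register it as a weekly scheduled task (Sunday 3 AM here; adjust to taste):
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-NoProfile -Command "Optimize-Volume -DriveLetter D -Defrag"'
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 3am
Register-ScheduledTask -TaskName 'WeeklyDefrag' -Action $action -Trigger $trigger
```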
I am just wondering what your opinions are about this.