• Status: Solved
  • Priority: Medium
  • Security: Private
  • Views: 87

Windows Server 2016 Hyper-V VM & Disk Defrag

I just added a new WS2016 Hyper-V host with a VM that has an added 600GB disk (dynamic VHDX).  I want to know if it's a problem to schedule a disk defrag on all disks on the Hyper-V host (including the disk holding the VHDX files), and then also to defrag all the disks within the VM (including the 600GB dynamic disk).  Basically, will it be a problem scheduling weekly defrags of all disks for the Hyper-V host and within each VM as well?
0
Asked by: cmp119
2 Solutions
 
David Johnson, CD, MVP (Owner), commented:
A weekly defrag of both is a good idea if you are using spinning disks.  I've never had a problem doing this.
0
 
John, Business Consultant (Owner), commented:
I think you will find that Windows does an adequate job by itself. Virtual machines tend to handle defragmentation on their own, and unless Hyper-V tells you a given machine needs to be defragged, you do not need to bother.
0
 
cmp119, IT Manager (Author), commented:
This Hyper-V host has six 900GB conventional disks configured in a RAID 6 array.

I see that both the host and the VM are set up to optimize the drives weekly, but both have now been set up for over two weeks.  When I review the status in Optimize Drives, the "Last Run" column indicates "Never Run" even though two weeks have passed.  If I am reading this right, weekly auto-optimization has been configured but has never run.  I would think that if it ran on a weekly schedule, it would display the date it last ran for each drive.
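For what it's worth, the schedule can also be checked from the command line instead of the GUI. A rough sketch (run from an elevated PowerShell prompt; the built-in maintenance task lives under \Microsoft\Windows\Defrag, and the drive letter below is just an example):

```powershell
# Inspect the built-in Optimize Drives scheduled task:
# LastRunTime and LastTaskResult show whether it has ever fired.
Get-ScheduledTask -TaskPath '\Microsoft\Windows\Defrag\' -TaskName 'ScheduledDefrag' |
    Get-ScheduledTaskInfo

# Analyze fragmentation on a volume without actually defragmenting:
defrag C: /A

# Run a one-off optimization manually to confirm it completes:
defrag C: /O /U /V
```

If the manual run succeeds but the task still shows no last-run time, the problem is with the task/trigger rather than the defrag engine itself.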

The following image is a screenshot of the VM's drive optimization.  The host displays the same result.

ScheduledDiskDefrag.jpg
0

 
John, Business Consultant (Owner), commented:
I don't think you need to bother with Disk Defrag on RAID 6.

Optimize Drives is for SSDs, and you have conventional drives.
0
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
I suggest not defragmenting. The VHDX files get huge, and what's the point?

When we set things up, we create the OS VHDX files as FIXED VHDX files at 75GB each. Then the data VHDX that carries the largest repository would be DYNAMIC, so as to not have to move around a monster that's not even full.

My EE article explains more: Some Hyper-V Hardware & Software Best Practices.
0
 
cmp119, IT Manager (Author), commented:
Okay, while reviewing the section "Storage and VHDX Files", I do not see anything pertaining to fragmentation of large dynamic VHDX files.  You did address migration and recovery scenarios, but I am more concerned about disk performance issues as time progresses.

My understanding is that dynamic VHDX files are prone to fragmentation.  So I am puzzled why you're suggesting not defragmenting at all, especially on a large disk that will be around 600GB once all the data is transferred/restored.

For instance, as a test I copied via Windows Explorer a folder that contained many subfolders with around 1.2 million files (300GB).  I just wanted to see if it would error out with a simple copy between servers.  I have now deleted all the copied data.  Within a week or so I plan on conducting a restore on a cutoff date (Friday evening) to allow all the data to properly restore.  So I would think the initial test copy and removal, along with an actual restore of live data, will introduce some sort of fragmentation.  As time progresses and folders and files are added daily, more fragmentation will ensue.  So I am having a hard time understanding why defragmentation is not necessary.  I just want the best performance reading the files once written, and I believe defragmentation helps maintain this sort of performance over time.  So this is why I am asking for clarification.  Thanks.
0
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
The simple way to avoid fragmentation is to do the following:

Set up all VMs' operating system VHDX files as fixed. In our case, we have a PowerShell script that creates the entire setup automagically for us. We do this to avoid fragmentation for the operating systems, as the process creates one contiguous file for each VM OS VHDX.

Once that process is complete, we create either fixed or dynamic VHDX files for the data repositories that each VM may have. If the needed space is less than or equal to 300GB, we create a fixed VHDX file for it. If greater than 300GB, we create a dynamic VHDX file, but first we create the other needed fixed VHDX files (done via the above script).

We are usually left with one or two very large (TBs) VHDX files that grow as needed. Since they are rather static, fragmentation is not a big issue. If there is a concern, then bloat the dynamic VHDX file by copying and re-copying a bunch of large files such as ISOs, then delete them all. This will expand the dynamic VHDX file in a contiguous manner on the host.
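The setup described above could be sketched roughly like this (the VM name, paths, and sizes are made-up examples, not Philip's actual script; New-VHD and Add-VMHardDiskDrive are the standard Hyper-V PowerShell cmdlets):

```powershell
# Assumed VM name and path - adjust for your environment.
$vmName = 'FILESRV01'
$vhdDir = 'D:\Hyper-V\Virtual Hard Disks'

# Fixed OS disk: allocated up front as one contiguous 75GB file on the host.
New-VHD -Path "$vhdDir\$vmName-OS.vhdx" -SizeBytes 75GB -Fixed

# Large data repository: dynamic, created AFTER the fixed disks so the
# fixed files land contiguously on the host volume first.
New-VHD -Path "$vhdDir\$vmName-Data.vhdx" -SizeBytes 2TB -Dynamic

# Attach both disks to the VM.
Add-VMHardDiskDrive -VMName $vmName -Path "$vhdDir\$vmName-OS.vhdx"
Add-VMHardDiskDrive -VMName $vmName -Path "$vhdDir\$vmName-Data.vhdx"
```

The ordering is the point: fixed files created first are contiguous, and the one dynamic file that grows afterward is the only thing that can fragment.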
1
 
cmp119, IT Manager (Author), commented:
So I guess the existing attached dynamic VHDX file, to which I copied 300GB of many small files, needs to be removed and a new one created.  I actually deleted all 300GB of test files and formatted the disk.  So maybe I can leave it intact and then copy large (30GB) test files until the VHDX file expands to 600GB, and then delete them.

I just find it odd that using Windows disk defrag is not mentioned at all.  So either it simply does not work well in a virtual environment, or it possibly causes some sort of damage or negative impact.  Maybe you can provide some sort of clarification.

My initial thought was to copy all the data (600GB) to the dynamic VHDX file and then manually run Windows disk defrag.  We are dealing with 1.2 million files on the initial transfer, so I would think the resulting copy would introduce some sort of fragmentation.

So let's say I wind up leaving the attached dynamic VHDX file as is, in that it was formatted.  Please note that on the host file system it still shows a 300GB size for the VHDX file even though it's now empty.  If I then copy all the live data (600GB) during the cutoff date, do you still suggest not running a manual or scheduled disk defrag?  I also need to mention that each year moving forward we may add 50 to 100GB of more small files, and as suggested there is no need to defrag the disk within the VM.  Running disk defrag is standard practice on a physical server, and I just want to know why it's not recommended at all on a virtual server.

John Hurst's comment above suggests Windows will handle it without user intervention, and that Windows will somehow inform me a defrag is needed.  I am not sure how I will be informed; I guess an Event Viewer entry within the VM itself or on the Hyper-V host.  Not sure how exactly I will be informed, if at all.
0
 
John, Business Consultant (Owner), commented:
"Running disk defrag is standard practice on a physical server."

We do not defrag RAID drives (real or virtual machines).  I do not see a need.

"John Hurst's above comment suggests .... Windows will somehow inform me a defrag is needed."

In a non-RAID environment, VMware will ask to defrag a disk if needed. But I do not see this in a RAID environment or an SSD environment.
0
 
cmp119, IT Manager (Author), commented:
I might be overreacting about potential disk performance issues due to fragmentation, both with the initial data transfer and moving forward over time.

After receiving your responses, I decided to do more research on the matter and found the following article:

https://www.altaro.com/hyper-v/disk-fragmentation-not-hyper-vs-enemy/

I am planning on installing this new VM next week, and I foresee it being in production for at least five years.  Since I am dealing with a growing dynamic disk with an initial size of approximately 600GB, I imagine it growing to around 2TB with around 4 to 5 million files over the projected life of the server.  I am not sure if there will ever be a time when it's warranted to "Compact" the VHDX file.  After reading the above article, that might not even be necessary, especially since it was not discussed/mentioned.
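If compacting ever does become warranted, Hyper-V exposes it via the Optimize-VHD cmdlet. A rough sketch (the VM name and path are examples; a dynamic VHDX can only be compacted while offline, i.e. with the VM shut down or the disk detached):

```powershell
# Shut the VM down first - the VHDX must not be in use.
Stop-VM -Name 'FILESRV01'

# Reclaim unused space inside the dynamic VHDX (path is illustrative).
Optimize-VHD -Path 'D:\Hyper-V\Virtual Hard Disks\FILESRV01-Data.vhdx' -Mode Full

Start-VM -Name 'FILESRV01'
```

Note that compacting shrinks the VHDX file on the host; it does not defragment the file system inside the guest.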
0
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
Fragmentation was something we were concerned with when hard drives were spinning at less than 10K RPM and had seek times closer to seconds than milliseconds.

It is a concern, to some degree, on SAN applications where there are hundreds or thousands of disks with as many or more workloads. The SAN vendors take care of that in the background.

We have virtualization solutions, standalone and clustered, that have been around for a long time and have not seen any real degradation in performance. The caveat is that we set things up from the get-go with fixed VHDX files and then one or two dynamic ones, as already mentioned.
0
 
cmp119, IT Manager (Author), commented:
Thank you both for your suggestions/recommendations.
0
 
John, Business Consultant (Owner), commented:
You are very welcome, and I was happy to help.
0
Question has a verified solution.
