ROM
asked on
Best creation of VMs and base image in Hyper-V 2012
Hi All,
Just need some quick advice.
I am virtualising an entire estate this weekend, so will have CRM+SQL, Exchange, File, DC+DHCP+DNS+Print, DC+DHCP+DNS+Print, and RDS.
I am creating a base image on a 60 GB dynamic disk, which I am getting up to scratch tonight with updates, AV, etc.
Then I will sysprep, copy the VM, and start to build the infrastructure and then the services.
a) Does all that sound good so far?
b) In my VMs, can I just expand the 60 GB for each type of VM, as some (like Exchange and the CRM+SQL box) will need more, and run the guest OS and data on one VHDX? Or is there any serious performance hit if I don't create two VHDX files, one for the guest OS and one for the guest data/apps?
Would like some steer before tomorrow AM, please.
Many thanks everyone
R
ASKER
Great.. thanks Andrew.
So, to be clear: for each VM I will create one VHDX. This will have a single C: drive partition holding the guest OS as well as the guest apps and data.
So if I have 5 VMs, I will end up with only 5 VHDX files, enlarge each for its use, and also convert them from dynamic to fixed to get the best performance?
Thanks Andrew...
R
You probably won't see any performance difference between using 1 VHD and 2, but the solutions you have that use databases (SQL and Exchange) benefit greatly from having the database and transaction logs stored on different drive spindles. This allows point-in-time recovery in the event of a major disk failure on either drive: if the database disk fails, you can restore to the last transaction log by recovering the last full backup and then applying the logs generated since then; if the log disk fails, you still have all the data in the DB and can recover easily. So you will want to take that into account when building up your server.
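The recovery logic described above can be sketched conceptually. This is purely an illustration of the restore-then-replay idea, not SQL Server or Exchange code; all names below are made up:

```python
# Conceptual sketch: why keeping the database and its transaction logs on
# separate disks enables point-in-time recovery. A database is modelled as
# a dict; each log entry records one committed write since the last backup.

def recover(last_full_backup, log_entries):
    """Restore the last full backup, then replay every surviving log entry."""
    db = dict(last_full_backup)      # restore the full backup
    for key, value in log_entries:   # replay logs generated since the backup
        db[key] = value
    return db

# Scenario: the database disk fails, but the log disk survives.
backup = {"a": 1, "b": 2}            # last full backup (e.g. nightly)
logs = [("b", 3), ("c", 4)]          # transactions committed since then
assert recover(backup, logs) == {"a": 1, "b": 3, "c": 4}
```

If logs lived on the same failed spindle as the data, only the nightly backup would survive and everything since it would be lost.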
ASKER
Great point Adam, thanks.
I didn't point out that I will actually be running RAID 6 or RAID 10 on this host, and I am also setting up an identical host and using Hyper-V Replica.
I get your point completely; I worked heavily in the old IT world, where spindles and LUNs etc. all made sense when it came to performance.
So the transition to Hyper-V has been bumpy, and I always have the physical vs. virtual conversation with my techs and colleagues, lol. However, I'm now virtualising this particular entire shop, so I guess I bit the bullet :).
So with my budget restrictions, using RAID, backup to disk from the host, and a second server ready to go, I believe this will be as sound a setup as I can make it.
However... I still want separate disks for OS, apps, transaction logs, and data... lol... Beer money, champagne tastes!!
Thanks everyone
R
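As an aside, the capacity trade-off between the two candidate layouts mentioned above (RAID 6 vs. RAID 10) is easy to sanity-check. The drive count and size below are purely illustrative; the thread doesn't state either:

```python
# Usable-capacity check for the two candidate host layouts.
# RAID 6 spends two drives' worth of space on parity; RAID 10 mirrors
# every drive, so half the raw capacity is usable.

def raid6_usable(disks, size_tb):
    return (disks - 2) * size_tb   # survives any two drive failures

def raid10_usable(disks, size_tb):
    return disks // 2 * size_tb    # mirrored pairs; generally better write IOPS

print(raid6_usable(8, 1))   # 6 TB usable from 8x 1 TB drives
print(raid10_usable(8, 1))  # 4 TB usable from the same 8 drives
```

For write-heavy workloads such as SQL and Exchange, RAID 10 usually trades that capacity for noticeably better write performance.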
I would still use more than a single VHD, if it contains data, but don't put more than a single partition per disk.
ASKER
I'm getting stuck, guys, as I have a 2.8 TB partition and Windows will not let me use the rest. Even if I convert to GPT, Windows setup just doesn't seem to want to play.
Now hunting through the BIOS for UEFI settings...
Need some help, please.
many thanks in advance
R
ASKER
OK, set to UEFI... it was not set in the BIOS, but now I can't boot from my USB stick for the Windows install.
I don't want to create two arrays, as I want the IOPS from the system disk.
A little stuck :)
R
ASKER
Thanks for answering Andrew.
It's an H700 in a Dell R510.
I can create RAID arrays, but I cannot create logical volumes within the RAID controller.
The only way I can boot is with BIOS, not UEFI, as I am using USB sticks only.
When I start the Windows setup, I can create a 100 GB partition, and then the rest gets chopped into a 350 MB Windows partition, 1,950 GB unallocated, and 750 GB unallocated.
When I try to use that unallocated space, in setup or in Windows once built, the 750 GB is completely unusable. But if I switch on UEFI... then I will not be able to boot to my OS, will I?
R
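For what it's worth, the stranded ~750 GB is consistent with the classic MBR addressing ceiling, which is why converting the boot disk to GPT (and hence booting via UEFI) keeps coming up. A quick back-of-the-envelope check, pure arithmetic with no Hyper-V or Dell specifics:

```python
# An MBR partition table addresses sectors with 32-bit LBAs, so with
# traditional 512-byte sectors the largest addressable disk is 2 TiB.
SECTOR_BYTES = 512
MAX_SECTORS = 2 ** 32

mbr_limit_bytes = MAX_SECTORS * SECTOR_BYTES
print(mbr_limit_bytes)              # 2199023255552 bytes
print(mbr_limit_bytes / 10 ** 12)   # ~2.2 TB in decimal units, i.e. 2 TiB

# On a ~2.8 TB virtual disk, everything beyond that ~2.2 TB boundary is
# unreachable from an MBR disk, which roughly matches the unusable space
# described above. Only GPT can address past it.
```

A data-only disk can simply be initialised as GPT from Disk Management; it is only the boot/OS disk that additionally needs UEFI to boot from GPT.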
ASKER
Sorry Andrew... I think I am stuck in the fog and frustration.
You mean create 1 RAID array... then create two VDs.
Build on the first VD with boot set to BIOS, get my OS on... then convert the remaining space to GPT...?
Thanks
R
ASKER
Just checked, and when I create my RAID array I am creating the VD on that RAID controller... so I can't create two separate virtual disks/logical volumes without losing two disks (RAID 1).
R
ASKER
And this is because, when in UEFI, I cannot use my bootable USB stick, as it was created with the Windows 7 USB loader, and that format must not work, since the servers do not boot from it when in UEFI.
R
ASKER
Going to try and create a UEFI-compatible boot USB drive.
Again, not all storage controllers can create two or more VDs (logical disks).
ASKER
Hi Andrew,
I have managed to create two VDs in my one RAID 6 array.
I was being a divvy :P.
OK, pushing on... thanks.
R
ASKER
All up and running now.
Crumbs... sometimes you really can overthink things.
Many thanks for your help on this and my other posts, Andrew. Much appreciated.
R
ASKER
Many thanks guys.
For the initial steer, and to Andrew for your clarification when I got in a muddle.
Thanks
R
I realize this is after the fact.
On the RAID controller, set up two logical disks:
75 GB for the host OS
The balance (GB/TB) for data
This avoids the jam you ran into. We use the traditional BIOS setup for our servers at this time, not UEFI.
b) Yes, use separate disks instead of partitions; it's much easier to enlarge a disk with a single partition than to manage two partitions on one disk.