Hyper-V - Best Setup Scenario

I am about to embark on restructuring the network I look after, which has a couple of physical servers (a PE2800 and a PE2900). The PE2800 is running SBS 2003 Premium and the PE2900 has Windows Server 2003 Enterprise installed.

The PE2900 also runs Virtual Server 2005 with 5 VMs installed (4 servers and 1 XP workstation). It has 18GB of RAM and a decent amount of storage.

That's the current scenario, what I'm looking to do is the following:

2 new PE R710 servers with roughly 18GB of RAM and 2x Quad Core Xeon processors (in each server). These machines will be used to run the following:

SBS 2011;
Microsoft Dynamics CRM v4 Professional Server (2 organisations - 1 is a test one which has little to no access to it; the other has 50+ users);
Windows Server 2003 Enterprise (x4) - the previous machines from the VS2005 install mentioned earlier; these servers are only accessed by 2-3 members of staff at any one time and house back office software that is a mix of MS Dynamics GP10 and non-MS software, each server housing its own system independently;
A couple of VM workstations - probably running Windows 7 Pro, but they will only be used sporadically by staff needing access to a machine from a remote location;

Each of the systems above also has MS SQL Server 2005 running on it (except the SBS machine and the workstations); the CRM server currently has MS SQL Server 2005 with about 5 other instances on it as well. The SBS box will be running Exchange, but we are looking to move to Office 365 in the next 6 months.

I'm hoping this is enough info to go on but if not please let me know.

So I'm looking to determine the best way to set up the network (I know there's no definitive answer here), but should I be setting up the 2 R710 machines as Hyper-V workgroup machines and house all the other systems as guests (I'm looking to virtualize everything here)? Can they be set up as a cluster in a workgroup - should I even consider this for failover? Am I best having them sitting on a domain (even though the DC would be a guest upon them)?

All suggestions welcome here and material to read appreciated, as I've not set up Hyper-V before (although it looks simple enough).

Thanx as always guys
Steven O'Neill, Solutions Architect, asked:
If the single server price is going to blow through your budget, I can assure you that clustering will more than double what you are already looking at. I hope that the pricing you are showing is list, because I configured a Dell R710 with 2x X5620 (2.4 GHz, 12M cache processors), 8x 146 GB 15K drives, PERC H700 w/512 MB NV Cache, 5 year NBD ProSupport, DRAC Enterprise, and redundant Energy Smart power supplies for $10,277 USD list, which would be about $7000 after standard discount, before converting to £.

I don't see any reason why you would need more than the 4 built-in NICs. You don't need TOE or iSCSI offload. With Hyper-V, the VMs share the physical NICs, so you probably only really need 2 NICs (1 for host management, 1 for VMs).

I suggest the Xeon 5620 over the Xeon 5630 because the extra 133 MHz won't make a difference in performance. The 12 MB cache is the reason to step up to the 5620 from a cheaper processor.

I haven't figured out the value of ProSupport, so I just get hardware support, which saves a little bit.

It let me configure with Energy Smart power supplies, which doesn't save much on the purchase but should save on electricity.

If you can swing the £ for iDRAC Enterprise, I say do it. The iDRAC Enterprise allows full remote console control of the server including KVM, remote media, and power control. It's great for when you're not on-site and you want to see what's on the screen, or even reboot the server (hopefully you won't have to do that, but it's nice to be able to see the server booting after a BIOS or service pack upgrade).

I don't know if you need rack rails, but you didn't specify them.

In terms of performance problems down the line, you should be in pretty good shape. I have found that there are generally two limiting factors when virtualizing systems: the amount of RAM in the host, and disk performance. We are starting out with 48 GB RAM, and you can easily go to at least 96 GB in the future while keeping all of your existing RAM, so I think you are in good shape from a RAM perspective. As for disk IO, 8x 15K SAS drives in RAID 10 is as good as you can get without a really big investment in storage. Your system should deliver more IOPS than what I have using 12 SATA drives, and I am running a lot more than you are, so your disks should be okay too.
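As a rough back-of-envelope comparison of those two arrays (the per-drive IOPS figures below are common rules of thumb, not vendor measurements, and the 70/30 read/write mix is an assumption):

```python
def raid10_iops(drives, iops_per_drive, read_fraction=0.7):
    """Rough effective host IOPS for a RAID 10 set at a given read/write mix.

    Reads are spread across all spindles; each logical write costs 2
    physical writes (one per mirror side), so writes count double.
    """
    raw = drives * iops_per_drive
    write_fraction = 1.0 - read_fraction
    # host_iops * (read_fraction + 2 * write_fraction) = raw spindle IOPS
    return raw / (read_fraction + 2 * write_fraction)

sas_15k = raid10_iops(drives=8, iops_per_drive=175)  # proposed 8x 15K SAS
sata = raid10_iops(drives=12, iops_per_drive=75)     # 12x SATA for comparison
print(f"8x 15K SAS RAID 10: ~{sas_15k:.0f} IOPS")
print(f"12x SATA RAID 10:   ~{sata:.0f} IOPS")
```

Even with conservative per-drive numbers, the 8-spindle 15K array comes out ahead of the 12-spindle SATA array.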

Microsoft makes iSCSI target software available for free to Windows Server 2008 R2 customers (you don't need Windows Storage Server).
Steven O'Neill, Solutions Architect (Author), commented:
Forgot to mention there is also a DroboPro device on the network that has about 4TB of storage right now, but this will be increased to 12TB shortly. This device is used primarily for storing data from the backup software on the network (Acronis Backup & Recovery 10), but I was wondering if this could also be used to store the VHD files themselves, or if this was a bad idea?

I'm also curious what partitions/RAID setup we should be using for the Hyper-V machines?
The direction you go depends on your budget. I see two general options for you: install everything on a single R710 and use internal drives, or buy two R710s plus external SAN storage and set up Hyper-V clustering with an additional physical domain controller.

The simplest and cheapest is a single server solution.  An R710 with 6x8 GB RAM, dual processors, and 8 SAS drives in RAID 10 will give really good performance.  

Going more complicated would be a cluster solution. With a cluster, the cluster nodes need to be part of a domain, and while I hear that it may be possible to virtualize all of the domain controllers as long as one is outside of the cluster, I wouldn't do it. I recommend having a DC running on a third box; it can be an existing server or a different low-end box. The DC needs to be available when booting the cluster. If you are going to cluster, the R710s should have the same processor and RAM spec as before, but only a single drive or a small RAID 1 for the OS. Speaking of processors, go low end like the Xeon 5620 - you won't be pushing them. You could probably do well with just a single quad core.
When clustering you need shared storage. I don't know if a DroboPro can deliver the IOPS you will need. At any rate, I wouldn't use the same device for primary storage and backup, because that places all your data at risk. There are dozens of storage options available. You can use a Windows server, load StarWind or Microsoft's iSCSI target software, and make the server a SAN, but you would then be depending on that single Windows server for all operations (perhaps better off using a single R710?). StarWind also has a high availability mode that clusters two Windows servers together for a solution that will keep on running even if a storage node goes down. There are many hardware solutions available including Drobo, QNAP, Dell MD3200i, Celeros, HP, EMC, NetApp, etc. Some platforms have redundant controllers and others don't. You can use iSCSI or external SAS for a 2-node cluster.
Steven O'Neill, Solutions Architect (Author), commented:
Hi Kevin

Thanx for the feedback. So what you're basically saying is that our budget (about £7K) will determine the direction to go.

I had toyed with the idea of a single R710 but was conscious this was again "putting all the eggs in one basket". The spec of hardware I had been looking at was:

PowerEdge R710 Rack Chassis, Up to 8x 2.5" HDDs, Intel 5500/5600 Series Support
x2 Intel Xeon E5630, 4C, 2.53GHz, 12M Cache, 5.86GT/s, 80W TDP, Turbo, HT, DDR3-1066MHz
48GB Memory for 2 CPUs, DDR3, 1333MHz (6x8GB Dual Ranked RDIMMs)
5Yr ProSupport and Next Business Day On-Site Service
Riser with 2 PCIe x8 + 2 PCIe x4 Slots
PERC H700, Integrated RAID Controller, 512MB Cache, For x8 backplane
RAID Connectivity C5 - RAID 10 for PERC H700 Controller, 4-6 or 4-8 HDDs based on chassis
High Output Power Supply, Redundant (2 PSU), 870W, Performance BIOS Setting
Primary Hard Drive (8) 146GB, SAS 6Gbps, 2.5-in, 15K RPM Hard Drive (Hot Plug)
Broadcom® NetXtreme II 5709 Dual Port 1GbE NIC with TOE and iSCSI Offload, PCIe-4
Embedded Gigabit Ethernet NIC with 4P TOE
iDRAC6 Express Server Management Card
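As a quick check on what that drive config actually yields (RAID 10 mirrors pairs of drives, so only half the raw capacity is usable, and formatted capacity will come in a little lower than the marketing size):

```python
def raid10_usable_gb(drives, drive_gb):
    """Usable capacity of a RAID 10 set: half the raw total, since
    every drive is mirrored by a partner."""
    assert drives % 2 == 0, "RAID 10 needs an even number of drives"
    return drives * drive_gb / 2

# The spec above: 8x 146 GB 15K SAS drives in RAID 10
print(raid10_usable_gb(8, 146))  # 584.0 GB usable
```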

Now, I wasn't sure of the config for the network cards above (the need for iSCSI offload in this instance). I take your point about the DroboPro and having both backups and primary storage on the same device.

Do you think the above spec is enough to go on for the machines I've spoken about for the moment? The budget is tight as it's for a charity, but I'm conscious about the amount of storage that's there. I don't think they could stretch to another server at this moment to host Microsoft Storage Server (if that's what you were meaning); the software wouldn't be a problem though (there's a good relationship with Microsoft here and they can normally get software at a reduced cost or free, although I know MSS is OEM so that may cause an issue here).

Essentially if they went for the server above it would blow the budget (server comes in at about £8K but would negotiate with Dell again on this).

What I'm wondering though is, if they went for this spec, would they run into problems later down the line in terms of performance degradation, having the one server perform every function?

Thanx again for reviewing this.
Josiah Rocke, Network & Communications, commented:
I can't say much about the clustering, but I will say you should probably get more than 18GB of RAM if you go the 2-server route. In terms strictly of performance, I think the 48GB of RAM on 1 server might trump 2 servers each with 18GB, especially running Exchange and SQL. All that said, with that many VMs, I would certainly be looking for a failover option too.
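A simple way to sanity-check the RAM side is to add up what each guest would need against the host. The per-VM allocations below are illustrative guesses for the workloads described in the question, not measured requirements:

```python
# Hypothetical RAM budget for the single-host layout.
# Per-VM figures are illustrative assumptions, not from the original spec.
HOST_RESERVE_GB = 2  # keep some RAM back for the Hyper-V parent partition

vms = {
    "SBS 2011": 12,            # Exchange + AD want plenty of RAM
    "CRM v4 + SQL 2005": 8,    # 50+ users plus several SQL instances
    "Server 2003 #1": 3,
    "Server 2003 #2": 3,
    "Server 2003 #3": 3,
    "Server 2003 #4": 3,
    "Win7 workstation": 2,
}

def fits(host_gb):
    """Return (fits, total_needed_gb) for a given amount of host RAM."""
    needed = sum(vms.values()) + HOST_RESERVE_GB
    return needed <= host_gb, needed

ok48, needed = fits(48)
ok18, _ = fits(18)
print(f"Total needed: {needed} GB -> fits in 48 GB: {ok48}, in 18 GB: {ok18}")
```

Even with generous allocations, 48 GB leaves comfortable headroom, while 18 GB per host would not cover this guest list on its own.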
Steven O'Neill, Solutions Architect (Author), commented:

Apologies for the delay in coming back to this but fell off the grid for a while there...back reviewing this now.

Yes, the price was list based and I hadn't contacted Dell or another reseller just yet about this. I'm a bit confused by the comment you made about the processors though. In my config I said I was going to go for the Xeon 5630 and you said go for the Xeon 5620 - is that correct, given that the 12M cache was your reason for stepping up?

I've never used the ProSupport either so will probably stick with the standard hardware support 4 hour response.

Not sure if we can go for the iDRAC Enterprise but I'm based only 20 minutes away from this client so any issues and I can be on-site pretty soon.

It's good to know I'm on the right track with this. So in your situation you are running a lot more than this (which I know is fairly small), but they are very uptight about single points of failure, etc., and I'm trying to reassure them: they have managed for years as is, but this is moving 2 physical servers onto the one machine and having all the eggs in one basket. So long as we ensure the DR solution is in place, data is being copied on-site and replicated off-site, and there are redundancies in the hardware itself, I can't see what else we can do on such a tight budget.

And is having a RAID 10 across all 8 disks the way to go here? Should I simply install Hyper-V Server itself (not as part of Windows Server 2008) and then have everything hanging off the same RAID? That's the only issue I think I still have here.

Thanx again for all your help.
I must have made a typo earlier. IMHO, the extra 133 MHz going from a Xeon X5620 to an X5630 isn't worth ~$153 USD, especially when your applications won't push an X5620 one bit. The X5620 is the lowest current processor with 12 MB cache; otherwise I would suggest a processor with even fewer MHz. Case in point: the price difference between two X5630s and two X5620s will pay for the iDRAC Enterprise. :-)

I agree that you need to put money and effort towards good off-site backups and recovery before you worry about clustering. When you cluster, you still have a single point of failure: the storage. If you have good backups you could restore some VMs on the 2900 running Hyper-V (I assume the 2900 can run Hyper-V).
Steven O'Neill, Solutions Architect (Author), commented:
Hi Kevin

Good points made there and something to think about. The PE2900 is on the Hyper-V list from MS so I'm assuming it should be fine to run this (it was in fact how we were planning on performing a recovery in an off-site location should something catastrophic happen to the primary site...the PE2900 will be kept in storage off-site).

Could you address the last question I had though, as this is what I'm still confusing myself with:

And is having a RAID 10 across all 8 disks the way to go here? Should I simply install Hyper-V Server itself (not as part of Windows Server 2008) and then have everything hanging off the same RAID? That's the only issue I think I still have here.

Thanx once again
The best performance would be to create a single logical disk using all 8 drives in RAID 10. Create a C partition of maybe 40-60 GB for Hyper-V Server 2008 R2 SP1 and then put all of the VMs on a D partition using the rest of the space.
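Put numerically (assuming the 8x 146 GB RAID 10 discussed earlier, so roughly 584 GB usable before formatting overhead, and taking the top of the suggested 40-60 GB range for C):

```python
# Illustrative partition layout for the single RAID 10 logical disk.
USABLE_GB = 8 * 146 // 2        # RAID 10 over 8x 146 GB drives -> 584 GB
C_PARTITION_GB = 60             # host OS: Hyper-V Server 2008 R2 SP1
D_PARTITION_GB = USABLE_GB - C_PARTITION_GB  # remainder holds the VHDs

print(f"C: {C_PARTITION_GB} GB (host), D: {D_PARTITION_GB} GB (VMs)")
```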

5nine Manager for Hyper-V (free) looks like a very nice tool for managing Hyper-V Server locally. I have never used it because I manage from Hyper-V Manager on other machines, but it looks useful for doing management from the server itself, which doesn't have a GUI in Hyper-V Server.