Steven O'Neill asked:

Hyper-V - Best Setup Scenario

I am about to embark on restructuring the network I look after which has a couple of physical servers (a PE2800 and a PE2900). The PE2800 is running SBS 2003 Premium and the PE2900 has Windows Server 2003 Enterprise installed.

The PE2900 also has Virtual Server 2005 installed with 5 VMs (4 servers and 1 XP workstation). It has 18GB of RAM and a decent amount of storage.

That's the current scenario. What I'm looking to do is the following:

2 new PE R710 servers with roughly 18GB of RAM and 2x Quad Core Xeon processors (in each server). These machines will be used to run the following:

SBS 2011;
Microsoft Dynamics CRM v4 Professional Server (2 organisations - 1 is a test organisation which gets little to no use; the other has 50+ users);
Windows Server 2003 Enterprise (x4) - the previous machines from the VS2005 install mentioned earlier; these servers are only accessed by 2-3 members of staff at any one time and house back office software that is a mix of MS Dynamics GP10 and non-MS software - each server housing its own system independently;
A couple of VM workstations - probably running Windows 7 Pro, but they will only be used sporadically by staff needing access to a machine from a remote location;

Each of the systems above also has MS SQL Server 2005 running on it (except the SBS machine and the workstations); the CRM server currently has MS SQL Server 2005 with about 5 other instances on it as well. The SBS box will be running Exchange, but we are looking to move to Office 365 in the next 6 months.

I'm hoping this is enough info to go on but if not please let me know.

So I'm looking to determine the best way to set up the network (I know there's no definitive answer here), but should I be setting up the 2 R710 machines as Hyper-V workgroup machines and housing all the other systems as guests (I'm looking to virtualize everything here)? Can they be set up as a cluster in a workgroup - should I even consider this for failover? Am I best having them sitting on a domain (even though the DC would be a guest on them)?

All suggestions welcome here, and material to read appreciated, as I've not set up Hyper-V before (although it looks simple enough).

Thanx as always guys
Steven O'Neill (ASKER):

Forgot to mention there is also a DroboPro device on the network that has about 4TB of storage right now, but this will be increased to 12TB shortly. This device is used primarily for storing data from the backup software on the network (Acronis Backup & Recovery 10), but I was wondering if it could also be used to store the VHD files themselves, or if that would be a bad idea?

I'm also curious what partitions/RAID setup we should be using for the Hyper-V machines?
kevinhsieh:
The direction you go depends on your budget. I see two general options for you: install everything on a single R710 and use internal drives, or buy two R710s plus external SAN storage and set up Hyper-V clustering with an additional physical domain controller.

The simplest and cheapest is a single server solution.  An R710 with 6x8 GB RAM, dual processors, and 8 SAS drives in RAID 10 will give really good performance.  

Going more complicated would be a cluster solution. With a cluster, the cluster nodes need to be part of a domain, and while I hear that it may be possible to virtualize all of the domain controllers as long as one is outside of the cluster, I wouldn't do it. I recommend having a DC running on a 3rd box. It can be an existing server or a different low-end box. The DC needs to be available when booting the cluster. If you are going to cluster, the R710s should have the same processor and RAM spec as before, but only a single drive or a small RAID 1 for the OS. Speaking of processors, go low end like the Xeon 5620. You won't be pushing them. You can probably do well just with a single quad core.
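If you do go the cluster route, the overall shape of it on Hyper-V Server 2008 R2 is roughly as follows. This is a minimal sketch only: it assumes both nodes are already domain joined, PowerShell is enabled on them (it's an optional feature on Hyper-V Server 2008 R2), and the shared storage discussed below is already presented to both. HV1, HV2, the cluster name and the IP address are placeholders for your own.

# On each node, add the failover clustering bits (2008 R2 Server Core / Hyper-V Server package name):
dism /online /enable-feature /featurename:FailoverCluster-Core

# Then from one node:
Import-Module FailoverClusters

# Validate both nodes before building anything and fix whatever it flags:
Test-Cluster -Node HV1, HV2

# Create the cluster with its own name and static IP:
New-Cluster -Name HVCLUSTER -Node HV1, HV2 -StaticAddress 192.168.1.40

# In 2008 R2 you also have to enable Cluster Shared Volumes on the cluster first,
# then add the shared disk so every node can run VHDs from C:\ClusterStorage:
Add-ClusterSharedVolume -Name "Cluster Disk 1"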
When clustering you need shared storage. I don't know if a DroboPro can deliver the IOPS you will need. At any rate, I wouldn't use the same device for primary storage and backup, because that places all your data at risk. There are dozens of storage options available. You can use a Windows server, load StarWind or Microsoft's iSCSI target software, and make the server a SAN; you would then be depending on that single Windows server for all operations (perhaps better off using a single R710?). StarWind also has a high availability mode that clusters two Windows servers together for a solution that will keep on running even if a storage node goes down. There are many hardware solutions available including Drobo, QNAP, Dell MD3200i, Celeros, HP, EMC, NetApp, etc. Some platforms have redundant controllers and others don't. You can use iSCSI or external SAS for a 2-node cluster.
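Whichever iSCSI box you end up with, the host side is the same built-in Microsoft iSCSI initiator. A rough sketch of wiring a Hyper-V Server host to a target from PowerShell; the portal IP and the IQN are placeholders for whatever your SAN actually presents:

# Make sure the initiator service runs and starts at boot:
Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi

# Point the initiator at the SAN's portal, see what it offers, then log in:
iscsicli QAddTargetPortal 192.168.10.50
iscsicli ListTargets
iscsicli QLoginTarget "iqn.2011-05.local.example:vm-storage"

# The LUN then shows up as a new disk to bring online and format; you would also
# want a persistent login so the session comes back after a reboot.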
Hi Kevin

Thanx for the feedback. So what you're basically saying is that our budget (about £7K) will determine the direction to go.

I had toyed with the idea of a single R710 but was conscious this was again "putting all the eggs in one basket". The spec of hardware I had been looking at was:

PowerEdge R710 Rack Chassis, Up to 8x 2.5" HDDs, Intel 5500/5600 Series Support
x2 Intel Xeon E5630, 4C, 2.53GHz, 12M Cache, 5.86GT/s, 80W TDP, Turbo, HT, DDR3-1066MHz
48GB Memory for 2 CPUs, DDR3, 1333MHz (6x8GB Dual Ranked RDIMMs)
5Yr ProSupport and Next Business Day On-Site Service
Riser with 2 PCIe x8 + 2 PCIe x4 Slots
PERC H700, Integrated RAID Controller, 512MB Cache, For x8 backplane
RAID Connectivity C5 - RAID 10 for PERC H700 Controller, 4-6 or 4-8 HDDs based on chassis
High Output Power Supply, Redundant (2 PSU), 870W, Performance BIOS Setting
Primary Hard Drive (8) 146GB, SAS 6Gbps, 2.5-in, 15K RPM Hard Drive (Hot Plug)
Broadcom® NetXtreme II 5709 Dual Port 1GbE NIC with TOE and iSCSI Offload, PCIe-4
Embedded Gigabit Ethernet NIC with 4P TOE
iDRAC6 Express Server Management Card
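Worth noting on that spec: 8 x 146GB drives in RAID 10 only give roughly 4 x 146GB = ~584GB usable (half the raw capacity), less whatever gets carved off for the host OS, so all the guest VHDs would need to fit within that.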

Now, I wasn't sure of the config for the network cards above (the need for iSCSI in this instance). I take your point about the DroboPro and having both backups and primary storage here.

Do you think the above spec is enough to go on for the machines I've spoken about for the moment? The budget is tight as it's for a charity, but I'm conscious of the amount of storage that's there. I don't think they could stretch to another server at this moment to host Microsoft Storage Server (if that's what you were meaning) - the software wouldn't be a problem though (there's a good relationship with Microsoft here and they can normally get software for a reduced cost or free, although I know MSS is OEM so that may cause an issue here).

Essentially if they went for the server above it would blow the budget (the server comes in at about £8K, but I would negotiate with Dell again on this).

What I'm wondering though is, if they went for this spec, would they run into problems later down the line in terms of performance degradation, with the one server performing every function?

Thanx again for reviewing this.
Josiah Rocke:

I can't say much about the clustering, but I will say you should probably get more than 18GB of RAM if you go the 2-server route. In terms strictly of performance, I think the 48GB of RAM on 1 server might trump 2 servers each with 18GB, especially running Exchange and SQL. All that said, with that many VMs, I would certainly be looking for a failover option too.
ASKER CERTIFIED SOLUTION (kevinhsieh)

Kevin

Apologies for the delay in coming back to this, but I fell off the grid for a while there... back reviewing this now.

Yes, the price was list-based and I hadn't contacted Dell or another reseller just yet about this. I'm a bit confused by the comment you made about the processors though. In my config I said I was going to go for the Xeon 5630 and you said go for the Xeon 5620 (is that correct? ...as you say the Xeon 5630 later as your recommendation as it has the 12M cache?)

I've never used the ProSupport either so will probably stick with the standard hardware support 4 hour response.

Not sure if we can go for the iDRAC Enterprise but I'm based only 20 minutes away from this client so any issues and I can be on-site pretty soon.

It's good to know I'm on the right track with this. So in your situation you are running a lot more than this (which I know is fairly small), but they are very uptight about a single point of failure, etc. I'm trying to reassure them that they have managed for years as is, but this is moving 2 physical servers onto the one machine and having all the eggs in one basket. So long as we ensure the DR solution is in place, data is being copied on-site and replicated off-site, and there are redundancies in the hardware itself, I can't see what else we can do on such a tight budget.

And having a RAID 10 across all 8 disks is the way to go here? I should simply install Hyper-V itself (not as part of Windows Server 2008) and then have everything hanging off the same RAID? That's the only issue I think I still have here.

Thanx again for all your help.
I must have made a typo earlier. IMHO, the extra 133 MHz going from a Xeon 5620 to a 5630 isn't worth ~$153 USD, especially when your applications won't push an X5620 one bit. The X5620 is the lowest current processor with 12M cache. Otherwise I would suggest a processor with even fewer MHz. Case in point: the difference between two X5630s and two X5620s will pay for the iDRAC Enterprise. :-)

I agree that you need to put money and effort towards good offsite backups and recovery before you worry about clustering. When you cluster, you still have a single point of failure: the storage. If you have good backups you could restore some VMs on the 2900 running Hyper-V. I assume that the 2900 can run Hyper-V.
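One quick way to check that before you rely on the 2900 as your recovery box: run Sysinternals Coreinfo on it. A sketch only; it assumes you have copied coreinfo.exe to somewhere like C:\Tools, and you would still need Intel VT and execute disable (DEP) switched on in the BIOS.

# -v lists only the virtualization-related CPU features;
# you want VMX (Intel VT) and NX/DEP reported as present:
C:\Tools\coreinfo.exe -v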
Hi Kevin

Good points made there and something to think about. The PE2900 is on the Hyper-V list from MS so I'm assuming it should be fine to run this (it was in fact how we were planning on performing a recovery in an off-site location should something catastrophic happen to the primary site...the PE2900 will be kept in storage off-site).

Could you have a think about the last question I had though, as this is the bit I'm still confusing myself with:

And having a RAID 10 across all 8 disks is the way to go here? I should simply install Hyper-V Server itself (not as part of Windows Server 2008) and then have everything hanging off the same RAID? That's the only issue I think I still have here.

Thanx once again
The best performance would be to create a single logical disk using all 8 drives in RAID 10. Create a C partition of maybe 40-60 GB for Hyper-V Server 2008 R2 SP1 and then put all of the VMs on a D partition using the rest of the space.
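If you are doing that from the Hyper-V Server command line rather than from a remote Disk Management console, something along these lines would carve out the D partition after install. A sketch only; it assumes PowerShell is enabled on the host, disk 0 is the RAID 10 virtual disk, C: already occupies its 40-60 GB, and the rest of the disk is unallocated.

# Build a diskpart script for the data partition and run it:
@"
select disk 0
create partition primary
format fs=ntfs quick label=VMSTORE
assign letter=D
"@ | Out-File "$env:TEMP\vmstore.txt" -Encoding ASCII

diskpart /s "$env:TEMP\vmstore.txt"

# Folders to point Hyper-V's default VM and VHD paths at:
New-Item -ItemType Directory -Path D:\Hyper-V, D:\VHDs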

5nine Manager for Hyper-V (Free) looks like a very nice tool for managing Hyper-V Server locally. I have never used it because I manage from Hyper-V Manager on other machines, but it should be handy for doing management from the server itself, since Hyper-V Server doesn't have a GUI.
http://www.5nine.com/5nine-manager-for-hyper-v-free.aspx
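Since the host itself has no GUI, most day-to-day management ends up being remote anyway. Roughly what that takes for workgroup hosts; HV1 and the account name are placeholders, and John Howard's HVRemote script is the usual shortcut for the extra DCOM/authorization permissions this sketch skips:

# On the Hyper-V Server console, sconfig's "Configure Remote Management" option
# opens up the firewall for MMC/WMI management from another machine.

# On the Windows 7 admin PC with the RSAT Hyper-V Manager installed, cache
# credentials for the workgroup host so the console connects as that account:
cmdkey /add:HV1 /user:HV1\Administrator /pass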