keystonetech (Canada) asked:
Can we set up a 2008 R2 Hyper-V CSV with Windows SAS disks?

We want to set up CSV failover and Live Migration for a client, but first we need to know whether (and how) we can set up failover with live Windows Server 2008 R2 Hyper-V Enterprise 64-bit hosts. The disks are RAID 5 and RAID 1. All of the storage for the failover is internal to two different servers, a PowerEdge 2950 (destination) and a PowerEdge T710 (source), as PERC 6i RAID arrays.
I've seen people using iSCSI and SANs, but we don't have one, nor do we want to spend the money on one. Is it possible using just these two tower servers?
I have quite a few documents on setting everything up, but I'm lacking information on the failover hardware/shared-storage requirements.
Erik Pitti (United States) replied:
In this scenario the disk must be a shared disk that both servers can access simultaneously. During failover, the shared disk is "transitioned" from one server to the other.

You may be able to do this using something like EMC RecoverPoint, though I can't find anything that suggests that's a "supported" scenario. You may be better off getting an MD3000 SAS disk enclosure (less expensive than the MD3000i iSCSI enclosure), which, if your PERC cards have external ports, will integrate seamlessly with your existing hardware.
keystonetech replied:
Evening Chakote. Yes, it must be a shared disk, but what are the requirements for that? A separate SAN? Something iSCSI? We wanted to expose a share from each of the Windows 2008 R2 Hyper-V servers to the other. Each server is a live Windows 2008 R2 Enterprise server; each has one RAID 5 disk array and one RAID 0 disk array (which could be changed if needed) on an internal PERC 6i controller. No SAN, no separate iSCSI setup. For the failover cluster, if I take a basic formatted SAS disk internal to the PowerEdge 2950 or the T710 and dedicate the entire disk to Failover Cluster Manager, it should work, right?
kevinhsieh replied:
Actually, the requirements for shared storage are shared SAS, iSCSI, or Fibre Channel. Shared parallel SCSI is no longer supported for clustering in Windows 2008, and network shares have never been supported. You can use the internal storage if you convert it to iSCSI via software on the host or in a VM. StarWind Software has a product that lets you take your internal storage and present it as an iSCSI SAN so you can use it for Hyper-V/VMware.
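To make the iSCSI approach concrete: once a software iSCSI target (StarWind or similar) is serving a LUN from one box, each cluster node connects to it with the built-in Windows iSCSI initiator. A minimal sketch using the stock `iscsicli` tool follows; the portal IP `192.168.1.10` and the target IQN are placeholder values you would replace with your own.

```shell
rem Point the initiator at the iSCSI target portal (placeholder IP).
iscsicli QAddTargetPortal 192.168.1.10

rem List the target IQNs the portal advertises.
iscsicli ListTargets

rem Log in to the discovered target (placeholder IQN from the list above).
iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:target1

rem The LUN now appears as a local disk; bring it online and format it
rem in Disk Management before adding it to the cluster as shared storage.
```

Both nodes must log in to the same target so the disk is visible to the whole cluster; only then can Failover Cluster Manager claim it as cluster (or CSV) storage.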
Member_2_231077 replied:

A Virtual SAN Appliance (VSA) is another shared storage solution that runs as VMs under Hyper-V or ESX.
A Virtual SAN Appliance like andyalder suggests is only a partial solution. Since the storage is still local to one server, that server is a single point of failure. Say you have two servers, A and B, with the RAID 1 and RAID 5 volumes on server A, the VSA on server A, and Hyper-V machines on servers A and B (clustered, with the VSA volumes as CSV). If server A fails, all virtual machines fail because the storage becomes unavailable.

The common reason to use a cluster is to provide redundancy and high availability. That's why most clusters (SQL, Exchange, file servers, Hyper-V VMs, ...) use shared storage that is independent of the hosts making up the cluster. That shared storage can be a SAN (using iSCSI or Fibre Channel) or direct-attached (using a SAS disk enclosure).
So the best option is either to find a SAS enclosure that can be attached to all cluster servers (such as the MD3000 chakote suggests) or to create some kind of iSCSI SAN. That can be virtual, using the VSA andyalder suggests (but preferably on a separate server), a dedicated iSCSI device, a server running a software iSCSI target, and so on.
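Whichever shared-storage route is chosen, the cluster itself is built the same way on Windows Server 2008 R2. A hedged sketch using the FailoverClusters PowerShell module follows; the node names, cluster name, and IP address are placeholder values for illustration.

```powershell
# Load the failover clustering cmdlets (2008 R2 and later).
Import-Module FailoverClusters

# Validate the configuration first; Microsoft only supports clusters
# that pass validation, and this is where unsuitable storage shows up.
Test-Cluster -Node "HVNODE1","HVNODE2"

# Create the cluster (placeholder name and static address).
New-Cluster -Name "HVCLUSTER" -Node "HVNODE1","HVNODE2" -StaticAddress 192.168.1.50

# Promote an available cluster disk to a Cluster Shared Volume for Hyper-V.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

If `Test-Cluster` flags the storage (for example, because each node only sees its own internal disks), the cluster will not support failover regardless of how the rest is configured, which is exactly the constraint discussed above.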
Errm, you put a VSA on both (or more) nodes of the cluster and they replicate their local storage to each other, so it is a full HA solution. The same goes for StarWind, mentioned in the previous comment.