2-node file server failover cluster with local storage on Server 2012 R2

Hello,
  I have successfully set up a Hyper-V failover cluster in the past, using one iSCSI device as the storage target. I have started a new project with the exact same setup, except that this time I have two storage devices to use. I want to create a file server failover cluster for the storage, and then set up the Hyper-V failover cluster to use that storage over SMB3.
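  For context, the end state I am aiming for is the usual Scale-Out File Server pattern; a rough sketch, assuming the storage cluster validates (the role name, share name, path, and account names are placeholders of my own):

    # Sketch of the intended end state: a Scale-Out File Server role on the
    # storage cluster, plus an SMB3 share the Hyper-V hosts can use.
    Add-ClusterScaleOutFileServerRole -Name 'SOFS1'
    New-SmbShare -Name 'VMStore' -Path 'C:\ClusterStorage\Volume1\VMStore' `
        -FullAccess 'DOMAIN\VMHOST1$', 'DOMAIN\VMHOST2$'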

  I have 2 physical VM host machines that I will make into my Hyper-V failover cluster. I also have 2 storage servers. All are running 2012 R2. Each storage server has 12 x 4 TB disks for use as storage, separate from the OS.

  Initially, when I attempted to validate the failover cluster, it failed on storage because it found no suitable disks. I discovered that this is because my disks are attached through a RAID controller. I followed the Microsoft KB article that allows RAID-attached drives, found here: http://support.microsoft.com/kb/2839292
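  For anyone following along, my understanding is that the KB boils down to a single registry value; a sketch of what I applied (I am quoting the value name from memory, so verify the exact key and name against the article):

    # Assumed from KB 2839292: let disks with a RAID bus type be considered
    # for clustering. Apply on every node, then reboot before revalidating.
    $key = 'HKLM:\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters'
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name 'AllowBusTypeRAID' -PropertyType DWord `
        -Value 1 -Force | Out-Null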

  After this registry entry was made, I was able to see the individual drives. In Computer Management they are all online and initialized, but they are not partitioned in any way. I attempted to validate the cluster again and it failed, but instead of saying there were no suitable disks, it tells me that the disks are only visible to one node. That makes sense, since the 12 disks are local to the machine they sit in. So I wasn't sure whether I have to share them, or do something else to make them available to both nodes.

  The next step was to try to create a storage pool, so I went into File and Storage Services. I can see both nodes, and the 12 drives listed under each node. I selected all 12 drives on one node and created Pool1, and all 12 drives on the second node and created Pool2. I reran the validation and it again tells me it is failing due to storage. Here are the results of the scan. In short, I have 12 local drives on each file server, and I want to create a two-node file server failover cluster from them. What am I missing?
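  For reference, this is roughly the validation run I keep repeating (Test-Cluster is the standard FailoverClusters cmdlet; the node and report names are mine):

    # Storage-only validation against both storage nodes; the same test
    # the Validate a Configuration wizard performs.
    Test-Cluster -Node STORAGE1.domain.net, STORAGE2.domain.net `
        -Include 'Storage' -ReportName 'C:\Reports\StorageValidation'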

List Disks
Description: List all disks visible to one or more nodes. If a subset of disks is specified for validation, list only disks in the subset.
Start: 8/11/2014 3:13:38 PM.
Prepare storage for testing.


Begin to build matrix: 8/11/2014 3:13:39 PM.
Begin to build candidate disk list: 8/11/2014 3:13:39 PM.
Dump disk collection to report: 8/11/2014 3:13:39 PM.


STORAGE1.domain.net
Row Disk Number Disk Signature VPD Page 83h Identifier VPD Serial Number Model Bus Type Stack Type SCSI Address Adapter Eligible for Validation Disk Characteristics
0 0 674af9b9 06B77A0301000000001517FFFF0AEB84 OS Intel Raid 1 Volume RAID Stor Port 0:6:1:0 Intel(R) C600+/C220+ series chipset SATA RAID Controller False Disk is a boot volume. Disk is a system volume. Disk is used for paging files. Disk is used for memory dump files. Disk is on the system bus. Disk partition style is MBR. Disk type is BASIC.  
1 1 {37c7df2d-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B5520BE5F9B9A 009a9b5fbe20557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:0:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
2 2 {37c7df32-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B5539BFD22C63 00632cd2bf39557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:1:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
3 3 {37c7df39-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B554AC0D49926 002699d4c04a557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:2:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
4 4 {37c7df40-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B555FC2143083 00833014c25f557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:3:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
5 5 {37c7df46-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B5572C33754D5 00d55437c372557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:4:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
6 6 {37c7df4b-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B55A6C64D6EF6 00f66e4dc6a6557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:5:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
7 7 {37c7df50-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B55B0C6E84D19 00194de8c6b0557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:6:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
8 8 {37c7df55-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B55C1C7EE05EA 00ea05eec7c1557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:7:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
9 9 {37c7df5a-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B55CDC8A792F9 00f992a7c8cd557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:8:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
10 10 {37c7df5f-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B55DCC98F57DB 00db578fc9dc557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:9:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
11 11 {37c7df64-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B55E7CA3B6690 0090663bcae7557b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:10:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  
12 12 {37c7df69-215b-11e4-80c8-000854704f52} 600605B008F0A3D01B7B5606CC15085E 005e0815cc06567b1bd0a3f008b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 2:1:11:0 LSI MegaRAID SAS Adapter True Disk partition style is GPT. Disk type is BASIC.  




STORAGE2.domain.net
Row Disk Number Disk Signature VPD Page 83h Identifier VPD Serial Number Model Bus Type Stack Type SCSI Address Adapter Eligible for Validation Disk Characteristics
0 0 {dcda8438-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B92D60ED987B9 00b987d90ed6927b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:0:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
1 1 {dcda843e-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B92E30F9FA4D4 00d4a49f0fe3927b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:1:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
2 2 {dcda8443-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B92F510B7AD8D 008dadb710f5927b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:2:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
3 3 {dcda8449-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B92FF114AA738 0038a74a11ff927b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:3:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
4 4 {dcda844e-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B930911ECC4FA 00fac4ec1109937b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:4:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
5 5 {dcda8453-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B93121276F403 0003f4761212937b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:5:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
6 6 {dcda8458-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B931C1303D375 0075d303131c937b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:6:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
7 7 {dcda845d-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B932513941D5D 005d1d941325937b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:7:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
8 8 {dcda8462-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B933514899A61 00619a891435937b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:8:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
9 9 {dcda8467-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B933F1521F4B4 00b4f421153f937b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:9:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
10 10 {dcda846c-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B934815AD9AA2 00a29aad1548937b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:10:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
11 11 {dcda8471-1f38-11e4-80c3-002590ebc972} 600605B008EF96A01B7B93621736EC35 0035ec361762937b1ba096ef08b00506 LSI MR9271-8i SCSI Disk Device RAID Stor Port 0:1:11:0 LSI MegaRAID SAS Adapter False Disk bus type does not support clustering. Disk partition style is GPT. Disk type is BASIC.  
12 12 69540066 94210C9501000000001517FFFF0AEB84 OS Intel Raid 1 Volume RAID Stor Port 1:6:1:0 Intel(R) C600+/C220+ series chipset SATA RAID Controller False Disk is a boot volume. Disk is a system volume. Disk is used for paging files. Disk is used for memory dump files. Disk bus type does not support clustering. Disk is on the system bus. Disk partition style is MBR. Disk type is BASIC.  



Stop: 8/11/2014 3:13:39 PM.


--------------------------------------------------------------------------------


List Disks To Be Validated
Description: List disks that will be validated for cluster compatibility.
Start: 8/11/2014 3:13:39 PM.
Physical disk {37c7df69-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df64-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df5f-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df5a-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df55-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df50-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df4b-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df46-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df40-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df39-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df32-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
Physical disk {37c7df2d-215b-11e4-80c8-000854704f52} is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: STORAGE1.domain.net
No disks were found on which to perform cluster validation tests. To correct this, review the following possible causes:
* The disks are already clustered and currently Online in the cluster. When testing a working cluster, ensure that the disks that you want to test are Offline in the cluster.
* The disks are unsuitable for clustering. Boot volumes, system volumes, disks used for paging or dump files, etc., are examples of disks unsuitable for clustering.
* Review the "List Disks" test. Ensure that the disks you want to test are unmasked, that is, your masking or zoning does not prevent access to the disks. If the disks seem to be unmasked or zoned correctly but could not be tested, try restarting the servers before running the validation tests again.
* The cluster does not use shared storage. A cluster must use a hardware solution based either on shared storage or on replication between nodes. If your solution is based on replication between nodes, you do not need to rerun Storage tests. Instead, work with the provider of your replication solution to ensure that replicated copies of the cluster configuration database can be maintained across the nodes.
* The disks are Online in the cluster and are in maintenance mode.
No disks were found on which to perform cluster validation tests.
Stop: 8/11/2014 3:13:39 PM.
compcreate asked:

Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
A failover cluster is normally built on shared storage, or on DAS with a shared SAS controller, not on local disks.

You could create a large iSCSI target (a VHD) on these local disks, and that target could then be presented as an iSCSI disk to both servers (remote and local).
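Roughly, with the built-in iSCSI Target Server role that ships with 2012 R2, that looks like the sketch below; the path, target name, size, and initiator IQNs are placeholders for your environment:

    # Publish a VHDX on the local RAID volume as an iSCSI LUN for both hosts.
    Install-WindowsFeature FS-iSCSITarget-Server
    $initiators = 'IQN:iqn.1991-05.com.microsoft:vmhost1.domain.net',
                  'IQN:iqn.1991-05.com.microsoft:vmhost2.domain.net'
    New-IscsiServerTarget -TargetName 'ClusterLun1' -InitiatorIds $initiators
    New-IscsiVirtualDisk -Path 'D:\iSCSI\ClusterLun1.vhdx' -SizeBytes 10TB
    Add-IscsiVirtualDiskTargetMapping -TargetName 'ClusterLun1' `
        -Path 'D:\iSCSI\ClusterLun1.vhdx'
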
compcreate (author) commented:
Can you elaborate on the shared SAS controller? When I said I have 12 local disks, I meant 12 SAS drives local to that machine, connected through an LSI MegaRAID SAS 9271-8i controller. Can this be configured as you are suggesting?

Alternately, would it be stupid to just create iSCSI targets from STORAGE1 and STORAGE2 to VMHOST1 and VMHOST2, and then run the file server cluster from the VM hosts?
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Unfortunately, you need a special external SAS shelf that is connected to BOTH nodes.

"Alternately, would it be stupid to just create iSCSI targets from STORAGE1 and STORAGE2 to VMHOST1 and VMHOST2, and then run the file server cluster from the VM hosts?"

Yes, it does create a single point of failure, but this is the only option you have.


andyalder commented:
Two storage servers full of disks: it sounds like you need an HA iSCSI target for them, http://www.starwindsoftware.com/starwind-virtual-san for example; there are others, such as HP LeftHand. Microsoft doesn't offer highly available iSCSI target software. Note that if it's Windows Storage Server, you can still legally use third-party iSCSI software on it, since that's a storage task; if it's normal Windows Server, you could virtualise the iSCSI target, run lots of VMs on that host, and have two hosts left over.

If all this is for is a file server, rather than shared storage for general virtualisation, why not just use DFS Replication? See the sketch below.
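For that simple case, a minimal DFS-R sketch (the group, folder, and content-path names are placeholders):

    # Two-way DFS Replication between the storage servers for plain file shares.
    New-DfsReplicationGroup -GroupName 'FileData'
    New-DfsReplicatedFolder -GroupName 'FileData' -FolderName 'Shares'
    Add-DfsrMember -GroupName 'FileData' -ComputerName 'STORAGE1','STORAGE2'
    Add-DfsrConnection -GroupName 'FileData' -SourceComputerName 'STORAGE1' `
        -DestinationComputerName 'STORAGE2'
    Set-DfsrMembership -GroupName 'FileData' -FolderName 'Shares' `
        -ComputerName 'STORAGE1' -ContentPath 'D:\Shares' -PrimaryMember $true -Force
    Set-DfsrMembership -GroupName 'FileData' -FolderName 'Shares' `
        -ComputerName 'STORAGE2' -ContentPath 'D:\Shares' -Force
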
compcreateAuthor Commented:
This is for HA VM storage, not just normal shared file storage.
andyalderCommented:
Use StarWind or LeftHand then; you certainly don't need a "special external SAS shelf".
compcreateAuthor Commented:
Basically the concept is the same: I can't do the failover with my present setup. I need an intermediary, either a SAS shelf as the other poster stated, or StarWind or LeftHand as you state, to sit between the servers and the storage.

At this point we are just doing iSCSI targets that are available from both storage servers.
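On the VM host side that's just the standard iSCSI initiator cmdlets, roughly (the portal addresses are placeholders):

    # Connect a VM host to the targets published by both storage servers.
    New-IscsiTargetPortal -TargetPortalAddress 'storage1.domain.net'
    New-IscsiTargetPortal -TargetPortalAddress 'storage2.domain.net'
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
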
andyalder commented:
Unsubscribe. May all your SPOFs belong to you.