ukitsme
asked on
SAN clustering
Hi Experts,
We are setting up a network for a new client.
The setup is as follows:
1 SAN
2 physical servers with Hyper-V 2012 installed on them
We want to set up clustering on them.
The SAN has 2 canisters. We want to make it redundant (if one canister fails, we should be able to operate with the other).
Here is my question:
When we set up the iSCSI initiator on the physical servers, should I enter the IP addresses of both canisters under Targets for failover?
The reason for my doubt is that I see 2 node canister names. Please check the attached screenshot.
We want both servers to keep working without a problem even when one of the canisters fails.
7-07-2013-8-31-44-AM.png
ASKER CERTIFIED SOLUTION
ASKER
We installed Hyper-V 2012 on both physical servers. We will be setting up clustering on both servers, with the SAN as the common storage.
We will install the virtual machines once the cluster is set up.
Hope that answers your question, Arnold.
ASKER
Hi Hanccocka,
Tried it, but it is not showing both nodes.
It is only showing one node.
Is it an IBM array?
iSCSI for IBM:
http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp?topic=%2Fcom.ibm.storage.svc.console.doc%2Fsvc_rulesiscsi_334gow.html
I saw this and just wanted to see if it would help also:
http://blogs.technet.com/b/josebda/archive/2009/02/02/step-by-step-using-the-microsoft-iscsi-software-target-with-hyper-v-standalone-full-vhd.aspx
http://blogs.technet.com/b/keithmayer/archive/2013/03/12/speaking-iscsi-with-windows-server-2012-and-hyper-v.aspx#.UdoK-KTD_cs
It does. The canister reference is unclear to me; SAN vendors at times use their own nomenclature, so "canister" may mean/reference a host or something else.
A SAN will have a LUN allocated for use by a host, or by multiple hosts in a cluster.
The cluster configuration of resources determines which node has sole access to the LUN.
Only one node (the active node) should be accessing the LUN on the SAN at any one time.
iSCSI is a low-level SCSI-command-over-Ethernet interaction. If both nodes connect to the same iSCSI resource, each caches its own copy of the filesystem table as it stood at that moment. When node 1 issues a write, it updates its cached filesystem table, so only it sees the changed or new file; the other node has no point of reference. Node 2 in turn issues a write for another file, similarly updates its own version of the filesystem table, and sees only the file it wrote. The problem hits when both write or update the same filename. It is also possible that one write overwrites the other, leading to data loss.
I believe all SAN vendors (FC and iSCSI) advise against having the same LUN mounted/accessed by multiple hosts at the same time.
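A rough sketch of the failure mode described above, as a toy Python simulation (not vendor code, and a deliberate simplification of how a real filesystem flushes metadata): each "node" caches its own copy of the filesystem table at mount time and flushes its whole cached table on write, so the second writer clobbers the first.

```python
# Hypothetical sketch: why two hosts mounting the same non-cluster-aware
# LUN corrupt each other's data. All names here are made up for the example.

class SharedLun:
    """The raw block device: it stores whatever table was last flushed."""
    def __init__(self):
        self.file_table = {}

class Node:
    def __init__(self, name, lun):
        self.name = name
        self.lun = lun
        # Snapshot of the filesystem table taken at mount time.
        self.cached_table = dict(lun.file_table)

    def write_file(self, filename, data):
        # The node updates only its *cached* table, then flushes the whole
        # cache back to disk -- clobbering anything other nodes wrote.
        self.cached_table[filename] = data
        self.lun.file_table = dict(self.cached_table)

lun = SharedLun()
node1 = Node("node1", lun)
node2 = Node("node2", lun)   # both hosts mount the LUN at the same time

node1.write_file("a.txt", "from node1")
node2.write_file("b.txt", "from node2")  # node2 never saw a.txt

print(sorted(lun.file_table))  # ['b.txt'] -- a.txt is silently lost
```

This is exactly why only the active cluster node should own the LUN at any one time: the cluster service serializes access so no node writes from a stale cached view.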
ASKER
We are using an IBM 3700. The SAN has 2 canisters for failover: if one fails, it keeps working on the other. But when you check the SAN, it shows 2 nodes.
Please check the screenshot.
Hanccocka is right. I added the IP address of canister 1 under iSCSI Initiator -> Targets, and I was able to see only one node under Discovery. To test it, I pulled out canister 1, but the servers were still able to access the volumes; it automatically switched to node 2 (canister 2) of the SAN.
My plan is to have 2 LUNs, and I will add both host servers to both LUNs.
Then I will set up a cluster.
Srv1 will run 2 virtual servers and Srv2 will run 2 virtual servers. All the virtual hard disks for the virtual servers will be saved to the SAN.
I will save the virtual servers on Srv1 to LUN 1 and those on Srv2 to LUN 2.
I will not be using the SAN for anything else except the virtual machines.
Can you advise whether this will work?
08-Jul-13-5-08-17-PM.png
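The failover behavior tested above (pull canister 1, I/O continues via canister 2) can be sketched in miniature. This is an illustrative Python toy, not the Microsoft initiator or IBM firmware; the portal IPs are made up. The idea is simply that an initiator which knows every discovered portal retries I/O down the list of paths when one dies.

```python
# Hypothetical multipath-failover sketch. A "Canister" stands in for one
# target portal on the SAN; the "Initiator" keeps all discovered portals.

class Canister:
    def __init__(self, ip):
        self.ip = ip
        self.online = True

    def read(self, lun_id, block):
        if not self.online:
            raise ConnectionError(f"{self.ip} unreachable")
        return f"data from LUN {lun_id} block {block}"

class Initiator:
    """Tries each discovered portal in order, failing over on error."""
    def __init__(self, portals):
        self.portals = portals

    def read(self, lun_id, block):
        for portal in self.portals:
            try:
                return portal.read(lun_id, block)
            except ConnectionError:
                continue  # this path failed -- try the next canister
        raise ConnectionError("all paths down")

can1, can2 = Canister("10.0.0.11"), Canister("10.0.0.12")
host = Initiator([can1, can2])

print(host.read(0, 42))   # served by canister 1
can1.online = False        # simulate pulling canister 1
print(host.read(0, 42))   # transparently served by canister 2
```

In the real Windows stack this retry logic lives in MPIO rather than in your code, which is why pulling one canister was invisible to the servers.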
Ok, so a canister in this case means a target IP on the SAN.
I would think that for redundancy, each canister would be on a different network, providing switch fault tolerance.
Are you setting up an active/active cluster for the hosts, or will you be clustering the VMs?
ASKER
It will be active/active for the hosts.
I want to create 3 volumes on the SAN:
1 for the quorum
1 for physical server 1
1 for physical server 2
After using the iSCSI initiator to access the volumes, I am planning to set up clustering.
After setting up the cluster, I will install 2 virtual servers on server 1 and place their virtual hard disks in volume 1 of the SAN. On the second physical server, I will install 2 virtual servers and place their virtual hard disks in volume 2 of the SAN.
Please advise if this will work or whether I should change anything.
I believe you need a quorum disk for every cluster group.
Your application cluster 1 consists of: a cluster IP, an application IP, and cluster resources consisting of a quorum disk plus any additional storage resources required by the application being clustered.
The same has to exist for cluster 2.
Note that the drive lettering between the two cluster groups must be unique.
This deals with a node failure where the remaining node runs both clusters.
You've not said what you are clustering.
I'm not sure whether it is more economical, in terms of consumed resources, to have two Hyper-V hosts clustered with individual VMs that shift between them.
I.e., host1 Hyper-V has VM1-4 while host2 Hyper-V has VM5-7.
If either node goes down, its VMs shift to the other.
http://vimeo.com/63520701
It may help explain the idea.
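The role of that small quorum volume can be shown with the basic vote arithmetic (a simplified sketch of Node Majority with a disk witness, not the actual cluster service): the partition holding a strict majority of votes keeps running, and the minority side stops to avoid split-brain.

```python
# Toy quorum math: two hosts plus one quorum-disk witness = 3 votes total.
# The witness breaks the tie so that losing one host does not stop the cluster.

def has_quorum(votes_alive: int, votes_total: int) -> bool:
    """A partition survives only if it holds a strict majority of votes."""
    return votes_alive > votes_total // 2

TOTAL = 3  # host1 + host2 + witness disk

# One host fails: the surviving host + witness still hold 2 of 3 votes.
print(has_quorum(2, TOTAL))  # True  -> cluster stays up

# Host *and* witness lost: 1 of 3 votes is not a majority.
print(has_quorum(1, TOTAL))  # False -> remaining node halts cluster services

# Without a witness (2 votes total), a single surviving host has 1 of 2
# votes -- no majority -- which is why the quorum disk matters here.
print(has_quorum(1, 2))      # False
```

This is why the asker's plan of a dedicated quorum volume alongside the two data volumes is the right shape for a two-node cluster.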
I saw this, so I thought I would post it. It may be off target:
http://www.aidanfinn.com/?download=Hyper-V Cluster with iSCSI Target
This one just because it has lots of pics:
http://www.msserverpro.com/implementing-windows-server-2012-hyper-v-failover-clustering/
Other:
http://technet.microsoft.com/en-us/library/gg610692.aspx
ASKER
One last question: can I install the cluster on a DC?
I mean, can I make one of the physical servers a DC?
ASKER
And Arnold, I greatly appreciate your help.
Thank you very much :)
SOLUTION
ASKER
Hi Arnold,
Thanks for your help.
You are right.
I was forced to install the cluster on a DC and was getting all sorts of error messages when I verified the cluster.
Demoted it, set up the cluster, and tested it. Works great.
I'm not sure what SAN you are using. It seems "canister" reflects a host.
Look at the LUN allocations.
To cluster the SAN itself, you would need two of them.
In your case, it sounds as though the SAN functions as common storage to cluster the two hosts.
You must not access the LUN from two hosts at the same time, as that will lead to data corruption.