iSCSI Drives Not Usable for Shared Storage on a Microsoft Cluster

I am still in the process of setting up our virtual server cluster and have run into another problem.

We have an iSCSI SAN unit hooked up to two Windows 2003 servers.  No issues there.

The iSCSI connection from both servers is over an onboard NIC (one of two).  The other NIC is of course the public NIC.
The internal hard drives of both nodes are also on an onboard SATA controller.

I went to configure the first node in the cluster, and all iSCSI drives are ineligible for use as shared resources - the error is: "Drive X: cannot be managed because it is on the same storage bus as the boot disk".  

The NICs are on a different PCI bus (4), however the PCI bridge that they are on is on "bus 0" according to Windows device manager.  The onboard RAID controller also appears to be on "bus 0".  

Is it possible that, in the way the cluster analyzer enumerates devices, the onboard RAID controller and the PCI bridge/NICs end up looking like they share a bus, and that is what causes the error I am seeing?  I find it hard to believe, since so many devices are embedded these days that a bridge device is ultimately sharing a bus at some point (at least on smaller servers).

Any ideas?  

asmgi asked:

Handy Holder (Saggar maker's bottom knocker) commented:
The message refers to a storage bus, which in this context means a zone on a Fibre Channel fabric rather than a PCI bus. The check exists to prevent LIPs from upsetting the SAN fabric, so it doesn't make much sense in an iSCSI environment. Maybe you have not set up a separate VLAN for iSCSI?

Setting HKLM\SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters\ManageDisksOnSystemBuses to 0x01 allows disks on a shared storage bus to be managed, but I may be on the wrong track. download.microsoft.com/download/7/b/5/7b555ca0-297d-4a04-a7ea-5f8b0089b249/SAN.doc talks about shared storage buses.
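
If you want to try it, the value can be added from a command prompt before re-running the cluster analysis; something along these lines (a REG_DWORD with value 1) should do it, though treat it as a sketch rather than gospel:

    rem Allow the cluster to manage disks that appear to share a storage bus with the system disk
    reg add HKLM\SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters /v ManageDisksOnSystemBuses /t REG_DWORD /d 1

You may need to re-run the setup wizard (or restart the Cluster service) before the change is picked up.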

You have a two-port NIC, one for public traffic and one for iSCSI. Where's the heartbeat? Best practice is to give it a dedicated NIC.
 
asmgi (Author) commented:
Thank you for your input.  Yes, I am aware that the heartbeat technically requires another adapter.  We may or may not add one for best performance/availability.

The errors in the attached file may shed some light on the issue.  When MSCS enumerates all the disks, they all show up on 'bus 0', thus the system will not allow the iSCSI disks to be used as shared storage.  What I am trying to figure out is why MSCS sees the disks this way and where it gets this information from (the registry?).




Attachment: cluster.txt
 
asmgi (Author) commented:
Addition -

I wanted to thank you for that SAN document.  Although it does not cover iSCSI specifically, it did allow me to create the first node in my cluster on the shared SAN disks.  I put that registry key in and re-analyzed during the setup wizard, and it found my quorum disk and the other volumes.

I am still curious why Windows thinks the iSCSI disks are local, though, and I am not sure whether it's due to the motherboard/resources or something to do with the switch configuration.  To answer your question, no, the ports on the switch are not segmented on a VLAN for the iSCSI connections.  We will probably do that before moving the servers into production.

If you or anyone else have any ideas on this, please let me know.  I'll leave this question open for a bit since my main question is still technically unanswered.
 
Handy Holder (Saggar maker's bottom knocker) commented:
Yes, it's weird, since the internal disks certainly aren't on a shared storage bus at all. The only other reference I can find to this error with iSCSI is www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/Q_21188539.html, which doesn't have a solution, just workarounds. It may be that when the iSCSI traffic isn't VLANned away from the rest of the LAN, Windows simply gets confused when enumerating disks on the network, and the error is a bit of a red herring.
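
As a sanity check you could also look at how Windows itself reports the bus for each disk; plain WMI from a command prompt, nothing cluster-specific, for example:

    rem Show the SCSI port/bus/target Windows has assigned to each disk, including the iSCSI LUNs
    wmic diskdrive get Caption,InterfaceType,SCSIPort,SCSIBus,SCSITargetId

If the iSCSI LUNs come back on the same port/bus numbers as the local SATA disks, that would at least explain why the cluster wizard lumps them in with the boot disk.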

As a matter of interest what hardware is it?
 
Handy Holder (Saggar maker's bottom knocker) commented:
I also found http://support.microsoft.com/kb/886569. Again, it's talking about a shared SAN fabric rather than iSCSI, but it refers to the updated Storport driver that fixes a lot of problems. Beware that it can also cause problems if you haven't got the latest drivers for your hardware RAID controllers.
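
If you want to see which Storport build you're on before and after applying the hotfix, something like this should show the file version (the path assumes a default %systemroot%, so adjust it if yours differs):

    rem Report the version of the installed Storport driver
    wmic datafile where name="C:\\WINDOWS\\system32\\drivers\\storport.sys" get Version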
 
asmgi (Author) commented:
The hardware on both the SAN and cluster nodes is identical, except for the RAID array/controller on the SAN unit.

Custom-built servers with a SuperMicro X7DVL-i motherboard and two dual-core CPUs.  Basically, they have an embedded Intel ESB RAID controller for the local disks, and all other I/O is handled by the Intel chipset as well.  8 GB RAM, two SATA hard drives in a RAID 1 configuration.

The SAN has a 3Ware controller in it, but the rest is the same.

I thought about contacting the vendor that built the systems to see if they are aware of any issues.  These systems were spec'd out this way since we knew how we were going to use them.

I'll read those documents as well.

Thanks


 
asmgi (Author) commented:
Thanks for this - although it is still unclear why Windows thought the iSCSI devices were local, I am fairly sure it is a hardware/device issue.  Your solution allowed me to complete my cluster and get our VM environment up and running!  It works beautifully.
Thanks!