chompone
asked on
HP SAN, LUN, and iSCSI help
Our organization just purchased and 'installed' an HP MSA2312 SAN device. Currently it has two physical hard drives installed:
- 1 x 600GB
- 1 x 2TB
It has two storage controllers, each with an ethernet port (for management) and 2 iSCSI gigabit ports.
I created a vdisk using the 2TB drive and 3 equal sized volumes of about 665GB in that vdisk.
The 4 iSCSI ports connect into an HP 2524 Gigabit switch. The servers that will use the SAN connect into this same switch as well.
From what I understand, you cannot 'mount' the same iSCSI drive (volume?) to multiple machines without causing corruption unless you have a clustered file system.
What I would like to do is connect each of the 3 volumes to a different server, but make it possible for the transfer of data to take place over any of the 4 iSCSI ports on the controllers. Is this possible? Does an iSCSI connection have to be mapped to a specific port and specific server? As long as I have different volumes, can I mount them to any number of servers using any combination of the 4 ports?
Please let me know if I need to clarify anything
How many iSCSI ports does the array have? You say 2 and also 4. What are you looking to achieve?
ASKER
The storage array has 2 controllers, each with 2 iSCSI ports for a total of 4.
Eventually, we will probably have 5 or 6 servers connected to the array. I would like to make sure that all servers have all 4 ports available (in case say, 3 of the 4 are maxed out with transfers).
At the same time, I need to make sure this is set up correctly - I've read things about allowing only 1 iSCSI connection per drive and/or volume, otherwise you'll get data corruption.
My experience is with Dell EqualLogic, but you shouldn't have to worry about the physical ports. The MSA should have an address that you connect to, and then the MSA and your iSCSI initiator figure out which ports the traffic gets sent over, particularly if you have MPIO enabled. With MPIO, you may have more than 1 iSCSI connection per volume, but they would all be from the same initiator (server). Say your server has 2 NICs being used for iSCSI traffic, then you would have 2 iSCSI connections for that volume, but they would all be from the same server so that's okay.
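As a minimal sketch of that rule (plain Python with made-up names, not any real SAN or initiator API): several sessions to one volume are fine for MPIO as long as they all come from the same initiator, but sessions from different initiators need a clustered file system.

# Conceptual sketch only: models the rule that multiple iSCSI sessions to a
# volume are safe for MPIO when they all belong to the same initiator (server).
def sessions_are_safe(sessions, clustered_fs=False):
    """sessions: list of (initiator_iqn, volume_name) tuples."""
    by_volume = {}
    for initiator, volume in sessions:
        by_volume.setdefault(volume, set()).add(initiator)
    # A volume with sessions from more than one initiator risks corruption
    # unless a clustered file system coordinates the hosts.
    return all(len(initiators) == 1 or clustered_fs
               for initiators in by_volume.values())

# Two NICs on the same server logged into one volume (MPIO): safe.
print(sessions_are_safe([("iqn.1991-05.com.microsoft:server1", "vol1"),
                         ("iqn.1991-05.com.microsoft:server1", "vol1")]))  # True
# Two different servers logged into the same volume: not safe.
print(sessions_are_safe([("iqn.1991-05.com.microsoft:server1", "vol1"),
                         ("iqn.1991-05.com.microsoft:server2", "vol1")]))  # False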
I have about 80 active volumes on my SAN, with maybe 2 dozen servers attached, 460 iSCSI connections, and it all feeds through only 3 active 1 Gb ports, and the ports are never really busy. It's really easy to disconnect an iSCSI volume from a server and reconnect it, so feel free to play around. If you need to redo how you allocate the storage, that might involve deleting some volumes and starting over. Your MSA should allow you to dynamically grow volumes, so I make my volumes only as large as I need them right now. Making the volumes larger on the SAN and then expanding the partition to fill the additional space takes just a few minutes and can be done online with Windows 2003/XP and higher.
In case you didn't know, you should set up your iSCSI volumes as basic disks on the host. Microsoft doesn't support using dynamic disks with iSCSI.
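If it helps, here's a hedged sketch of the host-side step after growing a volume on the SAN: rescan so Windows sees the new size, then extend the existing partition. It assumes a basic disk mounted as F: on Windows and an elevated prompt; the drive letter and the use of Python to drive diskpart are just for illustration.

# Minimal sketch, assuming the grown volume is the basic disk mounted as F:.
# diskpart /s runs the commands in a script file non-interactively and must be
# run with administrator rights.
import os
import subprocess
import tempfile

script = "rescan\nselect volume F\nextend\n"
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    path = f.name
try:
    subprocess.run(["diskpart", "/s", path], check=True)
finally:
    os.remove(path)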
You really should have two switches rather than one, and have a different subnet on each. For testing you can just use two VLANs.
Here's a topology diagram and walkthrough for setting it up, http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=hk&taskId=110&prodSeriesId=3687128&prodTypeId=12169&objectID=c01655273
BTW, dynamic disks are now supported when using iSCSI with Windows 2008, but basic disks are still better.
ASKER
andyalder: That link shows how to set up two switches for redundancy/failover. Is there another reason why I should have two switches, or is it primarily for failover?
So I can have one physical hard disk, broken into several volumes, and have several servers connect, each to its own volume, as long as there is only 1 server and 1 iSCSI connection per volume, correct?
The reason this came up in the first place was because I connected two servers to the same volume and mounted the disks, but when I would copy files to the new disk on one server, they wouldn't show up on the other server, and vice versa.
Dual switches are primarily for redundancy: if you lose a LAN switch the clients just get disconnected, but if you lose a SAN switch your data gets a bit corrupt. You've probably heard of the dreaded "lost delayed write data" message, where the server fails to write the data to disk.
Yup, you can split one disk into several separate iSCSI targets and present each to a single host. You can of course present two targets to a host, such as data and transaction logs, although it's not a good idea to have them on the same spindles.
NTFS isn't a shared filesystem; the OS caches part of the master file table in RAM to speed up access, and shared filesystems deal with that by having each server send all the others messages about what it has updated. NTFS is shared when using a Hyper-V cluster, but the files on that are pretty much static in size and position as they're virtual hard disks. Just like in an active/passive cluster, the files on a clustered NTFS volume aren't ever accessed by more than one host at a time.
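A toy illustration of that cached-metadata problem (plain Python, nothing NTFS-specific, all names invented): each host keeps its own copy of the file table, and whichever host flushes last silently discards the other's changes - which is exactly the "files not showing up" symptom described above.

# Two hosts mount the same non-clustered volume and cache the file table.
disk_file_table = {"existing.txt": 1024}

host_a_cache = dict(disk_file_table)   # server A mounts the volume
host_b_cache = dict(disk_file_table)   # server B mounts the same volume

host_a_cache["from_a.txt"] = 2048      # A copies a file onto the disk
host_b_cache["from_b.txt"] = 4096      # B copies a different file onto it

disk_file_table = dict(host_a_cache)   # A flushes its cached table
disk_file_table = dict(host_b_cache)   # B flushes later and overwrites it

print(disk_file_table)                 # {'existing.txt': 1024, 'from_b.txt': 4096}
# A's file is gone: without a clustered file system coordinating the hosts,
# each volume should be owned by exactly one server.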
ASKER
Also, can you explain in a little more detail about the dual switches with regard to different subnets?
I can see that having each controller connect to two different switches will provide for failover, but I don't understand why the different subnets are needed.
SOLUTION
ASKER
Here is the main problem I am having, described below. This is my first time with a SAN device, so that adds to my confusion. Also, I understand that I should have 2 switches for failover, but right now I am just trying to get things set up and working as is.
The MSA2312 has 4 iSCSI ports, 2 per controller. I have plugged all four iSCSI ports into a gigabit switch. Each server also plugs into this same gigabit switch. There is a 600GB drive and a 2TB drive in the MSA2312. I created 3 volumes from the 2TB drive, each the same size (~665GB). I 'mapped' each volume to all 4 ports of the MSA2312. I did this so that the iSCSI connection can be made through any port on the device. Here are the IP addresses for those ports:
192.168.2.240
192.168.2.241
192.168.2.242
192.168.2.243
I configured Microsoft's iSCSI initiator on one server, using 192.168.2.240 as the target portal, then went into Disk Management and created a disk labeled F:, and it is working normally.
I then went to a different server, used the iSCSI initiator with target portal 192.168.2.241, and then the F: disk gets mounted on this server automatically. However, this isn't what I want to happen. I wanted to load the second volume and create a totally different disk/partition for this server, corresponding to the second of the three volumes in the MSA2312. But now, both servers have the same volume loaded, which will lead to corruption.
Basically, how do I tell each server to connect to a specific volume? I can see that if I map a volume to only 1 of the 4 ports on the MSA2312 I can restrict it, but I want to be able to have more than 4 volumes and 4 servers. I know I am doing something wrong both in my understanding and implementation - please help. And thanks andyalder and kevinhsieh for your help so far.
ASKER CERTIFIED SOLUTION
ASKER
Kevin,
Thanks, the initiator name works for me. I also set explicit mapping permissions, so that only certain volumes show up for certain servers. I'll continue to look at the documentation for more detailed admin.
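A conceptual sketch of why that works (plain Python, not the MSA CLI or any real API; the volume names and IQNs are illustrative, the portal IPs are the ones from this thread): mapping a volume to controller ports only controls which portal IPs expose it, so any server logging in through those IPs still sees it, while explicit mapping by initiator name is what ties a volume to one server.

# Each volume is mapped to all four portal IPs, but explicit mapping limits it
# to a single initiator IQN.
volumes = {
    "vol1": {"mapped_ports": {"192.168.2.240", "192.168.2.241",
                              "192.168.2.242", "192.168.2.243"},
             "allowed_initiators": {"iqn.1991-05.com.microsoft:server1"}},
    "vol2": {"mapped_ports": {"192.168.2.240", "192.168.2.241",
                              "192.168.2.242", "192.168.2.243"},
             "allowed_initiators": {"iqn.1991-05.com.microsoft:server2"}},
}

def visible_volumes(initiator_iqn, portal_ip):
    """Volumes an initiator sees when it logs in through a given portal IP."""
    return [name for name, v in volumes.items()
            if portal_ip in v["mapped_ports"]
            and initiator_iqn in v["allowed_initiators"]]

# server2 logging in through .241 sees only vol2, even though vol1 is mapped
# to that port as well: the initiator name is what restricts access, not the
# choice of portal IP.
print(visible_volumes("iqn.1991-05.com.microsoft:server2", "192.168.2.241"))  # ['vol2']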