Jason Yu (United States of America) asked:
How can I slice a volume on an HP SAN device and add it to a Linux box?

I have one Linux server running out of disk space. I checked the server, and it looks like all of its hard-disk space has been used up. I have an HP SAN device on my network; can I slice off some volume on the SAN and mount it on the Linux box? Please provide me with a step-by-step manual for this. Thank you.
File-System.jpg
Logical-Volume-Management.jpg
san-device.jpg
SOLUTION by arnold (United States of America)
[Members-only content not shown.]
ASKER (Jason Yu):
I checked the procedure documents on our share server; there is indeed a doc about how to add disks to ASM, and its first paragraph introduces the method to set up iSCSI. But when I checked my Linux server, I found there are already some settings in the /etc/iscsi/initiatorname.iscsi file. Could I change the value?
How-to-add-asm-disks.htm
I use Oracle Linux 5.8, please see the results:



[root@neptune mnt]# uname -a
Linux neptune.XXXXXX.net 2.6.32-300.20.1.el5uek #1 SMP Thu Apr 12 17:47:25 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@neptune mnt]# cat /etc/*-release
Enterprise Linux Enterprise Linux Server release 5.8 (Carthage)
Oracle Linux Server release 5.8
Red Hat Enterprise Linux Server release 5.8 (Tikanga)
[root@neptune mnt]#
I checked the Neptune Linux server; there is already an initiator name in the file:


[root@neptune mnt]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:2138da6f1b6b
[root@neptune mnt]#
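For reference, the initiator name can be changed, but any SAN-side access lists keyed to the old IQN would then have to be updated to match. A minimal sketch, assuming the stock open-iscsi tools on EL5 (`iscsi-iname` generates a fresh random IQN; the init-script name is an assumption for this platform):

```shell
# Inspect the current initiator name first.
cat /etc/iscsi/initiatorname.iscsi

# Only if a new name is really needed: iscsi-iname prints a fresh IQN,
# e.g. iqn.1988-12.com.oracle:<random hex>.
echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi

# Restart the initiator so the daemon picks up the new name
# (EL5-style init script; name assumed).
service iscsi restart
```

Any target ACL or CHAP entry on the SAN that referenced the old IQN must be updated afterwards, or logins will fail.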






[root@neptune mnt]# cat /etc/iscsi/iscsid.conf | more
#
# Open-iSCSI default configuration.
# Could be located at /etc/iscsi/iscsid.conf or ~/.iscsid.conf
#
# Note: To set any of these values for a specific node/session run
# the iscsiadm --mode node --op command for the value. See the README
# and man page for iscsiadm for details on the --op command.
#

#############################
# NIC/HBA and driver settings
#############################
# open-iscsi can create a session and bind it to a NIC/HBA.
# To set this up see the example iface config file.

#*****************
# Startup settings
#*****************

# To request that the iscsi initd scripts startup a session set to "automatic".
# node.startup = automatic
#
# To manually startup the session set to "manual". The default is automatic.
node.startup = automatic

# For "automatic" startup nodes, setting this to "Yes" will try logins on each
# available iface until one succeeds, and then stop.  The default "No" will try
# logins on all availble ifaces simultaneously.
node.leading_login = No

# *************
# CHAP Settings
# *************

# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
#node.session.auth.authmethod = CHAP

# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
#node.session.auth.username = username
#node.session.auth.password = password

# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#node.session.auth.username_in = username_in
#node.session.auth.password_in = password_in

# To enable CHAP authentication for a discovery session to the target
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
#discovery.sendtargets.auth.authmethod = CHAP

# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
#discovery.sendtargets.auth.username = username
#discovery.sendtargets.auth.password = password

# To set a discovery session CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#discovery.sendtargets.auth.username_in = username_in
#discovery.sendtargets.auth.password_in = password_in

# ********
# Timeouts
# ********
#
# See the iSCSI REAME's Advanced Configuration section for tips
# on setting timeouts when using multipath or doing root over iSCSI.
#
# To specify the length of time to wait for session re-establishment
# before failing SCSI commands back to the application when running
# the Linux SCSI Layer error handler, edit the line.
# The value is in seconds and the default is 120 seconds.
node.session.timeo.replacement_timeout = 120

# To specify the time to wait for login to complete, edit the line.
# The value is in seconds and the default is 15 seconds.
node.conn[0].timeo.login_timeout = 15

# To specify the time to wait for logout to complete, edit the line.
# The value is in seconds and the default is 15 seconds.
node.conn[0].timeo.logout_timeout = 15

# Time interval to wait for on connection before sending a ping.
node.conn[0].timeo.noop_out_interval = 5

# To specify the time to wait for a Nop-out response before failing
# the connection, edit this line. Failing the connection will
# cause IO to be failed back to the SCSI layer. If using dm-multipath
# this will cause the IO to be failed to the multipath layer.
node.conn[0].timeo.noop_out_timeout = 5

# To specify the time to wait for abort response before
# failing the operation and trying a logical unit reset edit the line.
# The value is in seconds and the default is 15 seconds.
node.session.err_timeo.abort_timeout = 15

# To specify the time to wait for a logical unit response
# before failing the operation and trying session re-establishment
# edit the line.
# The value is in seconds and the default is 30 seconds.
node.session.err_timeo.lu_reset_timeout = 30
[... remainder of iscsid.conf truncated ...]



I was wondering whether the manual above is correct. How can I modify this file?

thanks.
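As the file's own header notes, per-node values can also be set with `iscsiadm --op update` instead of editing the global defaults. A sketch of that approach, with a placeholder target IQN, portal, and credentials:

```shell
# Sketch: set CHAP parameters on one already-discovered node record.
# TARGET, PORTAL, and the credentials below are placeholders.
TARGET=iqn.2003-10.com.lefthandnetworks:example:1:testlun
PORTAL=10.0.5.4

iscsiadm -m node -T "$TARGET" -p "$PORTAL" --op update \
    -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$TARGET" -p "$PORTAL" --op update \
    -n node.session.auth.username -v myuser
iscsiadm -m node -T "$TARGET" -p "$PORTAL" --op update \
    -n node.session.auth.password -v mysecret
```

Editing /etc/iscsi/iscsid.conf changes the defaults for nodes discovered afterwards; `--op update` changes a record that already exists.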
SOLUTION
[Members-only content not shown.]
You understand correctly; I am not sure whether the file is the default configuration or whether somebody edited it. If it is the original config file, I can just go ahead and follow the manual.

However when I move to step 3 it gives me the error like below:

3. Discover the devices


[root@poseidon ~]# iscsiadm -m discovery -t st -p 10.0.5.4 -I eth1
iscsiadm: discovery login to 10.0.5.4 rejected: initiator failed authorization

iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure
[root@poseidon ~]#
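An "initiator failed authorization" error usually means the target does not recognize this initiator: either the IQN is not registered on the SAN, or discovery-session CHAP credentials are missing or mismatched. A sketch of how discovery CHAP would be supplied if the SAN requires it (the username and password are placeholders; skip this entirely if the SAN authorizes by initiator IQN):

```shell
# Sketch: uncomment/set these in /etc/iscsi/iscsid.conf (placeholder creds):
#   discovery.sendtargets.auth.authmethod = CHAP
#   discovery.sendtargets.auth.username = myuser
#   discovery.sendtargets.auth.password = mysecret

# Then retry SendTargets discovery against the portal.
iscsiadm -m discovery -t sendtargets -p 10.0.5.4
</imports>
```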
SOLUTION
[Members-only content not shown.]

SOLUTION
[Members-only content not shown.]
Same error:



[root@poseidon ~]# iscsiadm -m discovery -t st -p 10.0.5.4
iscsiadm: discovery login to 10.0.5.4 rejected: initiator failed authorization

iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure
[root@poseidon ~]#
Not sure what your routing table is like; you may have to go through eth0.

It's a brand-new Oracle Linux server; I haven't modified the routing table yet.

Does the system currently have any iSCSI-allocated resources?
If not, access to the iSCSI targets might be limited to a specific IP or range of IPs, of which this system is not one.

I don't think so, because I am testing this manual on a brand-new server. It doesn't have any iSCSI-allocated resources.

I don't understand the last question. Where can I find the "specific IP range"? Is it on the SAN management interface? In fact, I don't know what the IP "10.0.5.4" in the above command refers to. The two SAN devices' IPs are "10.0.5.2" and "10.0.5.3". Is "10.0.5.4" a virtual IP of the SAN device?

Thank you for your help.
Good point: I will first try to add this server on the SAN management interface.


Have you created a LUN on the P4000 yet and configured it so your initiator has access? iqn.1988-12.com.oracle:2138da6f1b6b is your server's initiator name; you can copy that string from your /etc/iscsi/initiatorname.iscsi and paste it into the HP CMC when you create a new server on that console if needed.

Until you create the server under the CMC, I don't think the P4000 will allow it to perform SendTargets discovery, and until you've created a new LUN and associated it with your initiator, it would only show an empty list of targets anyway, unless it's been badly misconfigured.
ASKER CERTIFIED SOLUTION
[Members-only content not shown.]

SOLUTION
[Members-only content not shown.]
Very good, it works now.



[root@poseidon ~]# iscsiadm -m discovery -t st -p 10.0.5.4
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:761:talos-fra
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:759:talos-data
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:660:hermes-fra
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:658:hermes-data
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:656:cronus-fra
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:654:cronus-data
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:652:athena-fra
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:650:athena-data
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:648:asmdata2
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:646:asmdata1
[root@poseidon ~]#
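With discovery working, the next step would be to log in to the target you want; the login itself obviously needs the live SAN, so it is shown commented out below. The awk line just pulls the IQN column out of saved discovery output like the listing above:

```shell
# Extract the target IQNs (2nd column) from saved SendTargets output.
discovery_out='10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:646:asmdata1
10.0.5.4:3260,1 iqn.2003-10.com.lefthandnetworks:bradford:648:asmdata2'
echo "$discovery_out" | awk '{print $2}'

# Sketch: log in to one target and verify the session (requires the SAN):
# iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:bradford:646:asmdata1 \
#          -p 10.0.5.4 --login
# iscsiadm -m session
```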
andyalder was right.

This server was not listed as a separate server, but it was listed in the RAC group, dbRAC(3). Unfortunately, the CHAP name was not iqn.1988-12.net.minkagroup.poseidon:6912b8f5c4a5; somebody worked on this testing server before and added it to the SAN server list using another CHAP name, as shown in the attached picture.

I am glad it's working on the poseidon server, which is a testing server. I will be moving on to the production server now.

Thanks. I will update if I get any issues on production.
old-poseidon-server-SAN-CHAP-Nam.jpg
SOLUTION
[Members-only content not shown.]
It was a separate Linux server in a testing environment. I need to expand space on this testing server to add one more instance. It is not in the production cluster.

In fact, I didn't understand your question. Do you mean that a Linux server with an Oracle DB running on it in a cluster would get into big trouble if the space is maxed out?

Thank you.
After I make a volume on the SAN and assign it to the server, how can I access this LUN from the Linux server?
Member_2_231077:
Is the whole P4000 just for test, or are clusters such as dbRAC(3) part of production? If you don't understand my question, then log out of the SAN management GUI now and turn the test Linux box off, preferably by yanking the plug out, so it doesn't write to the storage to close the filesystem.
The whole P4000 IS FOR PRODUCTION! And the cluster dbRAC(3) is our main production servers.
I did follow your instruction and powered off the Poseidon server by pressing the power button.

Is it very risky? I feel anxious about this; could you explain it to me?

What I did was just modify the CHAP name to match what is shown on the Poseidon server:

iqn.1988-12.net.minkagroup.poseidon:6912b8f5c4a5

The other two servers in this RAC are our Oracle DB servers in production. I am sweating now. What would happen if I left the poseidon server on?

Please help, thank you very much.
In short, you cannot have two initiators writing to the same target/LUN.
The reason is that iSCSI is a command channel: SCSI commands over IP. When the LUN is mapped, the system has information on what is in use and where it is in use.
When a change occurs, the system sends the appropriate commands and gets responses that it uses to update the storage map.
If you have two hosts connect to the same target/LUN, updates made by host1 are not seen by host2, so the possibility exists, depending on the SAN in use, that host1 might write onto a block previously written to by host2.
In simpler terms: host1 and host2 each map a disk to the same target/LUN. host1 adds a file filename1 and host2 adds a file filename2; each will only see the file it created. If you re-initiate the iSCSI connection (disconnect/reconnect), each host will then see both filename1 and filename2.

If you are testing, see if your SAN has free/unallocated space. Create a new LUN and a new iSCSI target. Assign the new LUN to the newly created iSCSI target and allocate this to the host you are testing with.
Got it, I think it's safe now.

https://www.experts-exchange.com/questions/28022943/Is-it-safe-to-remove-a-server-from-Server-Cluster-on-HP-P4000-SAN-device.html

I was scared this afternoon and asked another question here about deleting this server from the server cluster on the SAN interface. I will post your answer there too.
Great to know this, thank you very much. What I did was just change one server's CHAP name in a server cluster to match the server's real initiator name, which was set up in the above initiatorname.iscsi file. I think it should be OK if I just add or remove this server from the server cluster. I will read more articles and the manual for this HP SAN device before I go.
Yes, remove it from dbRAC on the SAN. The (3) indicates 3 hosts that you assign shared LUNs to, so it will show as dbRAC(2) afterwards.

Shared LUNs are used in shared-storage clustering; both nodes can see the LUN, but normally one is in a read-only state, waiting to take over ownership in case the other node fails.

Add it as another stand-alone server like Perseus, or make a new group called test and add it to that. If you have a new group and another test server, you can have some fun: put both those servers in the group and assign a new LUN to it, format it with ext3 from one test server, then mount it on both of them and write data to it from one of them. You won't even see the files from the other server unless you umount and mount again.
I followed your post, created a new volume, and assigned the poseidon server to this volume.

But the problem is: how can I see this volume on the Poseidon server? Does it show up in the /dev folder?

Thank you.
testlun.jpg
more /proc/scsi/scsi
dmesg will list the volume as well.
You need to look in the server's hardware disk manager.
lvmdiskscan
fdisk -l
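Once the disk shows up (as a new /dev/sd* node after the iSCSI login), it still has to be partitioned or handed to LVM, formatted, and mounted before the space is usable. A sketch, assuming the new iSCSI disk appeared as /dev/sdg; the device name, volume-group/logical-volume names, and mount point are all placeholders:

```shell
# Identify the new disk first; the LEFTHAND entries in /proc/scsi/scsi
# correspond to /dev/sd* nodes listed by fdisk -l.
fdisk -l

# Sketch: give the whole disk to LVM and mount it (placeholder names).
pvcreate /dev/sdg
vgcreate sanvg /dev/sdg
lvcreate -l 100%FREE -n sanlv sanvg
mkfs.ext3 /dev/sanvg/sanlv
mkdir -p /mnt/san
mount /dev/sanvg/sanlv /mnt/san
```

For boot-time mounting of iSCSI-backed filesystems, the usual practice is the `_netdev` option in /etc/fstab so the mount waits for the network and the iSCSI service.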
[root@poseidon rules.d]# more /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 01 Id: 00 Lun: 00
  Vendor: Dell     Model: Virtual Disk     Rev: 1028
  Type:   Direct-Access                    ANSI SCSI revision: 06
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD2502ABYS-1 Rev: 3B05
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: ATA      Model: WDC WD2502ABYS-1 Rev: 3B05
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 02 Lun: 00
  Vendor: ATA      Model: WDC WD2502ABYS-1 Rev: 3B05
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 03 Lun: 00
  Vendor: ATA      Model: WDC WD2502ABYS-1 Rev: 3B05
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: TEAC     Model: DVD-ROM DV-28SW  Rev: R.2A
  Type:   CD-ROM                           ANSI SCSI revision: 05
Host: scsi23 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi25 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi30 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi27 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi22 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi24 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi28 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi26 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi21 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi20 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi32 Channel: 00 Id: 00 Lun: 00
  Vendor: LEFTHAND Model: iSCSIDisk        Rev: 9500
  Type:   Direct-Access                    ANSI SCSI revision: 05
[root@poseidon rules.d]#
Very good solution.