solaris 8 emulex lun sd.conf

When do we need to add LUN information in sd.conf? How do we identify whether an HBA card is Oracle-branded or Emulex? Which driver should be loaded into the Solaris 8 kernel for an Emulex card?
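For context, one way to check which driver stack is in play (a sketch; on Solaris 8, Sun/Oracle-branded Emulex cards typically run the Sun emlxs/Leadville driver, while retail Emulex cards run Emulex's own lpfc driver):

# modinfo | grep -i emlxs    # loaded if the Sun-branded emlxs driver is in use
# modinfo | grep -i lpfc     # loaded if the retail Emulex lpfc driver is in use
# prtdiag -v                 # lists the installed adapters and their slots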

Recently, a reconfiguration reboot after editing sd.conf took about 3-4 hours. How can we avoid this slow booting? sd.conf was edited with 256 lines (one per LUN, 0-255) like:

The contents below are only indicative:

name= x target=0 lun=0
.
.
xxxxxxx target=0 lun=255
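
For reference, live entries for the non-native Emulex lpfc driver usually take this form in /kernel/drv/sd.conf (a sketch; parent="lpfc" is assumed for the Emulex driver):

name="sd" parent="lpfc" target=0 lun=0;
name="sd" parent="lpfc" target=0 lun=1;
...
name="sd" parent="lpfc" target=0 lun=255;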

How do I know which target the LUNs are coming in on, so I can specify them correctly?
OS admin Only asked:

arnold commented:
Usually you can use devfsadm: https://docs.oracle.com/cd/E19455-01/806-0625/6j9vfill4/index.html
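
A minimal invocation, assuming the relevant target/LUN entries already exist in sd.conf:

# devfsadm -v -c disk    # build /dev and /devices entries for newly visible disks
# devfsadm -C            # clean up any dangling /dev links afterwards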

There are also Emulex commands to refresh the device tree... though I'm not sure exactly what you are asking.

It seems my memory is not what it was...
https://docs.oracle.com/cd/E19168-01/819-1274-14/appd_solaris.html answers the need for a reconfiguration reboot after sd.conf changes on Solaris 8.
On Solaris 9, devfsadm means no reboot is needed.

http://www.oracle.com/technetwork/server-storage/solaris/overview/emulex-corporation-136533.html
OS admin Only (author) commented:
OK, let's move to an example with a set-up like this: I have Sun hardware with two Emulex HBA cards, and both targets have 256 (0-255) LUN entries in sd.conf. If I list the devices at the OS level with powermt, it shows 236 unique logical devices. And we need to bring 400+ more LUNs into this server. How can this be achieved, if it is possible at all?
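
(For reference, the device count above comes from PowerPath output along these lines; a sketch using standard powermt subcommands:)

# powermt display dev=all | grep -c "Pseudo name"    # count unique logical devices
# powermt config                                     # bring newly presented LUNs under PowerPath control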
arnold commented:
Not sure I understand the scenario under which you want/need to allocate 400+ LUNs, presumably from the same SAN.

In such quantities the boot process takes a long time, which suggests this is not an optimal layout.
How often you need to reboot is of little consequence; I think updates to Solaris 8 are over, but still.

Are these multipath allocations, i.e. the same LUN accessible via either path?
andyalder (Saggar maker's framemaker) commented:
https://www.ibm.com/support/knowledgecenter/STUVMB/com.ibm.storage.ssic.help.doc/f2c_confignonsunadap_3vdlvy.html says sd.conf isn't used if you have recent drivers.

HBAnyware or OneCommand Manager will tell you whether it is an Emulex card.
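
For example (a sketch; the CLI path varies with the package version installed):

# /usr/sbin/hbanyware/hbacmd listhbas    # prints model (e.g. LP9002), port WWN and firmware per adapter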
arnold commented:
This is a Solaris 8 system.
I'm not sure the reference to the IBM knowledge base applies, even if the SAN is from IBM.
OS admin Only (author) commented:
Answer here

When I had to perform this activity, it went like this. The first thing we do in such an activity is list the HBA controllers using

# luxadm -e port
# ls -l /dev/cfg/c*

to compare and confirm the HBA physical paths. But on this server, the luxadm command reported an error related to *get fcp topology*, and the /dev/cfg/c* listing didn't show the HBA paths either. (What a spanner in the works on a server running Solaris 8 on a Sun Fire 480R, I thought.) So I went on to identify the HBA card and its make from
# prtdiag -v
# grep "lpfc\"$" /etc/path_to_inst

This confirmed that I had Emulex LP9002 HBA cards with non-native drivers, with instance numbers 0 and 1.
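
Illustrative path_to_inst lines (the physical paths here are placeholders; the real ones depend on the slot the card sits in):

"/pci@8,600000/fibre-channel@1" 0 "lpfc"
"/pci@9,600000/fibre-channel@2" 1 "lpfc"

The middle column is the instance number that lputil takes as its argument below.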

OK, so I had the basic info to start with. I ran
# lputil shownodes 0
# lputil shownodes 1

This shows the current targets on the server and the WWN persistent bindings. They will be for lpfc0t0 and lpfc1t0, depending on what your lpfc instance numbers and targets are, as shown in path_to_inst and the shownodes output.

The existing bindings are marked as "mapped FCP nodes". For the current target and WWN bindings, the sd.conf file already had LUNs 0-255 mapped to each target on each lpfc instance.
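
A quick sanity check that the full 0-255 range really is present for a given instance/target pair (assuming entries in the name="sd" parent="lpfc" form and consistent spacing):

# grep -c 'parent="lpfc" target=0' /kernel/drv/sd.conf
256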

Now we asked the storage administrator to map the LUNs. Once he confirmed masking of the new LUNs to the server, we ran the following again:

# lputil shownodes 0
# lputil shownodes 1

This time, each instance showed a new line of output marked "Automapped FCP node", with a new target and WWN pair.

This shows the registration of a new target/WWN pair for the new LUNs, automatically mapped between the HBA port and the SAN switch port.

Now, since this is automapped and liable to change across reboots and LUN re-mappings, we make persistent bindings of the new target on lpfc to the respective WWN:

> Open lpfc.conf and append the details in the bindings section, or run lputil and use its menu-driven CLI options (see the sketch after these steps).
> Open sd.conf and, for each lpfc instance and target ID, list the 256 LUNs, in the same form as the existing entries.
Then do a reconfiguration reboot:
# reboot -- -r
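
What the appended entries might look like (a sketch: the WWPN is a placeholder, the fcp-bind-WWPN syntax is assumed from the Emulex lpfc.conf documentation, and target 1 stands in for whatever new target lputil reported):

In /kernel/drv/lpfc.conf:
fcp-bind-WWPN="2000123456789abc:lpfc0t1";

In /kernel/drv/sd.conf, repeated for each new instance/target pair:
name="sd" parent="lpfc" target=1 lun=0;
...
name="sd" parent="lpfc" target=1 lun=255;

An alternative to reboot -- -r is touch /reconfigure followed by init 6; either way the next boot is a reconfiguration boot.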

And there were the LUNs.

But the reconfiguration reboot didn't fix the luxadm -e port error or the absent device files for the HBA controllers. Less than all fine.

OS admin Only (author) commented:
I performed the activity successfully as explained in the comment.