AIX - How to add LUN to a server

AIX25 asked:
My SAN admin will be adding 3 x 200 GB of EMC SAN storage to an AIX server running 5.3. What do I need to do to add the LUNs to the server?
Commented:
Hi,

Direct-attached or over VIOS?

If direct-attached, and if you have the required drivers installed (EMC PowerPath),
you'll just have to run "cfgmgr" (or the EMC equivalent, but I'm rather sure that cfgmgr will call it on its own),
then add the new disks (hdiskpowerx) to an existing VG by means of

extendvg vgname hdiskpowerx

or create a new VG by means of

mkvg ...

and you're done.

If the LUNs are accessible over multiple paths (which implies that your SAN admin configured the correct host mappings) the EMC drivers will detect this and configure the paths accordingly. No particular action required.
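
For example, to double-check right after "cfgmgr" (assuming PowerPath is installed; these commands are just a quick illustration):

lsdev -Cc disk | grep power        # list the hdiskpower pseudo-devices
powermt display dev=all            # show the native paths behind each PowerPath device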

Accessing the new LUNs over VIOS involves essentially the same steps as above, but between running "cfgdev" on the VIOS and extendvg/mkvg on the LPAR you must additionally make the LUNs available to the LPAR (over VSCSI) by means of

chdev -dev hdiskpowerx -attr pv=yes
mkvdev ...

on the VIOS and "cfgmgr" on the LPAR.

If there is more than one VIOS, the disks must be configured with "no_reserve" on each VIOS:

chdev -dev hdiskpowerx -attr reserve_policy=no_reserve
chdev -dev hdiskpowerx -attr pv=yes

Then run mkvdev on each VIOS, and cfgmgr plus extendvg/mkvg on the LPAR, as described above.
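
A minimal sketch of the mapping step, after the chdev commands above (the vhost adapter and virtual target device names are only examples, adjust them to your setup):

On each VIOS (as padmin):

mkvdev -vdev hdiskpowerx -vadapter vhost0 -dev vtdb03

Then on the LPAR:

cfgmgr
lspv        # the new hdisk(s) should appear here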


wmp

Author

Commented:
I believe it's a directly attached server... correct?

root@server[/]# prtconf -L
LPAR Info: 1 00-7777H
root@server[/]#

Commented:
No, it's an LPAR, otherwise prtconf would have returned "-1" or NULL or both.

But this has nothing to do with directly attached LUNs vs. LUNs over VIOS,
since an LPAR can very well have its own FC adapters.

So please run

lsdev -Cc adapter |grep fcs

Do you see fcs0 (fcs1, fcs2,...) ?

Author

Commented:
Yes, I do see fcs0, 1, and 2.

root@server[/]# lsdev -Cc adapter |grep fcs
fcs0    Available 01-08 FC Adapter
fcs1    Available 05-08 FC Adapter
fcs2    Available 09-08 FC Adapter
root@server[/]#
Commented:
OK, so you have dedicated FC adapters.

Do you have three single-port adapters? (It seems so.)

Anyway, if your SAN admin did their job you should now run:

lspv

(to get a "before" image)

cfgmgr

and

lspv

(to get an "after" image)

Are there new hdiskpowerx devices now?
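
One convenient way to capture the before/after comparison (the temporary file names are just an example):

lspv > /tmp/lspv.before
cfgmgr
lspv > /tmp/lspv.after
diff /tmp/lspv.before /tmp/lspv.after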

Author

Commented:
The SAN admin is doing his part either tonight or early tomorrow morning, and it should be ready for me in the late AM. I will update my post once the SAN admin has completed his part.

Author

Commented:
Yes, there are new ones.

hdiskpower66    none                                None
hdiskpower67    none                                None
hdiskpower68    none                                None

The SAN admin gave me 3 x 200 GB of SAN storage.

I need to create a new VG called dbvg03 and a new FS called dbdata03, and allocate all 600 GB (3 x 200 GB) to the new FS dbdata03. Please assist.
Commented:
OK,

the first step is essentially the same as with our "paging space" case.

Are you using standard, big or scalable VGs in your environment?

Check with "lsvg vgname":

Standard VGs have "MAX PVs:        32/MAX PPs per PV:     1016",
big VGs have "MAX PVs:        128/MAX PPs per PV:     1016"
and scalable VGs usually have "MAX PVs:      1024/MAX PPs per PV: (not set, unlimited)".
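
To see just these fields, you can filter the output, e.g.:

lsvg vgname | grep "MAX"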

I'd always suggest scalable VGs. They offer the highest flexibility and allow for much larger disks (who knows?).

Basically, create a scalable VG with:

mkvg -S -y dbvg03 -s 256 hdiskpower66 hdiskpower67 hdiskpower68
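
To double-check the new VG (optional):

lsvg dbvg03              # shows the PP size and the total/free PPs
lspv | grep dbvg03       # the three hdiskpower disks should now belong to dbvg03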

Now create a logical volume and a filesystem.

Again, do you have your own naming conventions for LVs, or do you leave LV naming to LVM?
Do you use Inline Logging or the standard external logging? Do you have naming conventions for log volumes? Do you use one logvolume for all FS in a VG, or do you use one logvolume per FS?

Once we know the answers we can proceed with LV and FS creation.

Author

Commented:
There were a couple of VGs with 128 and some with 1024. I went ahead and used the scalable option you provided above and created dbvg03 successfully.

hdiskpower66    0006924d40de9e17                   dbvg03         active
hdiskpower67    0006924d40da5fad                    dbvg03         active
hdiskpower68    0006924d40e55c65                   dbvg03         active

We use either option, but I would like to leave the LV naming to LVM.

I'm not sure about the Inline Logging or the standard external logging. How do I check that?

I don't believe we have a naming convention for log volumes.

I don't know if we use one logvolume for all FS in a VG, or one logvolume per FS. How do I check that?

I already have an FS named dbdata01 that I could use as a model... if that helps.

Commented:
Yes, issue

mount | grep dbdata01

(if it's mounted).

Look for the LV name (/dev/xxxxxxx), the mountpoint and the logvolume (log=/dev/xxxxxx)

and post the result.

Author

Commented:
root@server[/]# mount | grep dbdata01
         /dev/vg19lv01    /dbdata01       jfs2   Nov 18 21:29 rw,cio,log=/dev/vg19loglv00
root@server[/]#
Commented:
OK, here we go.

It seems that you employ your own naming scheme, so I'll try to adhere to it.

First, the log:

mklv -t jfs2log  -y vg03loglv00  dbvg03  1
logform /dev/vg03loglv00


(Answer "yes" to the prompt).

Should LVM complain about the name "vg03loglv00" please choose a different one.
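
Optionally, verify it with:

lsvg -l dbvg03           # the new log LV should show up with type jfs2log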

Now the Logical Volume:

mklv -t jfs2  -e x   -x 8192  -y vg03lv01  dbvg03  2048

Again, should LVM complain about the name "vg03lv01" please choose a different one.
As you can see, I configure 2048 PPs. The VG should have roughly 2400.
You can enlarge the LV/FS later at any time.
Should you wish to use up all the space immediately, take the value beneath "FREE PPs" from "lsvg dbvg03" instead of "2048".
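
For reference, the sizing math (with the 256 MB PP size chosen above):

2048 PPs x 256 MB = 524288 MB = 512 GB
3 x 200 GB = 600 GB, i.e. roughly 2400 PPs, minus a little LVM overhead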

Now the Filesystem:

crfs -v jfs2 -d vg03lv01 -m /dbdata03 -A yes
mount /dbdata03
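
A quick check afterwards (optional):

df -g /dbdata03              # new filesystem size and free space
mount | grep dbdata03        # confirms LV, mountpoint and log device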


Now:

Done.

Good luck!

wmp

Author

Commented:
I need to get the FS to 600 GB, but it fails... please see below.

root@server[/]# df -g /dbdata03
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/vg03lv01    512.00    511.92    1%        4     1% /dbdata03

root@server[/]# chfs -a size=600G /dbdata03
0516-404 allocp: This system cannot fulfill the allocation request.
        There are not enough free partitions or not enough physical volumes
        to keep strictness and satisfy allocation requests.  The command
        should be retried with different allocation characteristics.

Commented:
Which value (in Megabytes) do you see beneath " FREE PPs: " in the output of "lsvg dbvg03"?

That's the maximum you can add, let's call it "mmmm"

chfs -a size=+mmmmM /dbdata03

3 x 200 GB of disk space does not result in 600 GB of usable LV space under LVM - there's some overhead!
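
As an illustration with made-up numbers: if "lsvg dbvg03" showed, say, "FREE PPs:  349 (89344 megabytes)", the command would be

chfs -a size=+89344M /dbdata03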

Author

Commented:
Thank you! Yes, I didn't anticipate the overhead :)
