

AIX : JFS Filesystem and Big VG

Posted on 2009-07-10
Medium Priority
Last Modified: 2013-11-17
Hi Experts,

I would like to create a Big Volume Group with the filesystems listed in the attached Excel sheet.

Could you please advise on the appropriate PP size to use when configuring the Big VG?

I heard that there is an nbpi limitation for JFS.

Thank you very much and have a nice day.

Best Regards,
Terrence Tan
Question by:terrencetan
LVL 68

Expert Comment

ID: 24822305
Hi again,
why would you use a BIG VG and jfs?
Use a SCALABLE VG and jfs2 and there's virtually no limit.
You will have no limit for PPs per PV, and "enough" PPs per VG (the default is 131072), so you can freely choose your PP size. I'd recommend 128-256 MB, maybe 256-512 MB for the two big ones, .... vgds002 and .... vgds005.
Inodes are allocated dynamically with jfs2, so there's no problem with nbpi (that value doesn't exist in jfs2).
If you have to stay with Big VG and jfs, please let me know. I'll help you.
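As a sketch, the suggested scalable-VG/jfs2 setup might look like this on the command line (the VG/LV names, hdisk numbers, and sizes here are placeholders, not from this thread):

```shell
# Sketch: scalable VG with 256 MB PPs plus a jfs2 filesystem (AIX 5.3+).
# "datavg", hdisk2/hdisk3, "datalv" and the sizes are assumed values.
mkvg -S -s 256 -y datavg hdisk2 hdisk3   # -S = scalable VG, -s = PP size in MB
mklv -t jfs2 -y datalv datavg 40         # 40 PPs * 256 MB = 10 GB
crfs -v jfs2 -d datalv -m /data -A yes   # jfs2: inodes allocated dynamically
mount /data
```

Note that no nbpi attribute is needed for jfs2, which is the point being made above.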

Author Comment

ID: 24831318
Dear woolmilkporc,

I'm sad to say that the customer chose to use a Big VG and JFS filesystems.

I need your advice on the PP sizes for all the VGs, with filesystem sizes as in the attached doc.

Thank you very much and have a nice day.

Best Regards,
Terrence Tan
LVL 68

Expert Comment

ID: 24831697
That doesn't sound very sensible, but anyway ...
Sorry for not having asked before, but I need to know the maximum sizes of the physical disks that make up your volume groups (now and with possible future extensions) in order to calculate the supported PP sizes.

LVL 68

Expert Comment

ID: 24832256
... and one more question: Will there be many small files, or only rather few big ones, in the new filesystems? Please tell me on a per-VG basis!

Author Comment

ID: 24834542
Dear woolmilkporc,

Actually, for the Production Server the filesystems are shared filesystems on shared VGs (HACMP).
The VGs consist of multiple EMC LUN disks. Each LUN is 17 GB (a very odd LUN size, but a Japanese standard). All these filesystems will have a BCV copy which will be mounted on the Dev Server.

On the Dev Server, none of the filesystems or VGs are clustered.

Whether there are any big files in the filesystems, so far I have no idea. But I already informed the customer that the maximum file size is only 64 GB, and the customer guaranteed there will be no files exceeding this limit.

This is really the first time I have met this kind of odd situation. I really need your expertise to advise on the right PP size for each VG.

Would a Scalable VG with a JFS filesystem be a better combination? And is a Scalable VG supported in an HACMP environment?

Thanks in advance.

Best Regards.
Terrence Tan
LVL 68

Accepted Solution

woolmilkporc earned 2000 total points
ID: 24834964
Hi Terrence,

what I understood so far:

1) each LUN will only be 17 GB in size
2) VG sizes range from 130 GB to 1,370 GB
3) files are not huge, but also not tiny (not only 10K-100K or so)
4) some of the VGs will run with HACMP shared/enhanced-concurrent

1) means that we'll not hit the 1016 PP per PV limit with a PP size > 16M. But it means, too, that with the Big VG limit of 128 physical volumes you can never have a VG > 2,176 GB (unformatted).
2) means that with the biggest VG size of 1,370 GB we will need a PP size of at least 64 MB, making nearly 22,000 PPs for that VG. This gives good granularity, but also gives much overhead in partition table processing, so I'd tend to choose 256 MB as the PP size (at least for the big one).
3) means that we could live with the standard jfs nbpi setting of 4096, although 32768 or 65536 might be better, to save space (fewer inodes = more space for data).
You will most probably have files > 2GB, so you will need to make the filesystems "large file enabled" (bf=true).
4) HACMP shared/enhanced-concurrent is supported with any VG type (standard/big/scalable) and also with jfs as well as jfs2.
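The arithmetic behind points 1) and 2) can be checked quickly (the 17 GB LUN and 1,370 GB VG sizes are from this thread; the PP sizes are the candidates discussed above):

```shell
# Quick check of the Big VG arithmetic above.
LUN_GB=17
PPS_PER_PV=$(( LUN_GB * 1024 / 32 ))   # PPs on one 17 GB LUN at 32 MB PPs
MAX_VG_GB=$(( 128 * LUN_GB ))          # Big VG maximum: 128 PVs of 17 GB
VG_GB=1370
PPS_64=$(( VG_GB * 1024 / 64 ))        # PP count at a 64 MB PP size
PPS_256=$(( VG_GB * 1024 / 256 ))      # PP count at a 256 MB PP size
echo "$PPS_PER_PV PPs/PV, ${MAX_VG_GB} GB max VG, ${PPS_64} vs ${PPS_256} PPs"
```

At 256 MB PPs the biggest VG needs about 5,480 partitions instead of nearly 22,000, which is the granularity-versus-overhead trade-off described above; 544 PPs per 17 GB LUN stays safely under the 1016 per-PV limit.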

That said, my recommendations:

- VG: PP size 256 MB for "aybq53-vgds005", 128 MB for the rest (but you could take 256 MB for all VGs, to keep consistency)
- jfs: nbpi 32768, bf=true, agsize 64 (default)
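As a command sketch, the recommended jfs attributes could be passed to crfs like this (the LV name and mount point are placeholders, not from the thread):

```shell
# Sketch: large-file-enabled jfs with the recommended attributes.
# "ds005lv" and "/ds005" are assumed names.
crfs -v jfs -a bf=true -a nbpi=32768 -a ag=64 -d ds005lv -m /ds005 -A yes
```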

Scalable VG in combination with jfs will only raise the 2,176 GB per VG limit. All other considerations above stay valid, yet scalable VG will allow for much more growth.

jfs2, on the other hand, will make the considerations regarding nbpi and bf=true unnecessary.

There's no sound reason for big VG and jfs, as far as I can tell. Scalable VG and jfs2 are far better.
Just keep in mind to follow IBM's recommendation of not using jfs2's inline-logging with HACMP.
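Following that recommendation, a jfs2 filesystem under HACMP would use a dedicated log LV rather than an inline log; a sketch (all names here are assumptions):

```shell
# Sketch: jfs2 with a dedicated jfs2log LV instead of inline logging.
# "datavg", "datloglv", "datalv" and "/data" are assumed names.
mklv -t jfs2log -y datloglv datavg 1     # one PP is usually enough for the log
logform /dev/datloglv                    # initialize the log device
crfs -v jfs2 -d datalv -m /data -a logname=/dev/datloglv
```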

Good luck!

