DB2 Cache Hit Ratio

Our 400 has recently slowed down quite a bit since installing a new Java application server.  Our server only has 256MB of RAM (model 250).  We are looking at adding another 256 or 512.  I come from the Windows / SQL Server world, where there is a performance monitor counter for SQL Server that shows the database cache hit ratio, so you can see what percentage of your queries are served from memory instead of having to go to disk.  There has to be a similar monitor for AS/400 DB2, but I'm at a loss to discover it.  I've tried STRDBMON (also available through Ops Navigator), but that mainly gives statistics on table scans, index hits, etc.  I've also looked at STRPEX but am a little unsure of the syntax or whether it would achieve what I'm after.

Any help would be greatly appreciated.  


OS/400 (and therefore DB2 under OS/400) is built around a "single-level store" model. By definition, the operating system itself never thinks anything is being accessed outside of physical memory. Physical memory (as defined in single-level store on a 64-bit address system) consists of contiguous memory addresses through the entire 64-bit range as far as OS/400 knows. It is only down below OS/400, in the virtual machine, that there's any concept of a separation between real physical memory and virtual memory.

So, _every_ database access always is from "memory".

Now, I know that doesn't answer your question; it just provides some background for what follows.

In order to get anything related to the statistics you want, you'll want to look into OS/400 Work Management. An example of the most basic view of these statistics would be what you see when the DSPSYSSTS command is run. (You will want to set that display for an assistance level of at least 'Intermediate'. Press <F21> when the display is shown to choose assistance level. I usually use 'Advanced'.)

Among other elements, the display shows info about database faulting and paging for each memory pool. In order to make useful sense out of what you see, proper configuration of Work Management is needed. In a default configuration, there's no way to separate out faults/pages for various processes. You can only see aggregate numbers for everything including even OS/400's access of its own tables.

Database reads result in pages from a table being swapped into/out of real physical memory in exactly the same way that pages from an executing program might be swapped (or any other object). DB2 doesn't do the swapping. It simply executes an access to a memory address within the 64-bit space. The virtual machine determines if that address page is in memory. So, DB2 doesn't know. But there are indications such as in DSPSYSSTS that are externalized to help.

By configuring work management -- creating separate memory pools and defining the ways that work gets associated with those pools -- you have a setup customized to your business needs. Then when you run database processes, you can tune according to faults/pages that result. Same for Java. Same for CPU-intensive processes. Same for communications. Same for whatever type of work your system will do.

You can leave it at the default and get generic info about the whole system. Or you can split the work into categories and tune according to type of work. Or you can split and tune down to specific functions within single processes. (E.g., you could create a work subsystem that ran a single process and had three memory pools. You could explicitly load the executing program into pool #1, explicitly load one database table into pool #2 and load a 2nd table into pool #3. If you chose, those three would always be resident in real physical memory if you had the space.)

In short, your best starting point is with Work Management. Of course, a 256MB model 250 isn't exactly the best choice for business processing, but it can be done. Expect it to take some work setting it up.


Barry Harper (Consultant) commented:
I agree with Tom that memory is the likely performance constraint, but on a small system, you should also check out:

1) Disk drive performance
To check how busy the disk arms are, use the Work with Disk Status (WRKDSKSTS) command. After a minimum of 5 minutes (and a maximum of 15), press F5 to refresh the statistics. You can press F11 to see all the columns. You are mainly interested in the arm busy (% Busy) column, with values preferably below 20%. As an aside, check the disk protection status as well; if you see 'Degraded' in that column, then you have a hardware problem, such as a failed cache battery. Disks with no cache will cause a noticeable change in response.

2) CPU performance
To check how busy the CPU is, use the Work with Active Jobs (WRKACTJOB) command. Use the same interval and refresh commands as above. Look at the total CPU used at the top of the screen to see how busy your CPU is.

You should also check into using iSeries Navigator. You can set up collection services to gather stats and view them graphically. If you are interested in more on this, let us know.

If you have any questions, please post back! We would be happy to look at things for you!

Barry mentions a good point... refresh.

Even for DSPSYSSTS, no measurement will be remotely reliable unless it is allowed to collect info for a few minutes at least. Depending on what activity is going on, 5 mins might be okay, 15 is better before pressing <F5>. Measurements should be taken under conditions of a normal system load.

CPU _might_ be a factor, but that can be very hard to judge. If a system is working at peak efficiency for its designated kinds of work, a value near or at 100% utilization might be ideal. In a multi-user, multi-tasking system, part of the point is to use CPU cycles that would otherwise be wasted. Work management should be used to control this by ensuring that run-priorities always surrender CPU as needed.

DASD utilization _can_ be similar, but is much less likely. Far more likely is that you'll want to watch it to detect bottle-necks as Barry suggests. If this ends up being the case, adding more disk arms _might_ be the best solution.

His final suggestion is also great. A number of us will be willing to add posts as long as you're willing to keep supplying info.

lakers2003 (Author) commented:
Thank you all for your excellent comments.  I changed to the "Advanced" display of DSPSYSSTS and took a couple of snapshots at different times this morning, under what I would call normal usage.  Same with WRKDSKSTS.  I read that Page Faults is a per-second value.  In that case it seems a little high.  I'm a little unclear as to what the purpose of memory pools is.  Again, I mainly deal with Windows SQL systems.  There, you can track (by process) page faults, page fault delta, I/O reads/writes, etc., in a very fine-grained manner.  SQL Server also has a bevy of performance counters you can use to similar end.  If you have, say, 4 memory pools as our 400 does below, does that mean each process must be associated with a specific pool and cannot use RAM outside its specified pool?  That seems unlikely, because it could result in a lot of wasted memory.

As for the fact that our 400 is a little underpowered, yes I agree.  The bulk of our processing is done on Windows systems, and we use the 400 only to run a legacy application for a few of our stores.  However, I have always been impressed with how many users it can support on such limited hardware.  Now, though, we may be feeling the pain of our recent upgrade, hardware-wise.

We upgraded to a web-based Java GUI emulator from Seagull, and I feel certain that has contributed to our performance issue.  I recently read that the newest version of OS/400 supports a web-based GUI natively.  Have any of you had experience with this?  If it really is what I think it is, it could save us a bundle in licensing fees.  I believe our service agreement includes OS upgrades, and we are currently at V5R2.  Additionally, it could offer better performance than the component we're using.

From what I can discern from the snapshots below, CPU isn't an issue.  Do the page numbers look high?

--First snapshot--
% CPU used . . . . . . . :       23.0    System ASP . . . . . . . :    25.77 G
 % DB capability  . . . . :        5.4    % system ASP used  . . . :    56.4254
 Elapsed time . . . . . . :   00:10:56    Total aux stg  . . . . . :    25.77 G
 Jobs in system . . . . . :       6584    Current unprotect used . :     1853 M
 % perm addresses . . . . :       .015    Maximum unprotect  . . . :     2222 M
 % temp addresses . . . . :       .057                                          
 Sys      Pool   Reserved    Max  ----DB-----  --Non-DB---  Act-   Wait-  Act-  
 Pool    Size M   Size M     Act  Fault Pages  Fault Pages  Wait   Inel   Inel  
   1      65.89     38.59  +++++     .0    .0    3.7   4.8   12.5     .0     .0
   2     163.95       .69     92     .4   1.7    1.8   7.1  293.4     .0     .0
   3      23.58       .00     10    1.5   2.5    3.8   9.2   16.8     .0     .0
   4       2.55       .00      5     .0    .0     .2    .7    1.8     .0     .0
Elapsed time:   00:08:55                                                                                    
             Size    %     I/O   Request   Read  Write   Read  Write    %    
Unit  Type    (M)  Used    Rqs  Size (K)    Rqs   Rqs     (K)   (K)   Busy    
   1  6717   8589  56.3   10.0      7.6     6.9    3.1    7.0    8.9     6    
   2  6717   8589  56.4    6.0      8.4     3.6    2.4    8.7    7.8     4    
   3  6717   8589  56.4    7.6      7.9     4.4    3.2    8.8    6.7     5          
--END 1st snapshot--        

--2nd snapshot--      
% CPU used . . . . . . . :       34.7    System ASP . . . . . . . :    25.77 G
% DB capability  . . . . :       20.4    % system ASP used  . . . :    56.5105
Elapsed time . . . . . . :   00:21:18    Total aux stg  . . . . . :    25.77 G
Jobs in system . . . . . :       6590    Current unprotect used . :     1873 M
% perm addresses . . . . :       .015    Maximum unprotect  . . . :     2222 M
% temp addresses . . . . :       .057                                          
Sys      Pool   Reserved    Max  ----DB-----  --Non-DB---  Act-   Wait-  Act-  
Pool    Size M   Size M     Act  Fault Pages  Fault Pages  Wait   Inel   Inel  
  1      67.80     38.61  +++++     .0    .0    2.3   3.0   10.9     .0     .0
  2     150.33       .69     92     .6   2.3    1.0   3.8  290.9     .0     .0
  3      35.30       .00     10    1.2   2.2    4.0   9.9   11.8     .0     .0
  4       2.55       .00      5     .0    .0     .1    .4    1.5     .0     .0
 Elapsed time:   00:19:26                                                      
              Size    %     I/O   Request   Read  Write   Read  Write    %    
 Unit  Type    (M)  Used    Rqs  Size (K)    Rqs   Rqs     (K)   (K)   Busy    
    1  6717   8589  56.5    6.8      8.1     4.4    2.3    7.2    9.9     4    
    2  6717   8589  56.5    5.3      8.1     3.1    2.1    8.0    8.1     3    
    3  6717   8589  56.5    5.8      7.7     3.2    2.6    8.6    6.6     4    
--End 2nd snapshot--      

(A whole bunch of stuff follows. Lots of words, but not actually a lot of work. Some detail steps repeat over and over, but each repetition is essentially the same. *NONE* of it should be done until all of it has been read. Steps that result in "displaying" stuff can be done while reading; steps that result in "changing" stuff shouldn't be done until at least the second time through. AFAIK, nothing will get 'broken' by anything here, but typos _can_ cause performance degradation.)

A web-based emulator could indeed make a difference. This naturally would involve not only the normal system support for interactive sessions, but also the middleware to converse with the HTTP server and the HTTP server itself. (Not familiar with it, so this is a guess...) Plus perhaps an application server such as Tomcat or Websphere App Server.

Overall, your stats look pretty good.

As for memory pools, there are essentially four "kinds" of pools. Your DSPSYSSTS shows three kinds.

SysPool 1 is named the *MACHINE pool. (You can toggle the names by pressing <F11> in Advanced mode.)  The virtual machine uses this pool for stuff that it does and much of OS/400 runs there as well. No need to say much about it except you can influence it by setting a system value:

 ==>  wrksysval  qmchpool

From the list (which was restricted to the single value QMCHPOOL), you can display or change the size of it. I'd leave it alone.

SysPool 2 is named *BASE. While a whole bunch of stuff commonly runs there, you should be aiming at having essentially _nothing_ running there. *BASE is where _unused_ memory sits until some process asks for more memory. If processes are running in *BASE, then the memory isn't available for dynamic tuning until those processes are swapped out. Now, since by the looks of it you're running an almost totally default configuration, that doesn't make much difference to you, since almost _everything_ except actual interactive sessions and spooler/writer tasks is running directly in *BASE anyway. (Minus *MACHINE tasks.)

SysPool 3 looks to be the *INTERACT pool. When an emulator session signs on, the process itself runs there.

SysPool 4 would be *SPOOL. If you don't have much printing going on, it looks to be about as minimum as you can get.

Now, back to *BASE...

Here's where it gets difficult to analyze: Everything is in *BASE. All of your TCP/IP jobs are running there. All host server jobs are running there. All communications jobs are running there. In short, everything but interactive sessions, spooler jobs, much of OS/400 and the VM, is using the same memory pages concurrently. An FTP session will want to use memory that the telnet server wants for example.

Now, that's no big deal except we have no way to know what process is involved when faulting or paging or waits are reported. Nor do we have any way to know which processes are requesting memory or any other resources. Everything is in a single aggregate.

Further, because everything is in the single memory pool, built-in functions such as automatic performance adjustment can't work in any useful sense. (It looks as if the function is turned on, since *SPOOL is at minimum memory and the MaxAct values look to have been adjusted automatically.)

Enough on that for the moment. A comment on your DASD....

In general, it looks pretty good. That is, you don't seem to be taxing your capability much. %-busy is small and you haven't approached the knee of the curve on capacity. However, if at all possible, I would add a fourth drive. Not to add capacity, but to (1) add another set of disk arms and (2) make RAID a rational option. I'd be nervous running this configuration without basic RAID protection, and running RAID on 3 drives is a performance nightmare.

I'd go to almost any used parts dealer and buy at least a couple more drives, trying to keep one in reserve for possible replacement. AS/400 DASD is relatively expensive, but the intelligence in them offsets the cost. Plus, they're (almost always) very reliable. And when I added the next drive, I'd do it while enabling RAID. Pure peace of mind. Minor performance drag to go to RAID will be offset by bumping arms by 33%.

Enough on DASD for now.

Unfortunately, the real next steps are a _lot_ of detail. Generally, it only would be done once, but it's not a bad idea to do it via a CL program so it can be run again if necessary, not to mention documenting steps.

It starts with:

 ==>  wrksbs

This brings up a list of subsystems that are active on your system. Generally, the ones that are active at any normal business time are the only ones you need to be concerned about. The display also shows how "System" pools are associated with each subsystem. Note that a subsystem may have up to ten pool associations. (The numbers in the list refer to _system_ pool numbers which can be seen on the DSPSYSSTS display.)

You can enter option 5=Display against any subsystem to get a menu of attributes to review. Subsystems are the fundamental components of work management.

From the subsystem description menu, there are three options that will show you what is happening -- 2=Pool definitions, 7=Routing entries and 10=Prestart job entries.

When you take option 2 from the menu for each subsystem, what you'll see is that most of your subsystems have one or two pools associated with them. The first pool associated with almost any subsystem will be the system *BASE pool. Subsystems such as QINTER and QSPL will probably have two pools, *BASE as the first subsystem pool and either *INTERACT or *SPOOL as the second.

Once system pools are associated with a subsystem, routing entries (and prestart job entries for many server jobs) determine which _subsystem_ memory pool will be used. If QINTER shows *BASE assigned for subsystem pool 1 and *INTERACT as subsystem pool 2, then the QINTER routing entry that has sequence number 9999 will probably show that most interactive work will be routed to subsystem pool 2 (which cross-references back to _system_ pool 3, named *INTERACT; system pool numbers for *INTERACT and *SPOOL might be reversed or otherwise different for various reasons.)

By reviewing each subsystem to see which ones only have *BASE, you reach the starting point of actual work. The existence of such active subsystems tells you that you should consider activating at least one additional shared pool. I rarely start with less than two added shared pools.

Basic configuration of shared pools:

 ==>  wrkshrpool

This brings a work list of "shared" pools on your system. *MACHINE and *BASE are the first two listed, *INTERACT is probably next, followed by *SPOOL. Then comes a list of 60 generic "shared" pools, none of which seem active on your system. (I've never been clear why IBM includes *MACHINE and *BASE here. I much prefer them to be thought of as different kinds of pools even though they do get "shared". Convenience, I suppose.)

A lot can be done here. Note at the bottom that F11=Display tuning data. Ignore that for now, since we have _no_ idea what values might be useful there. We'll let the system handle those values almost always anyway. (But you _can_ tweak things when you really know what you want.)

For now, I would only work with *SHRPOOL1 and *SHRPOOL2. And all I would do is set Defined Size to 2.55M, MaxActive to 5 for *SHRPOOL1 and 20 for *SHRPOOL2, and *CALC for Paging Option.

MaxActive will be different for the two because those pools will be used for two different purposes. We grabbed a minimum amount of memory to minimize any disruption if we're doing this in the middle of the day. Also, we have no idea yet what values are any good. (We won't know until we watch the system adapt.)

This prepares two shared pools, making them available but not yet associating them with any work. As much as possible, memory management will pull memory out of *BASE and assign it to our pools to meet the settings we entered. Some tasks will be a little more memory starved while this happens, but we only grabbed a couple of small chunks.
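(If you'd rather script this than page through the WRKSHRPOOL panel, the same settings can be entered with the CHGSHRPOOL command. One assumption on my part: as I recall, SIZE on that command is given in kilobytes, so 2.55M works out to roughly 2612; prompt with <F4> and check 'help' on the parameter before trusting my arithmetic.)

 ==>  chgshrpool  pool( *shrpool1 )  size( 2612 )  actlvl( 5 )   paging( *calc )
 ==>  chgshrpool  pool( *shrpool2 )  size( 2612 )  actlvl( 20 )  paging( *calc )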

Now, we can associate the pools with subsystems. Three likely ones to start with are QBATCH, QPGMR (if used) and QCMN (if QCMN is active). Leave QCTL alone. QINTER and QSPL probably won't be changed. Others such as QSERVER, QSYSWRK, QUSRWRK and QSNADS can be dealt with in an alternative way, mostly via the 2nd shared pool we defined.

To associate a pool with a subsystem:

 ==>  chgsbsd  qbatch pools( (1 *base) (2 *shrpool1) )

QBATCH would then have two system pools available -- *BASE and *SHRPOOL1. The numbers determine _subsystem_ pool numbers.

(By now, you should know you can type the name of a command and press <F4> so that the command parameters are all prompted for you. You also should be _very_ aware that you can move the cursor to almost anywhere on the screen at any time and press <F1> to display 'help' for whatever is at the place on the screen. This can be done while at a command line, while prompting, while viewing any system display, in short pretty much any time you wonder "What's THAT mean?")

Do the same for subsystems QPGMR (if active) and QCMN (if active).

Then do similar for QSERVER and QSYSWRK, except use *SHRPOOL2. Also QUSRWRK if it's active on your system. I can't imagine why you'd have QSNADS active, but that one too if you use it.

If you have other subsystems that aren't mentioned, they _most_ likely would be in the QBATCH group. (And not harmed by treating them like QBATCH.)
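Spelled out, and assuming QSERVER, QSYSWRK and QUSRWRK are the ones active on your system, the second group of associations would look like:

 ==>  chgsbsd  qserver  pools( (1 *base) (2 *shrpool2) )
 ==>  chgsbsd  qsyswrk  pools( (1 *base) (2 *shrpool2) )
 ==>  chgsbsd  qusrwrk  pools( (1 *base) (2 *shrpool2) )

Skip any subsystem that isn't active. And if a subsystem's pool definitions display already shows more than just *BASE, prompt CHGSBSD with <F4> first so you keep those existing entries intact.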

Before the next changes, we'll want to be sure the system will react by adjusting performance. We don't want to let all kinds of work start in those pools without being sure that the system has been allowed to make automatic adjustments. We've only set a few megabytes aside -- that ain't enough for any real work.


 ==>  wrksysval  qpfradj

We'll work with the system value for performance adjustments. Use option 5=Display to see what is set. You'll want either 2 or 3. I wouldn't use 2 unless you're sure you have all adjustments appropriate for IPL time. You're better off not letting the system revert after every IPL back to your IPL settings and then having it work its way through adjustments to eventually get back to your normal settings. Besides, you haven't determined what your IPL settings ought to be.

If it's 3=Automatic adjustment, then that part is already done (even if the adjuster isn't accomplishing much in the current configuration, it's still better than 0=No adjustment). If it isn't 3, I recommend changing it to 3 before going on. This tells the system that it should be making adjustments, particularly while the following changes are made.

Now, all of your active subsystems (except QCTL, the controlling subsystem which you don't need/want to mess with) have a shared memory pool available that isn't *BASE as well as being able to continue using *BASE as has always been done. And memory and activity adjustments will be made as work goes on and while we tweak the configuration. This is where the grunt detail begins.

For QBATCH, there likely is only one thing to do:

 ==>  chgrtge qbatch  seqnbr( 9999 ) poolid( 2 )

Since you probably have most defaults for QBATCH, other routing entries won't make much difference. The only sequence numbers that might be of interest are ones that show a 'Compare value' of either 'QCMDB' or 'QCMDI' when you display the list of QBATCH routing entries. (Ignore what that means for now. You'll want to change their poolids, but no need to worry yet about how a 'Compare value' works.) The one for seq# 9999 is a catch-all and possibly is the only one that affects work on your system.

And there are almost certainly no prestart entries for the QBATCH subsystem. If there are, you'll know what to do with them by the time we're done.

The poolid(2) parameter refers to whatever pool was assigned in the 2nd slot for that QBATCH subsystem. We used (2 *shrpool1) when we changed the QBATCH subsystem with the CHGSBSD command, so the routing entry has been told to route work into subsystem pool 2. Any job that comes in to QBATCH and is selected by this routing entry will now run in *SHRPOOL1.

You could run the CHGSBSD command later against QBATCH and perhaps specify (2 *shrpool50). From that time on, work would be routed to *SHRPOOL50 by that routing entry because that would be the pool that's named in the 2nd position. Or you could add a 3rd subsystem pool and run the CHGRTGE command to point to the 3rd subsystem pool.

Once the basic structure is in place, granularity is easier.

But QBATCH is trivial. Let's look at QSERVER.

If you list routing entries _and_ prestart job entries for the QSERVER subsystem, you'll find a number of routing entries that need to be changed, plus the prestarts. You already have an example of the CHGRTGE command. It would be run for each listed routing entry sequence number in the QSERVER subsystem. You might have five routing entries to change, which isn't the worst to come. And you'll have eight or nine prestarts to change as well:

 ==>  chgpje  qserver  pgm( QPWFSERVSO ) poolid( 2 )

In the case of prestart entries for a subsystem, you select by program name instead of sequence number. Routing entries are sequential due to how they're used; prestarts are specific individuals. In both cases, you need the list to know what number or name to reference. Unfortunately, there's no simple way to get the list of either without some programming except displaying in a terminal session. (Though copy/paste to a text document is handy.) For some reason, IBM has yet to make this reasonably available through the iSeries Navigator GUI.

The previous example changed a QSERVER prestart job entry to use whatever pool was assigned to subsystem pool slot #2. In our case, that would be the *SHRPOOL2 system pool. In QBATCH, we were associating with *SHRPOOL1; now we're associating with *SHRPOOL2.

The point is that we're starting to split work away from the *BASE memory pool out into separate memory pools. And we're starting that separation by also running basic batch work in one isolated shared pool and system server batch work in a second isolated shared pool.

Be aware that some of these subsystems have a _lot_ of routing entries. And, yes, you want to change each one. You should be able to retrieve previous commands on a command line by pressing <F9>. That makes typing easier. If <F9> doesn't retrieve, we can fix it. Generally, extra spaces between things are fine. That can help with formatting for easier typing too.

Once the routing entries and prestarts are separated out by pools, the system will start new tasks in those pools. By being separated, we can start seeing what's happening on the DSPSYSSTS screen because faults/paging will now show what's going on in those pools separate from *BASE.

If we also activated a 3rd and 4th shared pool and associated them with subsystems, we could start routing down to individual tasks. We could, for example, run nothing but the FTP server in *SHRPOOL3. By doing that, we could collect any possible statistics needed on how the virtual machine was servicing the FTP server.

Once pools are associated, work can be routed to them simply by running CHGRTGE or CHGPJE, depending on what we're doing. There are levels beyond that, but that's enough for now.
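(As I said earlier, a CL program is a good way to document these changes and make them repeatable. A bare sketch, using only the example values from this post -- your own sequence numbers, program names, subsystem lists and pool sizes will differ, and I'd prompt each command with <F4> to verify parameters before compiling anything:)

             PGM
             /* Prepare two shared pools (sizes assumed in KB) */
             CHGSHRPOOL POOL(*SHRPOOL1) SIZE(2612) ACTLVL(5)  PAGING(*CALC)
             CHGSHRPOOL POOL(*SHRPOOL2) SIZE(2612) ACTLVL(20) PAGING(*CALC)
             /* Associate the pools with subsystems */
             CHGSBSD    SBSD(QBATCH)  POOLS((1 *BASE) (2 *SHRPOOL1))
             CHGSBSD    SBSD(QSERVER) POOLS((1 *BASE) (2 *SHRPOOL2))
             /* Route work into the new subsystem pool #2 */
             CHGRTGE    SBSD(QBATCH)  SEQNBR(9999) POOLID(2)
             CHGPJE     SBSD(QSERVER) PGM(QPWFSERVSO) POOLID(2)
             ENDPGM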

In general, changes become effective as soon as you make them. It won't necessarily cause any active task to be reported in the new pool, but the next task that starts will route accordingly. Since performance adjust is turned on, memory and activity levels will be adjusted as the system detects the need.

Adjustment is _not_ instantaneous. Memory is shuffled between pools in chunks. If more is needed, a new chunk is moved. Until stats are collected after an adjustment, it can't know if another chunk is needed. It also watches to see how (or if) the pool that loses the memory is affected. If negative effect, then maybe a next chunk won't be granted.

Run-priorities of different tasks help determine where and when shifts are made.

Basic principle: *ALL* memory comes into a shared pool by removing it from *BASE. And *ALL* memory is released from a shared pool back into *BASE. It's always a two-step process from one shared pool to another.

That's why *BASE can be a critical factor in performance adjustment. If jobs are active in *BASE, spare pages of memory can be taken for use right in place rather than being assigned where needed. Even in the best case, virtual storage paging is doubled for memory moving through *BASE from one shared pool to another.

Appropriate pools let us see the effect of processes in isolation. They let us follow how changing workloads use different resources as adjustments occur. They can also quickly show us whether or not we're memory starved.

When we see pools that regularly fluctuate, even in a fairly narrow range, and *BASE stays at its minimum allowed size, we immediately know "We need more memory." If *BASE regularly has some rational amount above its minimum and performance degradation is evident, we can minimize thoughts of needing more memory almost at a glance.

When we associate two additional shared pools to the various batch subsystems beyond just *BASE, we have a ready tool to help track activity in a multi-user environment. We can see the results of individual routings if needed.

Aside from all the above, there is another "kind" of pool. I think of *MACHINE, *BASE, shared pools and the fourth -- private pools. I mention these here for some completeness. You might never need one.

A private pool is an explicitly declared pool of memory. This pool is off-limits to performance adjust. You might want to guarantee that an object is memory resident or that a definite amount of memory is explicitly available to a process. When you run a CHGSBSD command, instead of supplying a named pool, you can state an actual amount of memory. The amount is removed from *BASE and reserved for you.

You can route work through it without being concerned that it will shrink or grow. (OTOH, it won't shrink or grow according to need.) Or you can force it to be cleared (CLRPOOL, can also clear named pools) to ensure that no other task is currently using it. You can bring an object into it on demand -- SETOBJACC. Until you say otherwise, that object will remain memory-resident as long as it fits. If it doesn't fit, as much as will fit will be resident and paging will be within that pool. (You can bring objects into shared pools too, but shared pools are, well... shared.)
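(A hedged sketch of that sequence -- the subsystem name MYSBS, the file MYLIB/MYTABLE and the 10000KB size are invented for illustration, and I'd prompt each command with <F4> to confirm the exact parameter forms:)

 ==>  chgsbsd  mysbs  pools( (1 *base) (2 10000 5) )
 ==>  clrpool  pool( mysbs 2 )
 ==>  setobjacc  obj( mylib/mytable )  objtype( *file )  pool( mysbs 2 )

The first command turns subsystem pool 2 of MYSBS into a 10000KB private pool with an activity level of 5; the second empties it; the third pins the table into it.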

Performance statistics can be collected as well. Reports can be generated with the Print Pool Report (PRTPOLRPT) command. Performance collection needs to be activated first. And without prior configuration such as in this post, the collected performance data can be useless.
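(Roughly like the following, with the caveats that PRTPOLRPT comes with the Performance Tools licensed program, which you may not have installed, and that the member name Q123456789 is a stand-in for whatever collection member actually gets created on your system -- prompt PRTPOLRPT with <F4> to see the real parameter list:)

 ==>  strpfrcol
      ... let collection run across a representative workload period ...
 ==>  prtpolrpt  mbr( q123456789 )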

A whole bunch of stuff starts being possible once configured.

An AS/400 can be used as a file or print server. Or as a database server. Or HTTP, SMTP, FTP or whatever server. But at its core, it's a multi-user _application_ server. All other services can run simultaneously without needing separate boxes.

IBM ships them with just the basic configurations that allow them to work "pretty well" for 90% of the businesses that buy them. But they also include all of the necessary tools to allow not only the additional 10% to reach "pretty well" but also the main 90% to reach "very well".

Of course, IBM is happy to sell more memory, CPU upgrades, faster DASD controllers and bigger DASD caches. It's just that it isn't necessary for most. What's usually sufficient is picking up on how to use the tools that came as part of the purchase price.

That's chapter 1. Let us know when you need more detail or more steps.


Barry Harper (Consultant) commented:
Some quick notes on memory:
a) Think of a memory pool as a place for jobs/tasks with similar resource requirements. If you notice that the pool sizes changed between samples, that is because the system tunes itself and moves memory around as required; the bounds and priorities for this can be seen with the Work with Shared Pools (WRKSHRPOOL) command. You can confirm the automatic performance adjuster setting with the Display System Value (DSPSYSVAL QPFRADJ) command.
b) The pools are:
Pool 1 is the machine pool, where system-critical tasks run.
Pool 2 is the base pool, where less critical system tasks (and batch jobs by default) run.
Pool 3 is the interactive pool, where users with telnet sessions run; sometimes batch jobs can run here too.
Pool 4 is the spool pool, where the printers and remote output queues run.
c) Paging rates are usually not an issue; they generally indicate the amount of work going on. Performance is affected more by faulting rates, especially in the machine pool.

Look at the Work with Active Jobs (WRKACTJOB) command to check out what jobs are using what resources. Finer granularity can be found if the performance monitor is running. If you do not have the performance tools, iSeries Navigator in Client Access can be used to dig deeper.

Nothing jumps out at me as causing poor performance. My next step would be to look at more detailed performance data, looking especially at seizes and locks. Did these snapshots occur during times of poor performance?

Tom will have suggestions as well.
Barry Harper (Consultant) commented:
Posted at the same time.....Tom _had_ suggestions already...LOL
lakers2003 (Author) commented:
Thank you guys.  After a lot of time figuring some of this stuff out, we have decided to add 512MB more memory.  We ran into even more serious performance issues after adding a lot of records to a few tables in the database.  Lookups really started to lag, which is a sure sign of a memory bottleneck.  Shutting down some services improved the query performance.  Thanks guys.