• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 902

How can I track memory utilisation on a Solaris 10 server?

I have a Solaris 10 server running two non-global zones. OVPA is running in the global zone and tells me that memory utilisation is climbing day on day. However, when I run 'ps -elfZ' over a period of time, the SZ column does not show the same increase; in fact it is broadly flat from one day to the next. The increase may well be the buffer cache, since 'ps' will not count that — how might I prove this?
What will happen when memory utilisation (according to OVPA) reaches 100%?
Oracle is running on one of the zones.
2 Solutions

You can use: prstat -Z

It shows per-zone details, as in the example below.

Note that the system in this example has no non-global zones, only the global zone (zoneid 0); in your case you will see an additional summary line for each zone.

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  1253 http       16M   13M sleep   59    0   0:04:19 0.0% httpd/29
 27204 root     3608K 3600K cpu0    59    0   0:00:00 0.0% prstat/1
  1238 root     5920K 3440K sleep   59    0   0:01:31 0.0% rotatelogs/1
  1252 http       16M   13M sleep   59    0   0:04:26 0.0% httpd/29
  1251 http       16M   13M sleep   59    0   0:04:33 0.0% httpd/29
  1313 noaccess  178M  102M sleep   59    0   0:31:15 0.0% java/39
 27200 root     3168K 2768K sleep   59    0   0:00:00 0.0% bash/1
  1361 root       16M   14M sleep   59    0   0:11:51 0.0% ovcd/28
   121 daemon   4664K 3664K sleep   59    0   0:00:03 0.0% kcfd/5
   291 root     2088K 1232K sleep   59    0   0:00:00 0.0% smcboot/1
 26450 root     2568K 2168K sleep   59    0   0:00:00 0.0% ttymon/1
   270 root     2512K 2008K sleep   59    0   0:00:00 0.0% ttymon/1
   258 root     2176K 1672K sleep   59    0   0:00:00 0.0% sac/1
  1231 root     9248K 6064K sleep   59    0   0:01:17 0.0% httpd/1
   292 root     2088K 1232K sleep   59    0   0:00:00 0.0% smcboot/1
   254 root     7024K 5392K sleep   59    0   0:00:17 0.0% inetd/4
   142 root     6144K 5584K sleep   59    0   0:01:17 0.0% nscd/31
   377 root     5152K 1848K sleep   59    0   0:00:00 0.0% automountd/2
   252 daemon   2464K 2080K sleep   60  -20   0:00:00 0.0% lockd/2
   248 daemon   2872K 2544K sleep   59    0   0:00:00 0.0% statd/1
   132 root     4712K 4128K sleep   59    0   0:00:01 0.0% picld/6
   241 daemon   2872K 2360K sleep   59    0   0:00:00 0.0% rpcbind/1
   212 root     1440K 1032K sleep   59    0   0:00:00 0.0% efdaemon/1
   120 root     3592K 2912K sleep   59    0   0:00:00 0.0% devfsadm/7
ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
     0       81  300M  423M   2.6%   1:35:56 0.2% global
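To compare the per-zone trend against OVPA's graph, the prstat -Z zone-summary lines can be logged periodically and the RSS column extracted later. This is a sketch, not part of the original answer: the sample lines are inlined here (the "oraclezone" name and the log path are assumptions), and on the server they would be collected with something like `prstat -Z 1 1 | tail -3 >> /var/tmp/zonemem.log` run from cron.

```shell
# Parse saved zone-summary lines (ZONEID NPROC SWAP RSS MEMORY TIME CPU ZONE)
# and print each zone name with its RSS, to eyeball the day-on-day trend.
# Sample data is inlined so the example is reproducible anywhere.
cat <<'EOF' > /tmp/zonemem.log
     0       81  300M  423M   2.6%   1:35:56 0.2% global
     1       45  210M  512M   3.1%   0:22:10 0.1% oraclezone
EOF
awk '{ print $8, "RSS:", $4 }' /tmp/zonemem.log
```

This prints one "zone RSS: value" line per zone; if the zones' RSS figures stay flat while OVPA's utilisation climbs, the growth is happening outside the zones' process sets.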

You can also use DTrace; there are some scripts that may help with your problem:




I hope that helps.

wasfg01Author Commented:
prstat -Z is giving me similar numbers to ps -elfZ. I am worried that when memory utilisation (according to OVPA) reaches 100% we will see a lot of bad paging — anonymous pages and executables — which will slow things down considerably.
I doubt I will get authority to run third-party scripts on this box, unfortunately, though I have to say they look very useful.
Can I be sure that it is the filesystem buffer cache that is growing, and therefore doing what it is supposed to do?
Hello,

DTrace is a standard tool in Solaris 10, and these are just scripts you copy and paste.

You may be able to find information about the Solaris filesystem cache at this URL:

But you should also look at your Oracle application to find the root cause of the problem — perhaps an Oracle process has gone into an infinite loop.

Regarding your concern:
    The increase may well be the buffer cache as 'ps' will not be counting this, so how might I be able to prove this.

The tool you need is mdb. See the snippet below for sample output from a server with 8 GB of memory, where the kernel uses 3355 MB, the page cache (your buffer cache) uses 153 MB, and there is 1939 MB of free memory (freelist plus cachelist).

Also note that if you are using ZFS on Solaris 10, the ZFS buffer is counted as kernel memory, not page cache (hence the very large 3355 MB of kernel memory in the sample output). To get more insight into the ZFS ARC (as the ZFS buffer is called), there is a handy script at:
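As a sketch of that check without the script (an assumed workflow, not the author's method): on Solaris 10 the ARC size is exposed through kstat as zfs:0:arcstats:size, in bytes. A sample byte count is hard-coded here so the arithmetic is reproducible off-box:

```shell
# On the real server the value would come from:
#   arc_bytes=$(kstat -p zfs:0:arcstats:size | awk '{ print $2 }')
# A sample figure is used here so the conversion can run anywhere.
arc_bytes=3518435328                    # sample value, roughly 3.3 GB
arc_mb=$((arc_bytes / 1024 / 1024))     # bytes -> megabytes
echo "ARC size: ${arc_mb} MB"
```

If the ARC accounts for most of the kernel figure in ::memstat, the "growth" OVPA sees is largely reclaimable cache.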

Note, however, that 'free' memory is effectively wasted, as it is not working for you (in the same way that money in your pocket earns no interest). Solaris tries to minimise free memory (up to a point) by caching block-device access so as to speed up future disk reads. Such buffers can be freed very easily should an application request more memory, so this is not a problem.
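One way to confirm that this is caching rather than real pressure (a sketch, not part of the original answer): the page-scanner rate in the 'sr' column of vmstat stays at zero until the system is genuinely short of memory. A sample vmstat line is inlined below for reproducibility; on the server the same awk could be fed from `vmstat 5`.

```shell
# Flag page-scanner activity from vmstat output. Field 12 is 'sr'
# (pages scanned per second); a sustained non-zero value means real
# memory pressure. The inlined sample shows a healthy system.
cat <<'EOF' > /tmp/vmstat.out
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 6341456 1953672 12 50  0  0  0  0  0  0  0  0  0  440  880  320  2  1 97
EOF
awk 'NR > 2 { if ($12 > 0) print "scanning at " $12 " pages/s"; else print "no memory pressure" }' /tmp/vmstat.out
```

As long as 'sr' stays at zero, memory shown as "used" is doing useful work and nothing is being forced out.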

Hope this helps,
# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 ufs md ip sctp usba fcp 
fctl lofs zfs random nfs crypto fcip cpc logindmux ptm ipc ]
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     859062              3355   41%
Anon                       675625              2639   32%
Exec and libs                7994                31    0%
Page cache                  39319               153    2%
Free (cachelist)           110881               433    5%
Free (freelist)            385592              1506   19%
Total                     2078473              8119
Physical                  2049122              8004


wasfg01Author Commented:
Thanks guys. This turned out to be more complex than I first thought. Appreciate your time.
Question has a verified solution.
