
Interpreting 'top' output in Solaris

Hello,

The 'top' output on a Solaris machine is shown below. How do I get more information about a particular process ID? Suppose, in the given output, I want to know more details about PID 6645; how can I see that? For example, which files it is accessing and what command it executes.
As you can see, most of the processes are running as oracle, but my DBA team wants to know the actual command each PID executes and what it is actually doing.

In addition, I need one more detail regarding the 'top' output. As you can see, the iowait value is 28.0%. I want to know which disk is causing such a high iowait. I tried the iostat command, but it doesn't help me much. Please advise.

root@mach3 # top
last pid:  3236;  load avg:  8.44,  8.19,  9.99;       up 22+11:18:25                                                                               14:20:37
879 processes: 869 sleeping, 1 running, 4 zombie, 5 on cpu
CPU states: 18.6% idle, 41.5% user, 11.9% kernel, 28.0% iowait,  0.0% swap
Memory: 32G phys mem, 6989M free mem, 32G total swap, 31G free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  6645 oracle     1  30    0   14G   11G cpu/20   4:43  4.67% oracle
 15454 oracle     1  59    0   14G   11G run      6:37  2.55% oracle
  6647 oracle    11  52    0   14G   11G sleep    4:53  2.18% oracle
 12218 oracle     1  50    0   14G   11G sleep    4:50  2.15% oracle
 21419 oracle     1  52    0   14G   11G sleep    1:43  1.99% oracle
 13192 oracle     1  60    0   14G   11G sleep   12:26  1.78% oracle
 14988 oracle     1  22    0   14G   11G cpu/17  14:23  1.36% oracle
 15741 oracle    11  59    0   14G   11G sleep   28:26  1.33% oracle
 12247 oracle     1  59    0   14G   11G sleep    4:41  1.23% oracle
 14614 oracle     1  58    0   14G   11G cpu/22  13:03  1.14% oracle
 15438 oracle     1  60    0   14G   11G sleep   10:35  1.13% oracle
 25704 oracle     1  59    0   14G   11G sleep    0:07  0.96% oracle
 12385 oracle     1  42    0   14G   11G sleep   14:41  0.90% oracle
 12373 oracle     1  59    0   14G   11G sleep   14:03  0.89% oracle
 14606 oracle     1  49    0   14G   11G sleep   10:54  0.89% oracle
Asked by: amandowara
3 Solutions
 
TintinCommented:
To see the down and dirty details of a process, you can use truss, e.g.:

truss -p 6645
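
If the DBA team mainly wants to see which files the process touches, you can limit the trace to file-related system calls (a sketch, using PID 6645 from the question; -f follows any child processes):

truss -f -t open,close,read,write -p 6645

You can also run truss -c -p 6645 to count system calls instead of printing each one; press Ctrl-C to stop it and get the summary.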

For the I/O wait, post the output of

iostat -x 3

(note that the first report shows averages since boot; the subsequent 3-second samples reflect current activity).

 
TintinCommented:
Additionally, I would highly recommend installing Brendan Gregg's DTrace utilities.

http://www.brendangregg.com/dtrace.html
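
For example, a one-liner along these lines (a sketch, assuming Solaris 10 or later and PID 6645 from the question) prints each file the process opens:

dtrace -n 'syscall::open*:entry /pid == 6645/ { printf("%s", copyinstr(arg0)); }'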
 
wwnosalCommented:
To monitor a process based on its PID, you could try the following commands:

lsof
truss
pstack
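
For example, against PID 6645 from the question (a sketch; note that lsof is not part of the base Solaris install, so it may need to be installed separately):

lsof -p 6645      # list the files the process currently has open
truss -p 6645     # trace its system calls as they happen
pstack 6645       # print a stack trace of each thread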

For the iowait time, if iostat doesn't help you, you could try another tool with the -d option, AFAIR (I don't have Solaris at hand at the moment).
Hope this helps.
 
wwnosalCommented:
Argh, it looks like I forgot which tool should be used with the -d option. I meant sar.
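
For example, to report disk activity every 5 seconds, 3 times:

sar -d 5 3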
 
amandowaraAuthor Commented:

Attached is the iostat output:
 
root@mach3 # iostat -x
                  extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
1/md100     13.7   57.3  776.2 1001.1  0.0  2.7   38.6   0   8
1/md200    602.8   66.8 9115.9  533.7  0.0 14.1   21.1   0  91
1/md210    596.6   25.8 8911.9  205.1  0.0 13.9   22.3   0  91
1/md220      6.2   41.0  204.0  328.6  0.0  0.2    4.3   0  15
1/md300     78.3   49.1 1168.4  418.3  0.0  3.1   24.2   0  11
1/md400     13.6   53.4  428.3  429.5  0.0  0.1    1.2   0   5
1/md410      0.0    0.1    0.1    3.1  0.0  0.0   23.2   0   0
1/md420     13.6   53.4  428.2  426.4  0.0  0.1    1.2   0   5
1/md500    149.3   43.6 2551.1  384.9  0.0  1.7    8.6   0  50
1/md600     37.1  168.3 1678.3 1374.7  0.0  0.4    1.7   0  16
1/md610     35.7    0.1 1626.6    2.9  0.0  0.2    6.3   0   9
1/md620      1.4  168.1   51.7 1371.8  0.0  0.1    0.8   0   9
md0          0.3    1.0    3.6    5.2  0.0  0.0   29.0   1   1
md1          0.2    0.0    1.7    3.1  0.0  0.0   24.1   0   0
md3          0.0    0.0    0.0    5.4  0.0  0.0  108.7   0   0
md4          0.5    1.8   11.3    3.8  0.0  0.0   23.7   1   1
md6          0.0    0.0    0.0    0.0  0.0  0.0    3.7   0   0
md10         0.2    1.0    1.8    5.2  0.0  0.0   25.3   0   1
md11         0.1    0.0    0.8    3.0  0.0  0.0   35.6   0   0
md13         0.0    0.0    0.0    5.4  0.0  0.0  100.6   0   0
md14         0.2    1.8    5.7    3.8  0.0  0.0   19.8   0   1
md16         0.0    0.0    0.0    0.0  0.0  0.0    4.3   0   0
md20         0.2    1.0    1.8    5.2  0.0  0.0   16.6   0   1
md21         0.1    0.0    0.8    3.0  0.0  0.0   34.0   0   0
md23         0.0    0.0    0.0    5.4  0.0  0.0   91.5   0   0
md24         0.2    1.8    5.6    3.8  0.0  0.0   14.5   0   1
md26         0.0    0.0    0.0    0.0  0.0  0.0    4.4   0   0
md100        0.0    0.0    0.2    0.0  0.0  0.0    8.8   0   0
md101        0.0    0.0    0.1    0.0  0.0  0.0    9.4   0   0
md102        0.0    0.0    0.0    0.0  0.0  0.0    6.5   0   0
md103        0.0    0.0    0.0    0.0  0.0  0.0   10.2   0   0
md200        0.0    0.0    0.2    0.0  0.0  0.0    8.3   0   0
md201        0.0    0.0    0.1    0.0  0.0  0.0    7.0   0   0
md202        0.0    0.0    0.0    0.0  0.0  0.0   10.1   0   0
md203        0.0    0.0    0.0    0.0  0.0  0.0    8.8   0   0
sd0          0.0    0.0    0.0    0.0  0.0  0.0    0.6   0   0
ssd0         0.5    2.8    8.3   17.5  0.0  0.1   17.5   0   1
ssd1         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
ssd2         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
ssd3         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
ssd4         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
ssd5         0.5    4.1    8.3   18.1  0.0  0.1   22.9   0   3
ssd18       13.7   57.3  776.2 1001.0  0.0  2.7   38.6   0   8
ssd19      602.8   66.8 9115.6  533.7  0.0 14.1   21.0   0  91
ssd20       78.3   49.1 1168.4  418.2  0.0  3.1   24.2   0  11
ssd21       13.6   53.4  428.3  429.5  0.0  0.1    1.2   0   5
ssd22      149.3   43.6 2551.0  384.9  0.0  1.6    8.6   0  49
ssd23       37.1  168.3 1678.2 1374.7  0.0  0.3    1.7   0  16
ssd24        0.0    0.0    0.2    0.0  0.0  0.0    8.8   0   0
ssd25        0.0    0.0    0.2    0.0  0.0  0.0    8.3   0   0
nfs2         0.0    0.0    0.0    0.0  0.0  0.0    0.2   0   0
nfs3        25.0  156.6  594.3 4965.5 24.4  2.3  147.4  25  26
root@mach3 #

 
TintinCommented:
According to your iostat output, the I/O wait is almost entirely down to one of your NFS-mounted filesystems: nfs3 is averaging 147.4 ms service time, with requests queuing (wait 24.4, %w 25).
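
If you want to dig into that mount, nfsstat can show the client-side picture, e.g.:

nfsstat -m        # mount options, server, and retransmission statistics per mount
nfsstat -c        # client-side RPC and NFS call counts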
0
 
pitorenCommented:
I'd also look at the disk corresponding to device name ssd19, which is 91% busy. The average service time is low, but I wonder if you are asking here because the system is performing below the level the DBA team is expecting/hoping for?

The output of metastat would be useful.

Kevin
 
pitorenCommented:
As for getting command line args (and environment variables), look at "pargs -a -e <pid>" if using Solaris 10+.

Before that I think you are stuck with

/usr/proc/bin/pflags <pid>
/usr/bin/ps -f -p <pid>
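
For the DBA question specifically, the environment often tells you which instance an oracle process belongs to, e.g. (a sketch, assuming Solaris 10+ for pargs; ORACLE_SID being set in the process environment is an assumption about your Oracle setup):

pargs -e 6645 | grep ORACLE_SID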

K
 
amandowaraAuthor Commented:
Here I am attaching the 'metastat' output for your verification:
mach3-metastat-output.txt
 
amandowaraAuthor Commented:
Thanks for your inputs.
 
amandowaraAuthor Commented:
Thank you.
