Solved

Solaris Disk issue

Posted on 2008-10-06
755 Views
Last Modified: 2013-12-27
Hi,

On a Solaris 9 Sun-Fire V890 machine, when I run 'iostat -x' (shown below), it shows that some disks have high read/write rates and high average service times. But I can't find any information about those particular disks: where they are mounted, what filesystem they use, what slices are carved out of them, and so on. I have checked /dev and /etc/vfstab, run the metastat command, etc.

root@mach3 # iostat -x
                  extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
1/md100     13.5   57.4  767.5 1002.5  0.0  2.8   38.8   0   8
1/md200    603.4   67.3 9153.3  538.1  0.0 14.2   21.2   0  91
1/md210    597.2   26.0 8947.8  206.4  0.0 14.0   22.5   0  91
1/md220      6.2   41.4  205.6  331.7  0.0  0.2    4.3   0  15
.
.
.
ssd18       13.5   57.4  767.6 1002.6  0.0  2.7   38.8   0   8
ssd19      603.5   67.3 9154.7  538.3  0.0 14.2   21.1   0  91
ssd20       70.2   49.5 1090.0  421.7  0.0  2.5   21.2   0  10
ssd21       13.6   53.7  431.1  431.8  0.0  0.1    1.2   0   5
ssd22      147.4   43.5 2543.0  384.0  0.0  1.6    8.5   0  49
ssd23       37.4  168.2 1694.6 1374.8  0.0  0.3    1.7   0  16
.
.
.
In the above output, I want to find information about ssd19, ssd22 and ssd23. Please let me know how I can get the details about these devices.

Thanks.
PS: 'df' output is attached in the snippet below.
root@mach3 # uname -a
SunOS mach3 5.9 Generic_122300-15 sun4u sparc SUNW,Sun-Fire-V890
root@mach3 # df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/md/dsk/d0       34145829 8326073 25478298    25%    /
/proc                      0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
fd                         0       0       0     0%    /dev/fd
swap                 15395488   23864 15371624     1%    /var/run
swap                 15375648    4024 15371624     1%    /tmp
/dev/md/dsk/d102     516351569   65554 511122500     1%    /u102
/dev/md/dsk/d103     516351569   65555 511122499     1%    /u103
/dev/md/dsk/d101     842686176 127279359 706979956    16%    /u101
/dev/md/dsk/d202     516351569 108739611 402448443    22%    /u202
/dev/md/dsk/d201     842686176 25038538 809220777     4%    /u201
/dev/md/dsk/d203     516351569   65555 511122499     1%    /u203
/dev/md/dsk/d4       13429995 3032374 10263322    23%    /u00
/dev/md/dsk/d3       59903389   59417 59244939     1%    /data1
/dev/md/ihotdr_data/dsk/d410
                     283992675 5482614 275670135     2%    /global/6140a/raid10/u21
/dev/md/ihotdr_data/dsk/d610
                     283992675 166250069 114902680    60%    /global/6140a/raid10/u27
/dev/md/ihotdr_data/dsk/d420
                     283992675 82537567 198615182    30%    /global/6140a/raid10/u31
/dev/md/ihotdr_data/dsk/d500
                     574147509 218637280 349768754    39%    /global/6140a/raid10/u34
/dev/md/ihotdr_data/dsk/d620
                     283992675 77303948 203848801    28%    /global/6140a/raid10/u37
/dev/md/ihotdr_data/dsk/d100
                     574147509 324660758 243745276    58%    /global/6140b/raid10/u11
/dev/md/ihotdr_data/dsk/d210
                     283992675 193061342 88091407    69%    /global/6140b/raid10/u14
/dev/md/ihotdr_data/dsk/d300
                     574147509 40418711 527987323     8%    /global/6140b/raid10/u17
/dev/md/ihotdr_data/dsk/d220
                     283992675 9438051 271714698     4%    /global/6140b/raid10/u24
/dev/md/dsk/d6        498039   41774  406462    10%    /global/.devices/node@1
/dev/md/dsk/d8        498039   41766  406470    10%    /global/.devices/node@2
172.29.23.250:/nethdd/dba_space
                     2917810816 1196943444 1720867372    42%    /retention_vol0
root@mach3 #

Question by:amandowara
7 Comments
 
LVL 22

Assisted Solution

by:blu
blu earned 300 total points
ID: 22650903
Try using "iostat -x -n -p -m"
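The -n option reports device names in the descriptive cXtYdZ format instead of the ssdNN instance names, -p adds per-partition statistics, and -m reports the file system mount point of any mounted partition. A sample invocation (the interval and count are optional additions) would be:

root@mach3 # iostat -x -n -p -m 5 3

The trailing "5 3" samples every 5 seconds, 3 times; the first report is the average since boot, so the later samples show current activity.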
 

Author Comment

by:amandowara
ID: 22652257
Hi blu, when I run your command, I get the following output:

   13.7   57.4  773.9 1004.0  0.0  2.7    0.3   38.5   0   8 c6t600A0B800032D99A0000038747C6C5E5d0
  603.4   67.1 9139.0  536.2  0.0 14.1    0.0   21.1   0  91 c6t600A0B800032D8B20000037347C6C5AAd0
   75.1   49.4 1135.6  420.4  0.0  3.0    0.0   24.1   0  10 c6t600A0B800032D8B20000037147C6C490d0
   13.7   53.7  430.5  431.5  0.0  0.1    0.0    1.2   0   5 c6t600A0B800032EC580000037047C6BDA4d0
  148.7   43.8 2550.2  386.2  0.0  1.6    0.0    8.5   0  49 c6t600A0B800032ED180000035947C6BD47d0
   37.3  168.8 1687.1 1379.2  0.0  0.3    0.0    1.7   0  16 c6t600A0B800032EC580000036E47C6BD00d0
    0.0    0.0    0.2    0.0  0.0  0.0    0.0    8.8   0   0 c6t600A0B800032D99A0000045447DFBC1Fd0
    0.0    0.0    0.2    0.0  0.0  0.0    0.0    8.3   0   0 c6t600A0B800032D99A0000045647DFBCE7d0


Corresponding 'iostat -x' output is:

ssd18       13.7   57.4  773.9 1004.1  0.0  2.7   38.7   0   8
ssd19      603.4   67.1 9139.0  536.2  0.0 14.1   21.1   0  91
ssd20       75.1   49.4 1135.6  420.4  0.0  3.0   24.1   0  10
ssd21       13.7   53.7  430.5  431.5  0.0  0.1    1.2   0   5
ssd22      148.7   43.8 2550.2  386.2  0.0  1.6    8.6   0  49
ssd23       37.3  168.8 1687.1 1379.2  0.0  0.3    1.7   0  16
ssd24        0.0    0.0    0.2    0.0  0.0  0.0    8.8   0   0
ssd25        0.0    0.0    0.2    0.0  0.0  0.0    8.3   0   0
nfs2         0.0    0.0    0.0    0.0  0.0  0.0    0.2   0   0
nfs3        25.1  157.4  597.5 4992.0 24.6  2.4  147.4  25  26

However, I am unable to trace the actual disk from a name in this format: "c6t600A0B800032D99A0000038747C6C5E5d0". Please tell me how I can determine the disk info from this output.

 

Accepted Solution

by:amandowara
amandowara earned 0 total points
ID: 22652271
For example, for ssd19 the corresponding output line is:
 603.4   67.1 9139.0  536.2  0.0 14.1    0.0   21.1   0  91 c6t600A0B800032D8B20000037347C6C5AAd0

How can I find which disk it belongs to?
 
LVL 22

Expert Comment

by:blu
ID: 22652824
Unfortunately, those are in fact the disks. If you go to /dev/dsk and do an ls there, you will find that those ugly names are really the device names for the disks. Did you get any higher level disk names earlier in the output?
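To tie one of those long names back to an ssd instance, something like this should work (the exact /devices path will differ on your system):

root@mach3 # ls -l /dev/dsk/c6t600A0B800032D8B20000037347C6C5AAd0s2
root@mach3 # grep ssd /etc/path_to_inst

The ls shows which /devices node the c6t... link points to, and /etc/path_to_inst lists the instance number (the 19 in ssd19) assigned to each such path, so you can match them up. "format" will also list all the disks with their c6t... names, and since these look like multipathed array LUNs, "luxadm display /dev/rdsk/c6t600A0B800032D8B20000037347C6C5AAd0s2" should show the vendor, product and path details for that LUN.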
 
LVL 2

Assisted Solution

by:pitoren
pitoren earned 200 total points
ID: 22655528
You need the metastat output to see the mapping from the mdXXX names to the underlying devices, whether those are the ssdXX names or the horrible c6t600A0B800032D8B20000037.... names.

Really, this kind of setup needs a design document that describes how (and also why and when) the various disks / 6140s are connected to the various servers, and the purpose of each LUN / disk. That document should be maintained by the sysadmin and the DBA.
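For example (untested; the d numbers are taken from your df output, and ihotdr_data looks like a shared diskset, which needs the -s option):

root@mach3 # metastat -p
root@mach3 # metastat d201
root@mach3 # metastat -s ihotdr_data -p

metastat -p prints every metadevice in md.tab format together with its component cXtYdZ slices, and metastat with a device argument shows the full layout (stripe / mirror / RAID5) of a single metadevice. Cross-reference those slices against the iostat -xn output and /etc/vfstab to see which mount points sit on ssd19, ssd22 and ssd23.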

Kevin
 

Author Comment

by:amandowara
ID: 22664624
Thanks for all your inputs. They helped me trace the disks that were having issues. But for some reason, we couldn't fix the actual problem. We tried to reduce the I/O wait by stopping all the applications running on those disks, but the disks were still showing 90% busy with high read/write rates, and it was causing high CPU utilization as well. Finally, we decided to scrap those disks and convert them from RAID5 to RAID10.
 

Author Comment

by:amandowara
ID: 22664639
Thanks
