Solved

I have high "db file scattered read" times and am not sure where to start fixing it

Posted on 2006-06-23
3,482 Views
Last Modified: 2013-12-11
As you can see below, there are heavy wait times on practically everything. I'm not sure where to start on this one; can anybody point me in the right direction?

Event      Total Waits      Waits/s      % Total Waits      Total Wait Time      Wait Time/s      % Total Wait Time      Avg Wait
db file scattered read      1,427,013      66.65      79.40      10,003,740.00      443.06      22.25      6.65
SQL*Net break/reset to client      104      0.00      0.00      435,080.00      0.00      0.00      0.00
db file sequential read      496,018      2.66      3.16      213,150.00      6.48      0.33      2.44
buffer busy waits      455      5.89      7.02      75,730.00      1,533.23      77.00      260.11
SQL*Net more data to client      926,010      0.00      0.00      67,180.00      0.00      0.00      0.00
enqueue      27      0.00      0.00      27,890.00      0.00      0.00      0.00
control file parallel write      34,268      0.32      0.39      25,970.00      0.00      0.00      0.00
control file sequential read      133,178      0.06      0.08      23,590.00      0.00      0.00      0.00
log file parallel write      5,295      0.19      0.23      14,630.00      0.00      0.00      0.00
log file sync      4,222      0.19      0.23      13,530.00      0.00      0.00      0.00
latch free      573      0.52      0.62      11,590.00      8.42      0.42      16.25
log file switch (checkpoint incomplete)      14      0.00      0.00      9,470.00      0.00      0.00      0.00
log file switch (archiving needed)      14      0.00      0.00      9,340.00      0.00      0.00      0.00
log file switch completion      66      0.00      0.00      6,940.00      0.00      0.00      0.00
SQL*Net message to client      1,339,510      3.89      4.63      6,580.00      0.00      0.00      0.00
file open      4,788      0.00      0.00      6,160.00      0.00      0.00      0.00
direct path read      33,171      2.91      3.47      5,090.00      0.00      0.00      0.00
log file sequential read      1,282      0.00      0.00      2,480.00      0.00      0.00      0.00
log buffer space      5      0.00      0.00      1,830.00      0.00      0.00      0.00
refresh controlfile command      16,540      0.00      0.00      1,300.00      0.00      0.00      0.00
direct path write      3,778      0.65      0.77      520.00      0.00      0.00      0.00
file identify      399      0.00      0.00      230.00      0.00      0.00      0.00
process startup      1      0.00      0.00      80.00      0.00      0.00      0.00
log file single write      146      0.00      0.00      80.00      0.00      0.00      0.00
LGWR wait for redo copy      11      0.00      0.00      10.00      0.00      0.00      0.00
library cache pin      3      0.00      0.00      10.00      0.00      0.00      0.00
db file parallel write      922      0.00      0.00      0.00      0.00      0.00      0.00
SQL*Net more data from client      116      0.00      0.00      0.00      0.00      0.00      0.00
Question by: sea2sky

3 Comments

Expert Comment by: MohanKNair (ID: 16981450)
The statspack report shows the Top 5 wait events. What are those events? Pay the most attention to those and analyze how to reduce the waits.
Also compare a statspack report taken while the database is functioning normally with one taken during the heavy-activity period.
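
One quick way to see those top events outside of statspack is to query V$SYSTEM_EVENT directly. This is only a minimal sketch (not part of the original comment), assuming you have access to the V$ views; the list of idle events excluded below is illustrative, not exhaustive.

-- Minimal sketch: top wait events system-wide since instance startup.
-- TIME_WAITED and AVERAGE_WAIT are reported in centiseconds.
SELECT *
  FROM (SELECT event, total_waits, time_waited, average_wait
          FROM v$system_event
         WHERE event NOT IN ('SQL*Net message from client',
                             'rdbms ipc message',
                             'pmon timer',
                             'smon timer')   -- idle events; illustrative list only
         ORDER BY time_waited DESC)
 WHERE ROWNUM <= 10;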
Expert Comment by: shahidns (ID: 17000508)
Try rebuilding all your indexes for a change. That should solve some of the problems.
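
For reference, a rebuild looks like the following. This is purely a hypothetical example, not taken from this thread: the owner and index names are placeholders, and ONLINE rebuilds require Enterprise Edition.

-- Hypothetical example; owner and index name are placeholders.
ALTER INDEX scott.emp_name_idx REBUILD ONLINE;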
Accepted Solution by: sudhi022299 (earned 250 total points, ID: 17041256)
I wouldn't go straight to rebuilding the indexes. As MohanKNair mentioned, I would start with the 'Top 5' and go from there. Even in a perfectly *acceptable* running system there are bound to be I/O events, so don't just jump at them and start changing things. The things that should give you a clue are:

1. The Top 5 wait events of the statspack report.
2. The top SQL statements doing a lot of logical I/O (see the query sketch after this list).
3. The top SQL statements consuming memory (if memory consumption is a concern).
...
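
As a rough sketch of point 2 (not from the original answer), the statements doing the most I/O can be pulled from V$SQL; for "db file scattered read" in particular, ordering by DISK_READS highlights the full-scan-heavy suspects.

-- Sketch: top statements by logical I/O (buffer gets); change the ORDER BY
-- to disk_reads to focus on multiblock-read (full scan) statements.
SELECT *
  FROM (SELECT sql_text,
               buffer_gets,
               disk_reads,
               executions,
               ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec
          FROM v$sql
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;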

Also, don't forget to dig for answers to questions like "When did things start going into an unacceptable state?" and "What were the timing values in an acceptable state?", etc.
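
Answering those "when did it change" questions requires snapshots from both the good and the bad periods. Assuming statspack is already installed under the PERFSTAT schema, the basic routine is a sketch like this (the report script path is the standard one shipped with Oracle):

-- Connected as PERFSTAT, take a snapshot now
-- (repeat on a schedule, e.g. from a DBMS_JOB):
EXEC statspack.snap;

-- Later, in SQL*Plus, build a report between two snapshot IDs
-- (the script prompts for the begin and end snap IDs):
-- @?/rdbms/admin/spreport.sql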

Have fun tuning.