Solved

Oracle import utility running 10 times slower than normal

Posted on 2009-06-30
361 Views
Last Modified: 2013-12-19
I have an issue with an import that has now been running for 27 hours; in the past this same import has taken about 7 hours. I am trying to import about 163 GB of data. The export was done with the old exp utility, so I cannot use Data Pump. Has anyone experienced a slowdown like this before, and what can I do to resolve it? My UNIX team has offered very little so far, so I am stuck figuring this out on my own.

I am running on an AIX 5.3 server, importing from a SAN disk into the database through a pipe that happens to sit on a NAS drive. As I said, this import took about 7 hours in the past. So far I have tried the following to speed up the import, to no avail (a rough sketch of the pipe setup and command line follows the list).

Used a larger BUFFER size: we used 90M instead of 64M and saw an increase in performance, but nothing significant. I think I should have increased it even higher and will next time I attempt the import.
INDEXES=N during import: we did this also.
The database is not in archive log mode.
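
For reference, here is roughly the shape of the pipe setup and command line we use. The paths, credentials, and parameter values below are placeholders, not our exact ones:

# Create a named pipe and stream the export dump through it in the background.
# Paths and the system password are placeholders.
mknod /import/exp_full.pipe p
cat /san/dumps/exp_full.dmp > /import/exp_full.pipe &

# Classic imp utility: BUFFER is in bytes (94371840 is roughly 90M),
# INDEXES=N defers the index builds, COMMIT=Y commits after each array insert.
imp system/placeholder file=/import/exp_full.pipe full=y \
    buffer=94371840 indexes=n commit=y log=/import/imp_full.log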

Will award extra points for a viable solution

Question by:batzman
8 Comments
 
LVL 40

Assisted Solution

by:mrjoltcola
mrjoltcola earned 250 total points
ID: 24750751
Are you perhaps importing over the network?

Has the SAN configuration changed?

Is CPU consumption high on the database server?

Were your past imports done in the same time of day with the same system load?
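
If it helps, something along these lines will show whether the time is going to CPU or to waits, and whether it is the import session itself. The filter on the program name is just an assumption about how imp shows up in v$session:

# OS level (AIX): watch CPU and disk activity while the import runs
vmstat 5 5
iostat 5 5

# Database level: what the import session is currently waiting on
sqlplus -s "/ as sysdba" <<'EOF'
SET LINESIZE 200 PAGESIZE 100
SELECT s.sid, s.program, w.event, w.state, w.seconds_in_wait
FROM   v$session s, v$session_wait w
WHERE  w.sid = s.sid
AND    UPPER(s.program) LIKE '%IMP%';
EOF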

 

Author Comment

by:batzman
ID: 24752655
Thanks for your comment. I am not importing over a network. Yes, CPU consumption is high. The import was started over the weekend during a very light system load.
 
LVL 40

Assisted Solution

by:mrjoltcola
mrjoltcola earned 250 total points
ID: 24756220
CPU consumption is high from your import or from other activity on the system?

I would be asking the SA guys whether something changed on the SAN or if your network link was reprovisioned to a different switch, etc.

Anything in the alert log of interest?

It's highly unusual to see that much of a symptom change if nothing else on the system changed. You could review AWR/ADDM reports if you have a baseline to compare against; without a baseline they may not be much help, except to identify the _current_ bottleneck.
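
If you are licensed for it, the stock AWR report script is the quickest way to get a picture of the current bottleneck; on older releases Statspack's spreport.sql is the rough equivalent. It prompts for the report format, how many days of snapshots to list, and the begin/end snapshot IDs:

# Generate an AWR report interactively for the import window
sqlplus "/ as sysdba" "@?/rdbms/admin/awrrpt.sql"

# Statspack alternative on older releases:
# sqlplus perfstat "@?/rdbms/admin/spreport.sql"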
 

Author Comment

by:batzman
ID: 24757122
Yes, CPU usage is very high, and right now I can't do anything about that. I have increased the BUFFER parameter to double what it was, from 120M to 250M. I was also using a pipe that existed on an NFS mount, which I believe was hurting me as well, so I moved the pipe to a local SAN disk and am rerunning the import now. I ran the import with trace on for 30 minutes and sent the output to Oracle for review. I also doubled my SGA and turned off logging for all indexes (a sketch of how I generated those statements is below). I could not turn off the index logging for LOBs, but I will have to make do with what I did. Hopefully what I added here will help someone in the future. I think I will close this question, as there appears to be no magic bullet for this. If I see a significant increase in performance, I will update this question before I close it.
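
For anyone who finds this later, this is roughly how I generated the NOLOGGING statements. The schema name is a placeholder for the one I am importing into:

# Build and run ALTER INDEX ... NOLOGGING for one schema; 'MYSCHEMA' is a
# placeholder. LOB indexes are skipped because they cannot be altered,
# which matches what I ran into with the LOB segments.
sqlplus -s "/ as sysdba" <<'EOF'
SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF TRIMSPOOL ON
SPOOL nologging_indexes.sql
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' NOLOGGING;'
FROM   dba_indexes
WHERE  owner = 'MYSCHEMA'
AND    index_type <> 'LOB';
SPOOL OFF
@nologging_indexes.sql
EOF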

 
LVL 40

Accepted Solution

by:mrjoltcola
mrjoltcola earned 250 total points
ID: 24757204
It's not really a buffer issue; it's probably more of a "how fast can we write redo and data to disk" issue. You could tune the checkpoint behavior by tweaking FAST_START_MTTR_TARGET. Check that your redo logs are on dedicated volumes or spindles.

Increasing the SGA might help indirectly by increasing the redo log buffer, but otherwise the larger cache won't help much, I think.
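
For what it's worth, this is a quick way to see the current checkpoint and redo setup before changing anything; the ALTER SYSTEM at the end is commented out and the value is only an example:

# Inspect the checkpoint target, log buffer, and redo log placement
sqlplus -s "/ as sysdba" <<'EOF'
SHOW PARAMETER fast_start_mttr_target
SHOW PARAMETER log_buffer
SELECT l.group#, l.bytes/1024/1024 AS mb, l.status, f.member
FROM   v$log l, v$logfile f
WHERE  f.group# = l.group#
ORDER  BY l.group#;
-- Example only: relax the MTTR target so checkpoints fire less often
-- ALTER SYSTEM SET fast_start_mttr_target = 300;
EOF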

I think NFS was probably the biggest bottleneck. I was not aware that you were using NFS vs previously using some other SAN protocol (iSCSI?).

Anyway, good luck, I hope it works out. Feel free to ping back here if you get more info, even if the question is closed.
 

Author Comment

by:batzman
ID: 24799830
No solution yet. I am continuing to work with Oracle to find a resolution. I will bounce the server at the next scheduled maintenance window to start fresh for the next attempt at this import. I have read that AIX sometimes does not release memory when processes complete (a quick check for that is sketched below). In the meantime, the import did complete; it took 62 hours.
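
In case it is useful to anyone, these are the standard AIX commands I plan to use to look at memory before and after the bounce (nothing Oracle-specific here):

# Overall memory and paging picture on AIX
svmon -G
vmstat -v

# Top 10 processes by memory use
svmon -P -t 10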
 

Author Closing Comment

by:batzman
ID: 31598589
As usual, thanks to the community for responding. I have not completely solved my issue, but I appreciate all of the feedback. I will continue to work this issue with Oracle support.
 
LVL 40

Expert Comment

by:mrjoltcola
ID: 24803292
Let us know what you find. Good luck.
