Solved

Oracle Database 11gR2 (11.2.0.2) LGWR blocking sessions, Performance very slow

Posted on 2013-06-19
2,061 Views
Last Modified: 2013-10-24
Dear all,

this is a call for all Oracle & Solaris Experts out there.

Currently I'm experiencing a very strange issue on a fairly large Oracle DB installation, running on Solaris 10 x64.

Background info:

- HP Z420 Workstation
- 16GB RAM
- Adaptec 7805Q RAID Controller (6x 1TB VelociRaptor, 2x 512GB SSD MaxCache, RAID6)
- Solaris 10 x64 (01/13)
- Oracle 11gR2 (11.2.0.2)


Problem:
I installed the DB following the best-practices guides for Solaris installations using a ZFS store.
The oracle user has been exclusively granted 12 GB RAM; the SGA_TARGET parameter was set to 10 GB, SGA_MAX_SIZE to 12 GB.
For the ZFS mount, the usual performance practices have been followed: recordsize=4K, logbias=throughput, compression=off, primarycache=all.
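For reference, the dataset properties described above correspond roughly to the following commands (the pool/dataset name is a placeholder, not from the original post):

```shell
# Assumed dataset name; the property values match the setup described above.
zfs create rpool/oradata
zfs set recordsize=4K      rpool/oradata
zfs set logbias=throughput rpool/oradata
zfs set compression=off    rpool/oradata
zfs set primarycache=all   rpool/oradata
```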

Even so, after about 60 minutes of running, the LGWR process comes up and blocks all kinds of other sessions; performance drops through the floor and all the advisors start reporting bad I/O performance.

I have already added some more redo log files to the DB; it helped, but only very little.

We also ran filesystem performance checks, which show a maximum throughput of 900 MB/s. I know this is not how Oracle uses the FS, but it gives an indication of the system's usual performance.


Thank you very much for any help

Regards

Blindy
Question by:BlindOracle
10 Comments
 
LVL 37

Expert Comment

by:Geert Gruwez
ID: 39260138
Have you tried it with less memory?
8 GB target and 9 GB max_size?
You are giving it the limit and leaving nothing for other apps running as the same user...
 
LVL 35

Expert Comment

by:johnsone
ID: 39260477
Based on the information you have, it looks more to me like an I/O problem.  You seem to switch logs after 60 minutes which is why lgwr is picking up and taking over the I/O.  How big are your logs?  More frequent switching of smaller logs may help.  Also, we always wrote logs to their own file system.  That should alleviate some of the I/O bottleneck.  Also, you are going to pay a pretty heavy write penalty with that raid level.  A separate file system possibly just mirrored or mirrored and striped for the archive logs would help.
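To act on the question "How big are your logs?", a quick check of redo size and hourly switch frequency might look like this (a sketch only; it assumes SYSDBA access on the database host):

```shell
# Sketch: inspect redo log sizes and how often logs switch per hour.
sqlplus -s / as sysdba <<'EOF'
SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY hour;
EOF
```

A switch rate far above a few per hour would support the smaller-and-more-frequent-logs suggestion.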
 
LVL 15

Expert Comment

by:Franck Pachot
ID: 39260795
Hi,

I'm not sure that recordsize=4K is good for log writer. It often writes larger I/O sizes; the default, 128k, should be better.

Same for logbias=throughput: the default, latency, should be better for log writer. Sessions are waiting on 'log file sync' at commit, I suppose. This is all about latency.

And you use ZFS caching, but leave very little memory for it: 12 GB SGA + PGA + system ... the 16 GB is quickly reached.
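That arithmetic can be sketched as follows; only the 16 GB total and the 12 GB SGA come from the question, the PGA and OS figures are assumptions:

```shell
# All figures in GB; PGA and OS overhead are assumed, not from the post.
TOTAL=16
SGA=12        # sga_max_size as configured
PGA=2         # assumed pga_aggregate_target
OS=1          # assumed kernel + daemons
echo "left for ZFS ARC: $(( TOTAL - SGA - PGA - OS )) GB"
```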

Regards,
Franck.
 
LVL 62

Expert Comment

by:gheist
ID: 39263727
ZFS uses a lot of RAM; for 6 TB, some 6 GB will be consumed by ZFS vnode structures alone.

I am backing the option to reduce Oracle memory (6 GB for the cache, so the SGA must be about half of the rest).
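As back-of-envelope shell arithmetic (the 6 GB ARC figure is the estimate above; the halving is the stated rule of thumb):

```shell
TOTAL=16   # GB of RAM in the box
ARC=6      # estimated ZFS vnode/ARC consumption for 6 TB
REST=$(( TOTAL - ARC ))
echo "suggested SGA ceiling: $(( REST / 2 )) GB"
```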
 

Author Comment

by:BlindOracle
ID: 39264216
Dear all,
thank you very much for the suggestions.
Regarding franckpachot's reply: an Oracle whitepaper suggests my configured values for ZFS as best practice... :S

I'm going to play with the memory settings and update this thread.

Please feel free to get me additional suggestions.

Thank you

Blindy
 
LVL 62

Expert Comment

by:gheist
ID: 39264582
Stop Oracle.
Check how much memory is actually free.
Half of that can be used for Oracle (you need some 2 GB free for all the Oracle memory leeches around the database).
 
LVL 15

Expert Comment

by:Franck Pachot
ID: 39265316
>> an Oracle whitepaper suggests my set values for the ZFS as best practice
For datafiles, maybe. But are you sure they recommend that for redo?
And for datafiles, do they really recommend a 4k record size? I suppose your Oracle blocks are larger than that, no?
 

Author Comment

by:BlindOracle
ID: 39265448
For redo logs, the suggestion is 128k and logbias=latency.
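In ZFS terms, that would mean a dedicated redo dataset along these lines (the dataset name is a placeholder):

```shell
# Assumed dataset name; recordsize=128K and logbias=latency per the whitepaper.
zfs create rpool/oraredo
zfs set recordsize=128K rpool/oraredo
zfs set logbias=latency rpool/oraredo
```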
 
LVL 62

Accepted Solution

by:
gheist earned 500 total points
ID: 39265570
ZFS's optimal block size is 128k.
So it is good for DWH, not so good for OLTP.
 
LVL 15

Expert Comment

by:Franck Pachot
ID: 39265626
>> For redo logs, the suggestion is 128k and logbias=latency.
Yes, that makes sense!
If you can test that, your LGWR performance should be better, after making sure that you do not allocate too much memory.
