Solved

Oracle Database 11gR2 (11.2.0.2) LGWR blocking sessions, Performance very slow

Posted on 2013-06-19
1,897 Views
Last Modified: 2013-10-24
Dear all,

this is a call for all Oracle & Solaris Experts out there.

Currently I'm experiencing a very strange issue on a fairly large Oracle DB installation, running on Solaris 10 x64.

Background info:

- HP Z420 Workstation
- 16GB RAM
- Adaptec 7805Q RAID Controller (6x 1TB VelociRaptor, 2x 512GB SSD MaxCache, RAID6)
- Solaris 10 x64 (01/13)
- Oracle 11gR2 (11.2.0.2)


Problem:
I installed the DB following the best-practice guides for Solaris installations using ZFS storage.
The oracle user has been exclusively granted 12GB RAM; SGA_TARGET was set to 10GB and SGA_MAX_SIZE to 12GB.
For the ZFS mount, the usual performance recommendations have been followed: recordsize=4K, logbias=throughput, compression=off, primarycache=all (the exact settings are shown at the end of this post).

Despite that, we are experiencing that after about 60 minutes of running, the LGWR process comes up and blocks all kinds of other sessions; performance drops through the floor and all the advisors start reporting bad I/O performance.

I have already added some more redo log files to the DB; it helped, but only very little.

We also ran filesystem performance checks, which showed a maximum throughput of about 900MB/sec. I know this is not how Oracle uses the FS, but it gives an indication of the system's raw performance.
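
For reference, the properties were applied more or less like this (the dataset name oradata/db below is just a placeholder, not our actual pool layout):

# ZFS dataset properties as currently set (the "blocksize" from the whitepaper is the recordsize property)
zfs set recordsize=4k oradata/db
zfs set logbias=throughput oradata/db
zfs set compression=off oradata/db
zfs set primarycache=all oradata/db
# verify
zfs get recordsize,logbias,compression,primarycache oradata/db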


Thank you very much for any help

Regards

Blindy
Question by:BlindOracle

10 Comments
 
LVL 37

Expert Comment

by:Geert Gruwez
ID: 39260138
Have you tried it with less memory?
SGA_TARGET of 8GB and SGA_MAX_SIZE of 9GB?
Are you giving it the limit and leaving nothing for the other apps that run under the same user ...?
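
Roughly like this, assuming the instance runs from an spfile (SGA_MAX_SIZE is static, so a restart is needed):

# shrink the SGA in the spfile and bounce the instance
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET sga_max_size=9G SCOPE=SPFILE;
ALTER SYSTEM SET sga_target=8G SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
EOF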
 
LVL 34

Expert Comment

by:johnsone
ID: 39260477
Based on the information you have given, it looks more like an I/O problem to me.  You seem to switch logs after about 60 minutes, which is why LGWR picks up and takes over the I/O.  How big are your logs?  More frequent switching of smaller logs may help.  Also, we always wrote logs to their own file system; that should alleviate some of the I/O bottleneck.  And you are going to pay a pretty heavy write penalty with that RAID level.  A separate file system for the archive logs, possibly just mirrored, or mirrored and striped, would help.
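
A rough way to check the current log sizes and switch rate, and to add a bigger group on its own file system (the /redo path is only an example):

sqlplus -s / as sysdba <<'EOF'
-- current online redo log sizes
SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;
-- log switches per hour
SELECT TRUNC(first_time,'HH24') AS hour, COUNT(*) AS switches
  FROM v$log_history
 GROUP BY TRUNC(first_time,'HH24')
 ORDER BY 1;
-- example: add a group on a dedicated file system
ALTER DATABASE ADD LOGFILE GROUP 5 ('/redo/redo05a.log') SIZE 512M;
EOF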
 
LVL 15

Expert Comment

by:Franck Pachot
ID: 39260795
Hi,

I'm not sure that recordsize=4K is good for the log writer. It often writes larger I/O sizes; the default of 128K should be better.

The same goes for logbias=throughput: the default, latency, should be better for the log writer. I suppose your sessions are waiting on 'log file sync' at commit. This is all about latency.

And you use ZFS caching, but leave very little memory for it: with a 12GB SGA plus PGA plus the system itself, the 16GB are quickly reached.
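
If the redo logs sit on their own dataset, reverting those two properties (recordsize and logbias) would look roughly like this (the dataset name is a placeholder; note that a recordsize change only affects blocks written afterwards):

zfs set logbias=latency oradata/redo
zfs set recordsize=128k oradata/redo
zfs get recordsize,logbias oradata/redo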

Regards,
Franck.
 
LVL 62

Expert Comment

by:gheist
ID: 39263727
ZFS uses a lot of RAM; for 6TB, roughly 6GB will be consumed by ZFS vnode structures alone.

I am backing the option to reduce Oracle memory (6GB for the cache, so the SGA should be something like half of what is left).
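
On Solaris the ARC can also be capped in /etc/system so it does not fight the SGA for the 16GB; the 4GB below is only an example and takes effect after a reboot:

# cap the ZFS ARC at 4GB (example value) - requires a reboot
echo "set zfs:zfs_arc_max = 0x100000000" >> /etc/system
# current ARC size can be watched with:
kstat -n arcstats -s size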
 

Author Comment

by:BlindOracle
ID: 39264216
Dear all,
thank you very much for the suggestions.
About franckpachot's reply: an Oracle whitepaper suggests the values I set for ZFS as best practice... :S

I'm going to play with the memory settings and update this thread.

Please feel free to get me additional suggestions.

Thank you

Blindy
 
LVL 62

Expert Comment

by:gheist
ID: 39264582
Stop Oracle.
Check the output of free.
Half of that can be used for Oracle (you need some 2GB free for all the Oracle memory leeches around the database).
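
Solaris has no free command, so rough equivalents for checking what is left once Oracle is stopped would be:

vmstat 5 3                    # the "free" column is in KB
echo "::memstat" | mdb -k     # kernel / ZFS / anon breakdown, run as root
prstat -s rss -n 10           # top processes by resident memory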
 
LVL 15

Expert Comment

by:Franck Pachot
ID: 39265316
>> an Oracle whitepaper suggests my set values for the ZFS as best practice
For datafiles, maybe. But are you sure they recommend that for redo?
And for datafiles, do they really recommend a 4K record size? I suppose your Oracle blocks are larger than that, no?
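
A quick cross-check of the two values (dataset name again just a placeholder):

sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER db_block_size
EOF
zfs get recordsize oradata/db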
 

Author Comment

by:BlindOracle
ID: 39265448
For redo logs, the suggestion is 128k and logbias=latency.
 
LVL 62

Accepted Solution

by:
gheist earned 500 total points
ID: 39265570
ZFS's optimal blocksize is 128K.
So it is good for DWH (large sequential I/O), not so good for OLTP, where small random writes inside a large record cause read-modify-write overhead.
 
LVL 15

Expert Comment

by:Franck Pachot
ID: 39265626
>> For redo logs, the suggestion is 128k and logbias=latency.
Yes, that makes sense!
If you can test that, your LGWR performance should be better, once you have also made sure that you are not allocating too much memory.
