  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 2433

Oracle Database 11gR2 (11.2.0.2) LGWR blocking sessions, Performance very slow

Dear all,

This is a call to all Oracle & Solaris experts out there.

Currently I'm experiencing a very strange issue on a fairly large Oracle DB installation running on Solaris 10 x64.

Background info:

- HP Z420 Workstation
- 16GB RAM
- Adaptec 7805Q RAID Controller (6x 1TB VelociRaptor, 2x 512GB SSD MaxCache, RAID6)
- Solaris 10 x64 (01/13)
- Oracle 11gR2 (11.2.0.2)


Problem:
I installed the DB following the best-practice guides for Solaris installations using a ZFS store.
The oracle user has been given 12GB RAM exclusively; the SGA_TARGET parameter was set to 10GB and SGA_MAX_SIZE to 12GB.
For the ZFS mount, the usual performance practices have been followed: recordsize=4K, logbias=throughput, compression=off, primarycache=all.
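
For reference, those properties would have been applied with commands roughly like the following (the dataset name oradata/db is only a placeholder, not the real one):

    # hypothetical dataset name; property values as described above
    zfs set recordsize=4k oradata/db
    zfs set logbias=throughput oradata/db
    zfs set compression=off oradata/db
    zfs set primarycache=all oradata/db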

Even so, after about 60 minutes of running, the LGWR process comes up and blocks all kinds of other sessions; performance drops through the floor and all advisors start reporting bad I/O performance.
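
A rough way to see what sessions are waiting on during such a phase (sketch only, run as SYSDBA):

    -- non-idle waits grouped by event; LGWR trouble typically shows up as
    -- 'log file sync' (foreground sessions) and 'log file parallel write' (LGWR)
    SELECT event, COUNT(*) AS sessions
    FROM v$session
    WHERE wait_class <> 'Idle'
    GROUP BY event
    ORDER BY sessions DESC;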

I already attached some more redo log files to the DB; it helped, but only very little.
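
For reference, adding redo log groups looks like this (group numbers, sizes and paths are illustrative only):

    ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04.log') SIZE 1G;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/ORCL/redo05.log') SIZE 1G;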

We also ran filesystem performance checks, which showed a maximum throughput of 900MB/sec. I know this is not how Oracle uses the FS, but it gives an indication of the system's usual performance.
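
A figure like that typically comes from a plain sequential-write test of roughly this form (path and size are examples only):

    # write an 8GB file sequentially and time it; MB/sec = 8192 / elapsed seconds
    time dd if=/dev/zero of=/u01/oradata/ddtest bs=1024k count=8192
    rm /u01/oradata/ddtest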


Thank you very much for any help

Regards

Blindy
 
Geert Gruwez (Oracle DBA) commented:
Have you tried it with less memory?
An 8GB target and a 9GB max_size?
You're giving it the limit and not leaving anything for other apps running under the same user...?
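
A sketch of that change (SGA_MAX_SIZE is a static parameter, so the instance needs a restart for it to take effect):

    ALTER SYSTEM SET sga_max_size=9G SCOPE=SPFILE;
    ALTER SYSTEM SET sga_target=8G SCOPE=SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP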
 
johnsone (Senior Oracle DBA) commented:
Based on the information you have given, it looks more to me like an I/O problem. You seem to switch logs after 60 minutes, which is why LGWR is picking up and taking over the I/O. How big are your logs? More frequent switching of smaller logs may help. Also, we always wrote logs to their own file system; that should alleviate some of the I/O bottleneck. And you are going to pay a pretty heavy write penalty with that RAID level. A separate file system, possibly just mirrored or mirrored and striped, for the archive logs would help.
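
To answer the log-size and switch-rate questions, something along these lines shows both (a sketch, run from SQL*Plus as a privileged user):

    -- current redo log groups and their sizes
    SELECT group#, bytes/1024/1024 AS mb, status FROM v$log;

    -- log switches per hour over the last 24 hours
    SELECT TRUNC(first_time, 'HH24') AS hour, COUNT(*) AS switches
    FROM v$log_history
    WHERE first_time > SYSDATE - 1
    GROUP BY TRUNC(first_time, 'HH24')
    ORDER BY hour;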
 
Franck Pachot commented:
Hi,

I'm not sure that recordsize=4K is good for the log writer. It often writes larger I/O sizes. The default of 128K should be better.

The same goes for logbias=throughput; the default, latency, should be better for the log writer. Sessions are waiting on 'log file sync' at commit, I suppose. This is all about latency.

And you use ZFS caching, but leave very little memory for it: 12GB SGA + PGA + system... the 16GB is quickly used up.

Regards,
Franck.
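
Before changing anything, the current settings can be confirmed with something like this (the dataset name is a placeholder):

    # current values of the properties discussed above
    zfs get recordsize,logbias,primarycache,compression oradata/db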

 
gheist commented:
ZFS uses a lot of RAM; for 6TB, around 6GB will be consumed by ZFS vnode structures alone.

I am backing the option to reduce Oracle memory (6GB for the cache, so the SGA must be about half of what remains).
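
One way to see, and if necessary cap, the ZFS ARC on Solaris 10 (the 4GB cap below is only an example, not a value recommended in this thread):

    # kernel memory breakdown, including the share held by ZFS file data (run as root)
    echo ::memstat | mdb -k

    # optional: cap the ARC at 4GB via an entry in /etc/system, takes effect after a reboot
    set zfs:zfs_arc_max = 4294967296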
 
BlindOracle (Author) commented:
Dear all,
thank you very much for the suggestions.
Regarding franckpachot's reply: an Oracle whitepaper suggests the values I set for ZFS as best practice... :S

I'm going to play with the memory settings and update this thread.

Please feel free to send me additional suggestions.

Thank you

Blindy
 
gheist commented:
Stop Oracle.
Check the output of free.
Half of that can be used for Oracle (you need some 2GB free for all the Oracle memory leeches around the database).
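
Solaris 10 has no free command as such; a rough equivalent with the database stopped would be (sketch only):

    # physical memory and swap summary
    vmstat 3 3
    swap -s

    # largest resident processes left after Oracle is down
    prstat -s rss -n 10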
 
Franck Pachot commented:
>> an Oracle whitepaper suggests my set values for the ZFS as best practice
For datafiles, maybe. But are you sure they recommend that for redo?
And for datafiles, do they really recommend a 4K record size? I suppose your Oracle blocks are bigger than that, no?
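
Checking the actual Oracle block size against that 4K record size is a one-liner in SQL*Plus:

    SHOW PARAMETER db_block_size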
 
BlindOracle (Author) commented:
For redo logs, the suggestion is 128k and logbias=latency.
 
gheist commented:
The optimal ZFS block size is 128K.
So it is good for DWH, not so good for OLTP.
 
Franck Pachot commented:
>> For redo logs, the suggestion is 128k and logbias=latency.
Yes, that makes sense!
If you can test that, your LGWR performance should be better.
Once you are sure that you do not allocate too much memory.
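
A sketch of applying that to a dedicated dataset for the online redo logs (the dataset name is hypothetical; note that an existing file keeps its old record size, so the logs would need to be recreated or moved onto the new dataset):

    # hypothetical dedicated dataset for the online redo logs
    zfs create -o recordsize=128k -o logbias=latency oradata/redo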
