Solved

init.ora values

Posted on 2001-06-30
1,444 Views
Last Modified: 2012-06-21
Our Server configuration is as follows: Dual PIII 700 Xeon Processor, 1 GB RAM, 3*18 GB SCSI Hard Disks.

We plan to use Win2k as our OS. The Server will work primarily as a database server (8.1.7) but would also function as a file-server.

There are about 50 Oracle users, with about 30-40 concurrently using our application developed in Developer 2000 (shortly to be upgraded to 6i).

I need to know the optimum parameter values in the init.ora file for these:
open_cursors, db_block_buffers, db_block_size, shared_pool_size, large_pool_size, processes, sort_area_size and any other relevant parameters.

I am also wondering if we can use the PARALLEL option since we have 2 CPUs?

What are the tradeoffs for using archived-redo-logs, and will it be useful for our scenario?

My intention is to learn as much as I can without causing performance lags. So any and all help/advice will be appreciated.

Question by:newbie123
7 Comments
 
LVL 9
Accepted Solution by: dbalaski (earned 100 total points)
Well, it is really difficult to give those parameters without seeing the application and being a little more familiar with its overall operation. So let me say this: any answer given will be really subjective!

> I need to know the optimum parameter values in the init.ora file for these:
> open_cursors, db_block_buffers, db_block_size, shared_pool_size, large_pool_size, processes,
> sort_area_size and any other relevant parameters.


db_block_size -- should be set large enough to hold your largest row without causing row migration.
    By default on my systems, I generally start with an 8K block size.
    The one thing I can say is that a 2K block size is generally too small and really doesn't help performance.
    I joke around and tell people:
    "Use a 2K block size if you want to stress-test Oracle on your hardware platform."

db_block_buffers -- the rule of thumb is that the total SGA should not exceed around 1/2 of the node's
     real memory. This is really dependent upon a number of factors -- how much data, the # of users
     (remember that user processes take up memory in PGA/UGA), etc.
     You really need to watch your database to determine the optimal size...
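
A rough way to watch it once things are running (just a sketch of the standard queries):
     -- buffer cache hit ratio from V$SYSSTAT (the usual rule of thumb is to keep this high, 90%+)
     SELECT 1 - (pr.value / (dbg.value + cg.value)) AS buffer_hit_ratio
       FROM v$sysstat pr, v$sysstat dbg, v$sysstat cg
      WHERE pr.name  = 'physical reads'
        AND dbg.name = 'db block gets'
        AND cg.name  = 'consistent gets';
     -- and check the total SGA against the 1/2-of-real-memory rule of thumb
     SELECT * FROM v$sga;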

shared_pool_size -- again, there is a rule of thumb -- but it really depends upon the database --
    how many tables and other objects (remember, the SHARED_POOL holds the library cache, dictionary
    cache, & the shared SQL area)... It will also depend upon the type of SQL statements issued...
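
Again, just a sketch of what I'd watch for the shared pool:
     -- library cache: reloads should be a tiny fraction of pins
     SELECT SUM(pins) AS pins, SUM(reloads) AS reloads FROM v$librarycache;
     -- dictionary cache: getmisses should be a small fraction of gets
     SELECT SUM(gets) AS gets, SUM(getmisses) AS getmisses FROM v$rowcache;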

Okay -- these two are the key parameters.
     From the little information given, I would start with (and modify later as I watched the hit rates):
db_block_size   = 8192    # 8K block size
db_block_buffers   = 12800     #  100MB for db_cache
shared_pool_size  = 33554432  # 32MB for shared_pool

large_pool_size -- this is really only an applicable parameter if you are using 1) Parallel Query
  --or-- 2) Multi-Threaded Server --- if not using either, then set it to zero

processes -- set this to (total # of concurrent users) + (total # of Oracle background processes)
          you said you will have approx 40 concurrent connections? The total # of background processes
      depends upon your setup (it is at least 5).
I would set processes to at least 50.
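
Once the system is up, a quick check of actual usage against the limit (sketch only):
     -- current Oracle server and background processes vs. the PROCESSES setting
     SELECT COUNT(*) FROM v$process;
     SELECT value FROM v$parameter WHERE name = 'processes';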

sort_area_size -- this is another parameter you need to watch -- if you are noticing a high # of sorts to disk
vs. memory sorts, then you are going to need to increase this. Probably start with 256K or 512K.
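
The sketch I use to watch that ratio:
     -- disk sorts should stay a small fraction of memory sorts
     SELECT name, value FROM v$sysstat
      WHERE name IN ('sorts (memory)', 'sorts (disk)');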

log_buffer -- important parameter -- holds the blocks to be written to the redo logs.
Generally I figure a good starting place is approx (25 * db_block_size)... in the case of an 8K block size,
that would be: 204800
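
To see whether the log buffer is big enough, watch (again just a sketch):
     -- should stay near zero; if it keeps climbing, bump log_buffer up
     SELECT value FROM v$sysstat WHERE name = 'redo log space requests';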

open_cursors -- also dependent -- in several Oracle D2K applications I have administered, it ranged from 500 to 900..... start with 500
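
A rough way to see how many cursors your sessions actually hold open (sketch only):
     -- open cursors held per session
     SELECT sid, COUNT(*) AS cursors_open
       FROM v$open_cursor
      GROUP BY sid
      ORDER BY 2 DESC;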


As far as parallel goes -- yes, you can use it with multiple CPUs -- but I really wouldn't touch it yet --- you need to get a handle on the basics a little more before experimenting....

Archived redo logs --- a definite good choice, especially if you want point-in-time recovery!
Downsides -- you are going to need space for the archived logs to be placed. You are also going to need to
back them up and delete them (after they are backed up)...
If the destination for the archive logs fills up -- the Oracle database will stop until space is cleared..
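
For reference, turning archiving on looks roughly like this (a sketch -- the destination path and format are only examples, adjust them for your disks):
     # init.ora  (example destination -- adjust to your disks)
     log_archive_start  = true
     log_archive_dest   = 'D:\oracle\arch'
     log_archive_format = 'arch_%t_%s.arc'

     -- then, connected as SYSDBA (SQL*Plus or Server Manager):
     SHUTDOWN IMMEDIATE
     STARTUP MOUNT
     ALTER DATABASE ARCHIVELOG;
     ALTER DATABASE OPEN;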

From your questions --   I am going to STRONGLY SUGGEST  that you consider taking a class from ORACLE
Oracle8x Backup and Recovery Workshop   --  Course ID: 30021_TCC  |  Duration: 3 Days

Backup and recovery operations are CRITICAL for a DBA  
You need to get a good handle on the options and good practices...

If you cannot take the course,  I would at least recommend  you purchase the following book:
Oracle 8i Backup and Recovery Handbook:  Implement Sound Data Protection Techniques
by Rama Velpuri, Anand Adkoli, George Williams
 McGraw-Hill Professional Publishing; ISBN: 0072127171   -- apx $40


Hope you find this advice  helpful and hope it answers your question...
Sincerely,
dBalaski

LVL 2
Expert Comment by: noriegaa
I recommend the following parameters for a large configuration:

open_cursors = 1800
db_block_buffers = 24625
db_block_size = 16384
shared_pool_size = 65536000
large_pool_size = 32768000
#java_pool_size = 22102400
processes = 2000
sort_area_size = 256

This will leave about half a gigabyte free on your server for other purposes and to help its performance. If your database is not really that large, then you may choose to reduce the number of buffers accordingly. A large block size will help you with any data warehousing needs, such as OWB.

It is a terrific idea to use the parallel server:

I also recommend that you run Oracle Expert from OEM once you start using your database, to increase performance and improve the quality of your database services.

Anthony

These values are "arbitrary", since I do not know all the details, so you may need to adjust them yourself.
parallel_server_instances = 7

parallel_min_servers = 1

parallel_max_servers = 5

It is always good to use archivelog mode, even in some development and QA environments.


Make it count!

Best wishes with your project...
LVL 9
Expert Comment by: dbalaski
I really don't agree with/understand noriegaa's proposed answer,
especially since newbie123's server is going to be used as a file server as well.
(I do not think you should dedicate the Oracle database server node to being a file server as well as the database server -- I've always found that dedicating the server to one task -- either as a database server or as a file server -- leads to better economies of scale.)

1) Parallel server --   having a single box with 2 CPUs does not mean parallel server -- I think you mean parallel query..

2) Why PROCESSES=2000??
  He is going to have approx 40 concurrent users out of 50.
  I can see doubling the processes (plus adding the background processes) to about 80 or 90 -- but 2000?

3) sort_area_size=256
   This parameter is in bytes, and to quote the Oracle manual:
"SORT_AREA_SIZE --
Range of values:  
Minimum: the value equivalent of six database blocks
Maximum: operating system dependent  
SORT_AREA_SIZE specifies in bytes the maximum amount of memory Oracle will use for a sort. "

Well -- 256 bytes, I guarantee, is way too small, and with your proposed block size it should be at LEAST:
sort_area_size = 98304


4) Why set the large pool yet?
The large pool allocation heap is used in multi-threaded server systems for session memory, by parallel execution for message buffers, and by backup processes for disk I/O buffers. Setting it will not really gain you anything unless you are using parallel query -- and that requires additional setup and planning for proper administration.

5) Why such a large block size?
In NT, I believe the maximum is:
Max DB_BLOCK_SIZE: 16384
I really believe it would be more prudent to have a smaller block size within the guidelines I specified above. There is a trade-off to be considered between block size, average row sizes, and system I/O.

To quote Oracle:
============================
Disadvantages of bigger blocks:
*  Increasing the Oracle block size will require more memory to keep the same amount of buffers in memory in the buffer cache. If you decided to reduce the number of buffers to adjust it to the memory size, you are increasing the possibility of contention in the buffer cache.
 
*  If you have high transactional concurrency to a segment, using bigger blocks is just going to make the concurrency even higher. If you are paying close attention to how you set initrans and maxtrans, this is not a problem. But if you are sloppy and just hike the database block size without thinking about initrans and maxtrans, then you might end up intensifying a performance problem.
 
*  The way Oracle Server writes undo into rollback segment blocks wastes a lot of memory in your database buffer cache if your database blocks are really large, your undo per transaction is really small, and your transaction concurrency is high (i.e., two or more transactions per online rollback segment). The penalty will show up as more paging than you need of database buffers into your buffer cache in the SGA.

Verdict:
*  Use db_block_size = 2048 only if you really know what you are doing. The best use of 2KB database blocks that I know is in stress tests in which you are trying to drive server workload artificially high so you can analyze the bottlenecks less expensively (i.e., without generating mountains of test data and test transactions).
 
*  Use db_block_size = 8192 for most large transactional processing systems. This represents a good balance between advantages and the disadvantage for undo segments.
 
*  Use db_block_size of larger than 8KB for OLTP systems in which your data structures drive the block size to a naturally larger size. This will help to avoid chained and migrated rows.
 
*  Use db_block_size of larger values than 8KB for systems in which your undo generation is not a meaningful part of your workload. Data warehouses fit this profile. With bigger blocks, you reduce total system I/O setup costs dramatically, yet you incur none of the disadvantages that you would incur in an OLTP system because people generally are not executing transactions (inserts, updates, deletes, and selects for update).
============================


The open_cursors value is high -- but that is okay --
Increasing the init.ora parameter 'OPEN_CURSORS' may cause Oracle to pre-allocate more PGA memory (in the SGA if using MTS) to keep track of cursor descriptors.
However, if those cursors are never used, it should not have much effect on performance. In turn, the way the application utilizes the open cursors available will have a much greater impact on performance.



The parameters noriegaa suggested are really cookbook -- given without really looking into your needs.

I believe that with the way you plan to use your server, a small footprint might be more prudent from a performance standpoint. Remember -- if you find the performance bad -- you can always adjust higher -- configuring a system too large to begin with is just throwing resources unnecessarily at a situation without good justification...
 

my opinion,
dBalaski

Author Comment by: newbie123
I appreciate your answer Noriegaa, but I found dbalaski's comments more suitable.

Hats off to you dbalaski, nice explanations and good advice. BTW, you say:
"Parallel server -- having a single box with 2 CPUs does not mean parallel server". I am curious to know what sort of configuration is required to run parallel server: a Unix box with 8/16 CPUs?

Thanks
LVL 9
Expert Comment by: dbalaski
Hi Newbie123

Thanks,

Parallel server means having multiple instances running on 2 or more nodes connecting to a single set of database files.
In essence -- each node can concurrently execute transactions against the same database.

Oracle Parallel Server (OPS) is run in a clustered environment -- under such OSs as OpenVMS, Unix, or NT.
The key here is a clustered environment that is sharing disks. It could be a bunch of 1-CPU servers or multiple-CPU servers (running any of the above OSs).
I feel OpenVMS is the best environment because its clustering is the most advanced and setting up the shared disks is easiest -- in VMS this is very easy, while in Unix or NT it requires setting up raw devices (raw devices are a pain to manage, especially with a database).

OPS is far more work to maintain and performance-tune than just a single-node database -- I managed a large OPS environment for three years -- it requires a lot of care.
(expect approx 1/3 more time in just tuning and management)
--------------------
The other option which was mentioned in the above threads was Parallel Query -- which is not to be confused with Parallel Server.
With Parallel Query, multiple processes work together simultaneously to execute a single SQL statement by dividing the work necessary to execute the statement among multiple processes. This also requires additional care in setting up -- this is why I warned against it for now -- investigate the options once the system grows in size -- it would be a bit premature to do it now with small tables...
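
When you do get to that point, the gist of it is just a degree on the table or a hint plus a couple of init.ora settings -- a sketch only, with a placeholder table name:
     -- let the optimizer parallelize full scans of this table (big_history_table is a placeholder)
     ALTER TABLE big_history_table PARALLEL (DEGREE 2);
     -- or per statement, with a hint
     SELECT /*+ PARALLEL(h, 2) */ COUNT(*) FROM big_history_table h;

     # init.ora
     parallel_min_servers = 0
     parallel_max_servers = 4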

Hope this helps answer your questions,
sincerely,
dBalaski



LVL 2
Expert Comment by: noriegaa
There are currently two or three certified cluster solutions for Win2K (Advanced) Server. HP recently released a 4-node solution, which is top of the line, while IBM has a two-node solution. These are also appropriate for failover solutions. It looks as if some typos were made; I meant processes = 200 and sort_area_size = 256K.

However, processes = 2000 will not affect tuning in Oracle8i at all, in any manner. This is from several days of research in a similar environment, scaling connections from 5 to 100 concurrent connections, and it is also a comment formally made by Oracle Support. Any SORT_AREA_SIZE between 64K and 256K should be all right; I think you have enough memory to use it. I do recommend a block size of 16384, simply because Win2K has the capability to handle it, for both FAT and NTFS formats, and even larger blocks, like 32K. 16K is the minimum block size required for some data warehousing environments, solutions, and tools, including Oracle Warehouse Builder, which cannot be installed otherwise. If you are using Designer or Forms, or any other application or a huge batch process, then my cursor parameter is good. It will not affect tuning, and you will never have to worry about it, since MTS is essentially not needed in the initial configuration. Therefore, my recommended parameters are:

                   open_cursors = 1800
                   db_block_buffers = 24625
                   db_block_size = 16K
                   shared_pool_size = 64M
                   large_pool_size = 32M
                   #java_pool_size = 22M
                   processes = 200
                   sort_area_size = 256K

Good luck...

You may also try parallel solutions using Veritas, which is a leading solution, but check the $$$ before you make any investment.
LVL 6
Expert Comment by: M-Ali
Great stuff, guys. This is a great place to learn.

Just a final :) question -- if I decide to opt in for the "parallel query" option, is it part of the default installation or does it need to be installed as an add-on? Any special settings required?

BTW my database is pretty big, with some tables having a few hundred thousand records. So far we have been using Oracle 7.3.2 -- I just want to make sure we extract every possible drop of juice out of 8.1.7.

Thanx