init.ora values

newbie123 asked:

Our server configuration is as follows: dual PIII 700 MHz Xeon processors, 1 GB RAM, 3 x 18 GB SCSI hard disks.

We plan to use Win2k as our OS. The server will work primarily as a database server (Oracle 8.1.7) but would also function as a file server.

There are about 50 Oracle users, with about 30-40 concurrently using our application developed in Developer/2000 (shortly to be upgraded to 6i).

I need to know the optimum parameter values in the init.ora file for these:
open_cursors, db_block_buffers, db_block_size, shared_pool_size, large_pool_size, processes, sort_area_size and any other relevant parameters.

I am also wondering if we can use the PARALLEL option, since we have 2 CPUs.

What are the tradeoffs for using archived-redo-logs, and will it be useful for our scenario?

My intention is to learn as much as I can without causing performance lags. So any and all help/advice will be appreciated.

ASKER CERTIFIED SOLUTION
dbalaski
noriegaa

I recommend the following parameters for a large configuration:

open_cursors = 1800
db_block_buffers = 24625
db_block_size = 16384
shared_pool_size = 65536000
large_pool_size = 32768000
#java_pool_size = 22102400
processes = 2000
sort_area_size = 256

This will leave about half a gigabyte free on the server for other purposes, and helps leverage its performance.  If your database is not really that large, then you may choose to reduce the number of buffers accordingly.  A large block size will help you with any data warehousing needs, such as OWB.
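
As a rough check of the arithmetic behind that (approximate figures, derived from the parameters above):

    24625 buffers * 16384 bytes/block =  403,456,000 bytes  (~385 MB buffer cache)
    shared_pool_size                  =   65,536,000 bytes  (~63 MB)
    large_pool_size                   =   32,768,000 bytes  (~31 MB)
    -----------------------------------------------------------------
    approximate SGA                   =  501,760,000 bytes  (~479 MB)

so roughly half of the 1 GB of RAM remains for the OS and the file-server duties.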

It is a terrific idea to use the parallel server.

I also recommend that you run Oracle Expert from OEM once you start using your database, to increase performance and improve the quality of your database services.

Anthony

These values are "arbitrary," since I do not know all the details, so you may need to adjust them yourself.
parallel_server_instances = 7

parallel_min_servers = 1

parallel_max_servers = 5

It is always good to use archivelog mode, even in some development and QA environments.
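
For reference, switching an 8i database into archivelog mode looks roughly like this -- a minimal sketch, where the destination directory and format are only example values to adjust for your own layout:

    # init.ora -- example values only
    log_archive_start  = true              # start the ARCH process automatically
    log_archive_dest   = "D:\oracle\arch"  # example destination directory
    log_archive_format = arc%s.arc         # %s = redo log sequence number

    -- then from Server Manager / SQL*Plus, connected as internal/SYSDBA:
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;

The trade-off is extra disk space and some I/O overhead for writing the archived logs; in exchange you can recover to a point in time instead of only to your last full backup.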


Make it count!

Best wishes with your project...
I really don't agree with (or understand) noriegaa's proposed answer,
especially since newbie123's server is going to be used as a file server as well.
(I do not think you should make the Oracle database server node double as a file server -- I've always found that dedicating a server to a single task, either as a database server or as a file server, leads to better economies of scale.)

1) Parallel server --   having a single box with 2 CPUs does not mean parallel server -- I think you mean parallel query.

2) Why PROCESSES = 2000??
  He is going to have approximately 40 concurrent users out of 50.
  I can see doubling that (plus adding the background processes) to about 80 or 90 -- but 2000?  See the sketch just below.
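
Something like this would be a more reasonable starting point (a rough sizing sketch -- the numbers are illustrative, not measured):

    # ~50 user sessions at peak + ~10 background processes + headroom
    processes = 90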

3) sort_area_size=256
   This parameter is in bytes, and to quote the Oracle manual:
"SORT_AREA_SIZE --
Range of values:  
Minimum: the value equivalent of six database blocks
Maximum: operating system dependent  
SORT_AREA_SIZE specifies in bytes the maximum amount of memory Oracle will use for a sort. "

 Well -- 256 bytes, I guarantee, is way too small, and with your proposed blocksize of 16384 it should be at LEAST six blocks, i.e. 6 * 16384 = 98304 bytes:
sort_area_size = 98304


4) Why set the large pool yet?
The large pool allocation heap is used in multi-threaded server systems for session memory, by parallel execution for message buffers, and by backup processes for disk I/O buffers.   Setting it will not really gain you anything unless you are using parallel query -- and that requires additional setup and planning for proper administration.

5) Why such a large blocksize?
In NT,  I believe the maximum is:
Max DB_BLOCK_SIZE:  16384
I really believe it would be more prudent to have a smaller blocksize, within the guidelines I specified above.   There is a trade-off to be considered between blocksize, average row sizes, and system I/O.

To quote Oracle:
============================
Disadvantages of bigger blocks:
*  Increasing the Oracle block size will require more memory to keep the same amount of buffers in memory in the buffer cache. If you decided to reduce the number of buffers to adjust it to the memory size, you are increasing the possibility of contention in the buffer cache.
 
*  If you have high transactional concurrency to a segment, using bigger blocks is just going to make the concurrency even higher. If you are paying close attention to how you set initrans and maxtrans, this is not a problem. But if you are sloppy and just hike the database block size without thinking about initrans and maxtrans, then you might end up intensifying a performance problem.
 
*  The way Oracle Server writes undo into rollback segment blocks wastes a lot of memory in your database buffer cache if your database blocks are really large, your undo per transaction is really small, and your transaction concurrency is high (i.e., two or more transactions per online rollback segment). The penalty will show up as more paging than you need of database buffers into your buffer cache in the SGA.

Verdict:
*  Use db_block_size = 2048 only if you really know what you are doing. The best use of 2KB database blocks that I know is in stress tests in which you are trying to drive server workload artificially high so you can analyze the bottlenecks less expensively (i.e., without generating mountains of test data and test transactions).
 
*  Use db_block_size = 8192 for most large transactional processing systems. This represents a good balance between advantages and the disadvantage for undo segments.
 
*  Use a db_block_size larger than 8KB for OLTP systems in which your data structures drive the block size to a naturally larger size. This will help to avoid chained and migrated rows.
 
*  Use db_block_size of larger values than 8KB for systems in which your undo generation is not a meaningful part of your workload. Data warehouses fit this profile. With bigger blocks, you reduce total system I/O setup costs dramatically, yet you incur none of the disadvantages that you would incur in an OLTP system because people generally are not executing transactions (inserts, updates, deletes, and selects for update).
============================
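
To make the quote's INITRANS/MAXTRANS point concrete, a hypothetical example (the table name and values are made up -- the right settings depend on your actual per-block concurrency):

    -- pre-allocate transaction slots in each newly formatted block for
    -- 4 concurrent transactions, allowing growth up to 16:
    ALTER TABLE orders INITRANS 4 MAXTRANS 16;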


The open_cursors are high -- but that is okay --
Increasing the init.ora parameter 'OPEN_CURSORS' may cause Oracle to pre-allocate more PGA memory (in the SGA if using MTS) to keep track of cursor descriptors.  
However, if those cursors are never used, it should not have much effect on performance. In turn, the way the application utilizes the number of open cursors available will have a much greater impact on performance.
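
If you want to watch how close the application actually gets to that limit, here is a quick sketch against v$open_cursor (which shows the cursors each session currently has open or cached):

    SELECT user_name, sid, COUNT(*)
      FROM v$open_cursor
     GROUP BY user_name, sid
     ORDER BY 3 DESC;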



The parameters noriegaa suggested are really cookbook values -- given without really looking into your needs.

I believe that, given how your server will be used, a smaller footprint might be more prudent from a performance standpoint.   Remember -- if you find the performance bad, you can always adjust upward -- configuring a system too large to begin with is just throwing resources unnecessarily at a situation without good justification...
 

my opinion,
dBalaski
newbie123 (ASKER)

I appreciate your answer Noriegaa, but I found dbalaski's comments more suitable.

Hats off to you dbalaski -- nice explanations and good advice. BTW, you say:
"Parallel server --   having a single box with 2 CPUs does not mean parallel server". I am curious to know what sort of configuration is required to run parallel server: a Unix box with 8/16 CPUs?

Thanks
Hi Newbie123,

Thanks.

Parallel Server means having multiple instances running on 2 or more nodes, all connecting to a single set of database files.
In essence -- each node can concurrently execute transactions against the same database.

Oracle Parallel Server (OPS) runs in a clustered environment, under such operating systems as OpenVMS, Unix, or NT.
The key here is a clustered environment that is sharing disks. It could be a bunch of 1-CPU servers, or multiple-CPU servers (running any of the above O/Ss).
I feel OpenVMS is the best environment because its clustering is the most advanced and the shared disks are easy to set up -- in VMS this is very easy, while in Unix or NT it requires setting up RAW devices (raw devices are a pain to manage, especially with a database).

OPS is far more work to maintain and performance-tune than a single-node database -- I managed a large OPS environment for three years -- it requires a lot of care.
(Expect approximately 1/3 more time just in tuning and management.)
--------------------
The other option mentioned in the thread above was Parallel Query -- which is not to be confused with Parallel Server.
With Parallel Query, multiple processes work together simultaneously to execute a single SQL statement, dividing the necessary work among themselves.   This also requires additional care in setting up -- which is why I warned against it for now.  Investigate the option once the system grows in size; it would be a bit premature to do it now with small tables... (see the sketch below).
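
For a feel of what it eventually looks like -- a minimal sketch, where the table name and the degree of 2 (natural for a 2-CPU box) are only illustrative:

    # init.ora -- example pool of parallel execution slave processes
    parallel_min_servers = 0
    parallel_max_servers = 4

    -- invoke parallelism per statement with a hint:
    SELECT /*+ FULL(t) PARALLEL(t, 2) */ COUNT(*)
      FROM big_table t;

    -- or set a default degree on the table itself:
    ALTER TABLE big_table PARALLEL (DEGREE 2);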

Hope this helps answer your questions,
sincerely,
dBalaski



There are currently two or three certified cluster solutions for Win2K (Advanced) Server. HP recently released a 4-node solution, which is top of the line, while IBM has a two-node solution.  These are also appropriate as failover solutions.  It looks as if some typos were made: I meant processes = 200 and sort_area_size = 256K.

However, processes = 2000 will not affect the tuning in Oracle8i at all, in any manner.  This comes from several days of research in a similar environment, scaling from 5 to 100 concurrent connections, and it is also a comment formally made by Oracle Support.  Any SORT_AREA_SIZE between 64K and 256K should be all right; I think you have enough memory to use it.

I do recommend a block size of 16384, simply because Win2K has the capability to handle it, for both FAT and NTFS formats, and even larger blocks, like 32K.  16K is the minimum block size required for some data warehousing environments and tools, including Oracle Warehouse Builder, which cannot be installed otherwise.  If you are using Designer or Forms, or any other application or a huge batch process, then my cursor parameter is good.  It will not affect tuning, and you will never have to worry about it, since MTS is essentially not needed in the initial configuration.  Therefore, my recommended parameters are:

                   open_cursors     = 1800
                   db_block_buffers = 24625
                   db_block_size    = 16K
                   shared_pool_size = 64M
                   large_pool_size  = 32M
                   #java_pool_size  = 22M
                   processes        = 200
                   sort_area_size   = 256K

Good luck...

You may also try parallel solutions using Veritas -- a leading solution -- but check the $$$ before you make any investment.
Great stuff, guys. This is a great place to learn.

Just a final :) question - if I decide to opt in for the "parallel query" option, is it part of the default installation or does it need to be installed as an add-on? Any special settings required?

BTW, my database is pretty big, with some tables having a few hundred thousand records. Until now we have been using Oracle 7.3.2 -- I just want to make sure we extract every possible drop of juice out of 8.1.7.

Thanx