Solved

Oracle9i not using main memory

Posted on 2006-04-03
Medium Priority
1,180 Views
Last Modified: 2008-02-01
Hello All,

I have the following scenario:
Oracle 9i is installed on a machine with 2 AMD processors (1.3GHz each), 2GB RAM, Linux OS, and a 200 GB hard disk.

Oracle is running but is NOT using the main memory. When we do a bulk load or any db operation, Oracle prefers not to use the main memory, and the read/write operations are very slow. Oracle is damn slow.

Why? Why doesn't Oracle use the main memory?

Regards,
fargo

Question by:fargo
29 Comments
 
LVL 48

Expert Comment

by:schwertner
ID: 16359594
How did you determine that Oracle doesn't use the RAM?
To see how much RAM Oracle uses, invoke Oracle Enterprise Manager
and look at the instance SGA (System Global Area) parameters. There are also GUI-based
advisors that will suggest how to increase/decrease the size of the SGA.
Alternatively you can issue:
SQL> show sga

Total System Global Area  680600380 bytes
Fixed Size                   455484 bytes
Variable Size             360710144 bytes
Database Buffers          318767104 bytes
Redo Buffers                 667648 bytes

Normally there are two components that need adjustment:

Database Buffer Cache
Shared Pool

Increase the Database Buffer Cache and Oracle will work faster.
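To see the current sizes of those two components from SQL*Plus, and to get Oracle's own estimate of how the buffer cache would behave at other sizes, something like the following can be used (a sketch; v$db_cache_advice is populated only when db_cache_advice is ON and the instance has been running for a while):

```sql
-- Current sizes of the two components usually worth adjusting
SQL> show parameter db_cache_size
SQL> show parameter shared_pool_size

-- 9i buffer cache advisory: estimated physical reads at candidate cache sizes
SQL> SELECT size_for_estimate, estd_physical_reads
  2  FROM   v$db_cache_advice
  3  WHERE  name = 'DEFAULT'
  4  AND    advice_status = 'ON';
```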
 
LVL 11

Author Comment

by:fargo
ID: 16359765
Hi,

"select * from v$sga" gives me the following
Total SGA                  208.93 MB
Fixed Size                   450780 bytes
Variable Size             285212672 bytes
Database Buffers          33554432 bytes
Redo Buffers                 667648 bytes

Do you see anything wrong?

Moreover, how do I increase the Database Buffer Cache? Does it mean I need to increase the init parameter named "db_block_buffers"? The db_block_size is set to 8192.

Moreover, is it possible to change the parameters for an Oracle session just for a test, to see the actual performance gain without restarting the db server?

fargo
 
LVL 143

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 16359784
>Moreover, is it possible to change the parameters for an oracle session just for test to see the actual performance gain, without restarting the db server??
no.

>Moreover how to increase the Database Buffer Cache? Does it mean i need to increase the init paramater named "db_block_buffers". The db_block_size is set to 8192.
yes

>Do you see anything wrong?
no

>Oracle is damn slow.
every query, even on very small tables?
Is Oracle the only application on the machine?
 
LVL 11

Author Comment

by:fargo
ID: 16359836
Hi,

Basically the SELECT statements are more or less OK. The only point where I see a problem is when we import some big data into this db, and in some bulk load operations.
Yes, Oracle is the only application running on the server.

In the init.ora file I see that db_block_buffers is 0. What should be the criteria for setting the db_block_buffers value? Moreover, how can I make sure that changing this value will actually increase performance?

regards,
fargo
 
LVL 11

Author Comment

by:fargo
ID: 16360281
Hi

I read that "db_block_buffers is not included in the init.ora by default as it has been deprecated." This means one has to look at the db_cache_size parameter in the init.ora instead. Here is our setting:

DB_CACHE_SIZE=33554432

What next?

fargo
 
LVL 48

Accepted Solution

by:
schwertner earned 2000 total points
ID: 16360868
DB_CACHE_SIZE=33554432

is abnormally low!

It is in fact
DB_CACHE_SIZE=32M

It has to be over 200M to have a normally working instance.
In production it goes higher, but it depends on the RAM you have on the computer.

You also need to leave RAM for the OS.
 
LVL 11

Author Comment

by:fargo
ID: 16360897
Hi,

Yeah, I felt it too, and now I'm going to make it higher, to around 1000 MB.

Following is the check I did:

Total RAM                      -- 2000 MB
less
PGA for 100 users comes to     --  262 MB
20% reserve for OS             --  400 MB

which leaves roughly 1340 MB free for data buffers.

So I decided to set it to 1000 MB.

Is this OK? Or is there any other way to calculate it?

fargo
 
LVL 48

Expert Comment

by:schwertner
ID: 16361069
Overestimating is also wrong. It causes delays because of the so-called latches.
Do it stepwise.
500MB is enough for the first iteration.
After that look in OEM --> Memory (most of the SGA components have advisors, but they need some period of activity to collect statistics).
 
LVL 11

Author Comment

by:fargo
ID: 16361316
I set db_cache_size to 500 MB.

And now I'm importing something like 180 MB of data using a dump. But it takes longer than expected. The memory used by Oracle is still not that much, and I believe the operation is using the disk rather than memory.

Any ideas? What other things should I check?

The OEM advice is giving a constant graph, which means the memory parameters are set correctly. What about the pool sizes (shared, large, java) - do we need to change those settings?

fargo
 
LVL 19

Expert Comment

by:actonwang
ID: 16361364
Hi fargo,

       During your table loading, do you have some constraints on the table, or triggers or indexes? Try to disable them while you import large data.
       Usually it has nothing to do with the db buffer.

Acton
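A sketch of what that looks like in SQL*Plus; the table, constraint, trigger, and index names here are hypothetical:

```sql
SQL> ALTER TABLE big_table DISABLE CONSTRAINT big_table_fk;
SQL> ALTER TRIGGER big_table_trg DISABLE;
SQL> ALTER INDEX big_table_idx UNUSABLE;
SQL> ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... run the import or bulk load ...

SQL> ALTER INDEX big_table_idx REBUILD;
SQL> ALTER TRIGGER big_table_trg ENABLE;
SQL> ALTER TABLE big_table ENABLE CONSTRAINT big_table_fk;
```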
 
LVL 48

Expert Comment

by:schwertner
ID: 16361436
The small SGA prevents the DB from working normally. Bulk loading is a rare event, but the DB has to work without bottlenecks afterwards.
 
LVL 19

Expert Comment

by:actonwang
ID: 16361505
hhmm.. We ran into this a lot because we load a lot of big tables.

fargo, would you check to verify whether I am right? I feel that your table must have some constraints or a trigger on it.

Acton
 
LVL 7

Expert Comment

by:vishal68
ID: 16368083
What is the method you use for bulk loads? From your comments, it looks like Oracle Import. If it is Oracle Import, what are the parameters you are using? The parameters can have a big impact on import performance.

HTH
Vishal
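For classic imp, parameters along these lines are worth trying (the username, password, and file names are hypothetical; commit=y with a large buffer reduces rollback pressure, and indexes=n defers the index builds until after the load):

```
imp system/password file=big_dump.dmp log=imp_big.log \
    fromuser=app touser=app \
    buffer=10485760 commit=y indexes=n
```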
 
LVL 11

Author Comment

by:fargo
ID: 16368170
Hi,

Acton
>>fargo, would you check to verify whether I am right? I feel that your table must have some constraints or a trigger on it.
I agree with you, and it's logical that if we disable the constraints and indexes, the import will work faster.

But friends, the problem is not just refining the import or bulk load, but having a good-performing database setup. With the increase in db_cache_size, I agree that performance is gained... but very marginally.

Example case: I have a dump for a table with around 180,000 rows, and it takes 1.5 minutes to load (disabling the indexes and constraints). That's too much time, I believe. Moreover, in TOAD I see that it gives the following performance warnings...

1) High Parse to Execute ratio
2) High Parse CPU Overhead

P.S.: Please don't relate the above messages to any particular query. I believe some db parameter needs to be tuned.

Thanks
fargo
 
LVL 48

Expert Comment

by:schwertner
ID: 16368259
Hello,

if you hope that you can load 180,000 (if this is not a typo) records in less than 1.5 minutes, then you have to buy a 6-8 processor system and use the parallel options of Oracle. Your computer would have to be a very powerful box to reach that kind of productivity.
I assure you that this is a very good time, and it is not realistic to expect better performance (for this number of records, of course).

About your second issue - the prevalence of hard parses. This is a known issue, and it comes from the fact that Oracle export prepares the insert statements without using bind variables. So every statement requires a hard parse, and this is registered by Oracle.
To improve things (but only to some extent - do not expect miracles) you can ask Oracle to reuse statements and avoid the hard parsing in many cases.
You have to change the CURSOR_SHARING parameter in your SPFILE:
set it to 'SIMILAR'
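The change itself is a one-liner in SQL*Plus (assuming the instance uses an spfile):

```sql
-- SIMILAR lets statements that differ only in literal values
-- share a cursor when that does not change the execution plan
SQL> ALTER SYSTEM SET cursor_sharing = 'SIMILAR' SCOPE=BOTH;

SQL> show parameter cursor_sharing
```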
 
LVL 11

Author Comment

by:fargo
ID: 16368315
Hi,

CURSOR_SHARING at the moment is set to EXACT, and you propose setting it to SIMILAR... correct?
What does "PCTFREE too low" for a table indicate?

Do you think I should change shared_pool_size or java_pool_size or large_pool_size? I still see that during bulk operations Oracle doesn't really make good use of the CPU or main memory.

Thx
fargo
 
LVL 11

Author Comment

by:fargo
ID: 16368340
Also, could db_block_size and db_file_multiblock_read_count have an effect? They are presently set to:

db_block_size = 8192
db_file_multiblock_read_count = 32

regards,
fargo
 
LVL 48

Expert Comment

by:schwertner
ID: 16368348
Yes, set it to SIMILAR. This will improve to some extent the reusability of the INSERT statements.

PCTFREE (default 10%) is the free space left in every block for further growth of the rows placed in the block. This is done to avoid moving a row to another block and creating a chain of disk pointers.

This parameter will not help you. I think you can concentrate on the DIRECT path option of export, which skips the SQL engine of Oracle and so avoids the parsing overhead. But I have never used it, and details can be found in the Oracle docs - "Utilities".
 
LVL 48

Expert Comment

by:schwertner
ID: 16368362
Forget this. It is for sequential reads. The block size is good, and the blocks are buffered in the DB cache.
Please do not panic! Your system works fine. You can damage Oracle if you make changes to the SPFILE under stress.
 
LVL 11

Author Comment

by:fargo
ID: 16369104
Hi schwertner,

Thanks for your inputs. I did make the changes as discussed here. But I still suspect the performance is not good (why? because I'm comparing it with the performance on other machines). In my opinion the I/O, the read/write operations, are not performing well.

>>Please do not panic! Your system works fine.
No no, I'm not in a panic state... the only thing I'm trying to do is get an understanding of the issue.

Thanks.
fargo
 
LVL 48

Expert Comment

by:schwertner
ID: 16369917
Tablespaces should be placed on different disks (4 at least); also check for RAID problems, etc.

If applications do not use bind variables - that is the next problem.

You have to run STATSPACK to collect statistics and analyze the wait events.
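A sketch of the STATSPACK routine, using the scripts shipped under $ORACLE_HOME/rdbms/admin (run as a suitably privileged user; spcreate.sql creates the PERFSTAT schema):

```sql
-- One-time installation
SQL> @?/rdbms/admin/spcreate.sql

-- Snapshot before and after the slow operation
SQL> EXECUTE statspack.snap;
-- ... run the bulk load ...
SQL> EXECUTE statspack.snap;

-- Report on the interval between two snapshot IDs
SQL> @?/rdbms/admin/spreport.sql
```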
 
LVL 11

Author Comment

by:fargo
ID: 16386664
Hi,

My problem as stated is not completely solved, but all your comments were helpful.

Can anyone point me to some good links regarding Oracle init parameter settings and tuning (except the Oracle documentation - that goes very deep)?

thx
fargo
 
LVL 48

Expert Comment

by:schwertner
ID: 16389435
Thanks Fargo,

but what you want is a hard task.

On CURSOR_SHARING alone one could find 20-page articles on OTN.

The main trio is:

DB_CACHE_SIZE
SHARED_POOL_SIZE
LOG_BUFFER (the redo log buffer) - check whether it is big enough.
 
LVL 11

Author Comment

by:fargo
ID: 16415539
hi schwertner, the db has not been performing well since we changed the filesystem from ext2 to reiserfs. Could this be the reason? If yes, why? I can open up a new question if you think the discussion will be that long.
 
LVL 48

Expert Comment

by:schwertner
ID: 16417335
Hi Fargo, I am in Germany now. Everything in my small project works fine, so I can help you.

The filesystem cannot cause a delay in the DB.
But the DBA should regularly collect and store statistics about the DB.
This is a procedure that should be scheduled to run,
e.g., once a week.
If you need directions I can post the script.

Also (I know you use SuSE Linux), on the Linux machine
do not install application servers of any kind, and especially not the SAP Appl. Server.
An AS will load the machine heavily... We had a bad experience making this mistake.
 
LVL 11

Author Comment

by:fargo
ID: 16417427
Thanks for your reply. Where in Germany? I am not a DBA, but I'm trying to become one. It would be good if you could post the script. I am not sure what is going wrong with the machine; the db setup seems to be fine. I am completely stumped at the moment.
 
LVL 48

Expert Comment

by:schwertner
ID: 16417496
I am currently in Walldorf, Baden-Württemberg.
The SAP headquarters is 100 meters from our company.

As user SYS:

-- Wrapper that gathers optimizer statistics for the whole database
CREATE OR REPLACE PROCEDURE compute_statistics IS
BEGIN
   dbms_stats.gather_database_stats(cascade => TRUE);
END compute_statistics;
/

-- Run it once now
EXECUTE compute_statistics;

-- Schedule the job to repeat weekly at midnight
SET SERVEROUTPUT ON
SET LINESIZE 10000
VARIABLE x NUMBER
BEGIN
   DBMS_OUTPUT.enable(100000);
   dbms_job.submit(:x, 'compute_statistics;', trunc(SYSDATE), 'trunc(SYSDATE + 7)');
   COMMIT;
   dbms_output.put_line(TO_CHAR(:x));
END;
/
 
LVL 11

Author Comment

by:fargo
ID: 16417542
Thank you very much. I live in Karlsruhe, also in Baden-Württemberg.

I will check the stats.
 
LVL 48

Expert Comment

by:schwertner
ID: 16417930
Interesting!
We also use SuSE Linux and Oracle.
Probably we can stay in contact.
My email is in my profile.
Greetings!

Question has a verified solution.