Oracle9i not using main memory

Hello All,

I have the following scenario:
Oracle 9i is installed on a machine with two AMD processors (1.3 GHz each), 2 GB RAM, a 200 GB hard disk, and a Linux OS.

Oracle is running but is NOT using the main memory. When we do a bulk load or any DB operation, Oracle seems to avoid the main memory, and the read/write operations are very slow. Oracle is painfully slow.

Why doesn't Oracle use the main memory?

Regards,
fargo

fargoAsked:

schwertnerCommented:
How did you determine that Oracle doesn't use the RAM?
To see how much RAM Oracle uses, open Oracle Enterprise Manager
and look at the instance's SGA (System Global Area) parameters. There are also GUI-based
advisors that will suggest how to increase or decrease the size of the SGA.
Alternatively, you can issue:
SQL> show sga

Total System Global Area  680600380 bytes
Fixed Size                   455484 bytes
Variable Size             360710144 bytes
Database Buffers          318767104 bytes
Redo Buffers                 667648 bytes

Normally there are two components that need adjustment:

Database Buffer Cache
Shared Pool

Increase the Database Buffer Cache and Oracle will work faster.
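
For reference, a quick way to inspect these from SQL*Plus is sketched below; the views and SHOW PARAMETER commands are standard, but the values you see will of course differ:

```sql
-- Current buffer cache and shared pool settings
SHOW PARAMETER db_cache_size
SHOW PARAMETER shared_pool_size

-- Breakdown of SGA memory by component
SELECT pool, name, bytes
FROM   v$sgastat
ORDER  BY pool, bytes DESC;
```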
fargoAuthor Commented:
Hi,

"select * from v$sga" gives me the following
Total SGA                  208.93 MB
Fixed Size                   450780 bytes
Variable Size             285212672 bytes
Database Buffers          33554432 bytes
Redo Buffers                 667648 bytes

Do you see anything wrong?

Moreover, how do I increase the Database Buffer Cache? Does that mean I need to increase the init parameter named "db_block_buffers"? The db_block_size is set to 8192.

Moreover, is it possible to change the parameters for an Oracle session, just as a test to see the actual performance gain, without restarting the DB server?

fargo




Guy Hengel [angelIII / a3]Billing EngineerCommented:
>Moreover, is it possible to change the parameters for an oracle session just for test to see the actual performance gain, without restarting the db server??
no.

>Moreover how to increase the Database Buffer Cache? Does it mean i need to increase the init paramater named "db_block_buffers". The db_block_size is set to 8192.
yes

>Do you see anything wrong?
no

>Oracle is damn slow.
Every query, even on very small tables?
Is Oracle the only application on the machine?

fargoAuthor Commented:
Hi,

Basically, the SELECT statements are more or less OK. The only point where I see a problem is when we import some big data into this DB, and with some bulk-load operations.
Yes, Oracle is the only application running on the server.

In the init.ora file I see that db_block_buffers is 0. What should be the criteria for setting the db_block_buffers value? Moreover, how can I make sure that changing this value will actually increase performance?

regards,
fargo
fargoAuthor Commented:
Hi

I read that "db_block_buffers is not included in init.ora by default as it has been deprecated." This means one has to look at the db_cache_size parameter in init.ora instead. Here is our setting:

DB_CACHE_SIZE=33554432

What next?

fargo
schwertnerCommented:
DB_CACHE_SIZE=33554432

is abnormally low!

It is in fact
DB_CACHE_SIZE=32M

It has to be over 200M to have a normally working instance.
In production it goes higher, but it depends on the RAM you have on the computer.

You also need RAM for the OS.
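
Assuming the instance was started with an spfile (the 9i default), the cache can be resized without a restart; the 256M value below is only illustrative, and the new value must fit within SGA_MAX_SIZE:

```sql
-- Raise the buffer cache; SCOPE=BOTH changes the running instance
-- and persists the new value in the spfile
ALTER SYSTEM SET db_cache_size = 256M SCOPE = BOTH;
```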
fargoAuthor Commented:
Hi,

Yeah, I felt it too, and now I am going to raise it to around 1000 MB.

Following is the check i did

Total RAM -- 2000 MB
less
PGA for 100 users comes to -- 262 MB
20% reserve for the OS -- 400 MB

which leaves about 1340 MB free for data buffers.

So I decided to set it to 1000 MB.

Is this OK? Or is there another way to calculate it?

fargo
schwertnerCommented:
Overestimating is also wrong. It causes delays too, because of the so-called latches.
Do it stepwise.
500 MB is enough for the first iteration.
After that, look in OEM --> Memory (most SGA components have advisors, but they need some activity time to collect statistics).
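
The same advisor data can be queried directly; this is a sketch that assumes DB_CACHE_ADVICE = ON and some accumulated workload, with the 8192 block size mentioned earlier:

```sql
-- Predicted physical reads for candidate buffer cache sizes
SELECT size_for_estimate AS cache_mb,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
  AND  block_size = 8192
ORDER  BY size_for_estimate;
```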
fargoAuthor Commented:
I set db_cache_size to 500 MB.

And now I am importing something like 180 MB of data from a dump, but it takes longer than expected. The memory used by Oracle is still not that high, and I believe the operation is using the disk rather than memory.

Any ideas? What other things should I check?

The OEM advisor is showing a flat graph, meaning the memory parameters are set correctly. What about the pool sizes (shared, large, java); do we need to change those settings?

fargo

actonwangCommented:
Hi fargo,

       During your table loading, do you have constraints, triggers, or indexes on the table? Try disabling them while you import large data.
       Usually it has nothing to do with the DB buffer.

Acton
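
A sketch of what that looks like; the object names (my_table, my_fk, my_trg, my_idx) are hypothetical placeholders:

```sql
-- Before the load: disable constraint and trigger, mark the index unusable
ALTER TABLE my_table DISABLE CONSTRAINT my_fk;
ALTER TRIGGER my_trg DISABLE;
ALTER INDEX my_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... run the import here ...

-- After the load: rebuild and re-enable
ALTER INDEX my_idx REBUILD;
ALTER TRIGGER my_trg ENABLE;
ALTER TABLE my_table ENABLE CONSTRAINT my_fk;
```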
schwertnerCommented:
The small SGA prevents the DB from working normally. Bulk loading is a rare event, but the DB has to work without bottlenecks afterwards.
actonwangCommented:
Hmm... we ran into this a lot because we load a lot of big tables.

fargo, could you verify whether I am right? I suspect your table must have some constraints or a trigger on it.

Acton
vishal68Commented:
What method do you use for bulk loads? From your comments, it looks like Oracle Import. If it is Oracle Import, what parameters are you using? The parameters can have a big impact on import performance.

HTH
Vishal
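
For example, an imp parameter file along these lines often helps with large loads; the file names and buffer size are illustrative, while BUFFER, COMMIT, INDEXES, IGNORE, and FEEDBACK are standard imp parameters:

```
FILE=export.dmp
LOG=import.log
BUFFER=10485760
COMMIT=Y
INDEXES=N
IGNORE=Y
FEEDBACK=100000
```

A large BUFFER with COMMIT=Y reduces rollback pressure, and INDEXES=N defers index creation until after the rows are in.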
fargoAuthor Commented:
Hi,

Acton
>>fargo, could you verify whether I am right? I feel that your table must have some constraints or a trigger on it.
I agree with you, and it's logical that if we disable the constraints and indexes, the import will work faster.

But friends, the problem is not just refining the import or bulk load, but having a well-performing database setup overall. With the increase in db_cache_size, I agree that performance is gained... but very marginally.

Example case: I have a dump for a table with around 180,000 rows, and it takes 1.5 minutes to load (with indexes and constraints disabled). That is too much time, I believe. Moreover, in TOAD I see the following performance warnings:

1) High Parse to Execute ratio
2) High Parse CPU Overhead

P.S.: Please don't relate the above messages to any particular query. I believe some DB parameter needs to be tuned.

Thanks
fargo
schwertnerCommented:
Hello,

If you hope to load 180,000 records (if this is not a typo) in less than 1.5 minutes, then you would have to buy a 6-8 processor system and use the parallel options of Oracle. I guess your computer is already a very powerful box to reach this productivity.
I assure you that this is a very good time, and it is not realistic to expect better performance (for this number of records, of course).

About your second issue - the prevalence of hard parses. This is a known issue; it stems from the fact that Oracle's export prepares the INSERT statements without using bind variables. So every statement incurs a hard parse, and this is registered by Oracle.
To improve things (but only to some extent; do not expect miracles), you can ask Oracle to reuse statements and avoid the hard parsing in many cases.
You have to change the CURSOR_SHARING parameter in your SPFILE:
set it to 'SIMILAR'
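
Sketched below; this assumes an spfile so the change can also be persisted:

```sql
-- Check the current value (EXACT is the default)
SHOW PARAMETER cursor_sharing

-- Let similar statements share cursors despite different literals
ALTER SYSTEM SET cursor_sharing = 'SIMILAR' SCOPE = BOTH;
```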
fargoAuthor Commented:
Hi,

CURSOR_SHARING at the moment is set to EXACT, and you propose setting it to SIMILAR... correct?
What does "PCTFREE too low" for a table indicate?

Do you think I should change the shared_pool_size, java_pool_size, or large_pool_size? I still see that during bulk operations Oracle doesn't really make good use of the CPU or main memory.

Thx
fargo
fargoAuthor Commented:
Also, can db_block_size and db_file_multiblock_read_count have an effect? They are presently set to:

db_block_size = 8192
db_file_multiblock_read_count = 32

regards,
fargo
schwertnerCommented:
Yes, set it to SIMILAR. This will improve, to some extent, the reusability of the INSERT statements.

PCTFREE (default 10%) is the free space left in every block for future extension of the rows placed in the block. This is done to avoid moving a row to another block and creating a chain of disk pointers.

This parameter will not help you, though. I think you could concentrate on a DIRECT import, which avoids Oracle's SQL engine and so avoids the parsing overhead. But I have never used it; details can be found in the Oracle docs ("Utilities").
schwertnerCommented:
Forget this one. It matters for sequential reads. The block size is good, and the blocks are buffered in the DB cache.
Please do not panic! Your system works fine. You can damage Oracle if you make changes to the SPFILE under stress.
fargoAuthor Commented:
Hi schwertner,

Thanks for your inputs. I did make the changes as discussed here, but I still suspect the performance is not good (why? because I am comparing it with the performance of other machines). In my opinion, the I/O (read/write) operations are not performing well.

>>Please do not panic! Your system works fine.
No, no, I am not panicking... I am only trying to understand the issue.

Thanks.
fargo
schwertnerCommented:
Tablespaces should be placed on different disks (4 at least); also check for RAID problems, etc.

If the applications do not use bind variables, that is the next problem.

You have to run STATSPACK to collect statistics and analyze the wait events.
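
The standard 9i STATSPACK sequence is roughly this (a sketch; spcreate.sql prompts for the PERFSTAT password and tablespaces):

```sql
-- Install STATSPACK (run as SYSDBA; creates the PERFSTAT schema)
@?/rdbms/admin/spcreate.sql

-- Take a snapshot before and after the workload of interest
CONNECT perfstat
EXECUTE statspack.snap;
-- ... run the bulk load ...
EXECUTE statspack.snap;

-- Produce a report between two snapshot IDs (it will prompt for them)
@?/rdbms/admin/spreport.sql
```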
fargoAuthor Commented:
Hi,

My problem, as stated, is not completely solved, but all your comments were helpful.

Can anyone point me to some good links on setting and tuning the Oracle init parameters (except the Oracle documentation; that is very deep)?

thx
fargo
schwertnerCommented:
Thanks Fargo,

but what you want is a hard task.

On CURSOR_SHARING alone, one can find 20-page articles on OTN.

The main trio is:

DB_BUFFER_CACHE
SHARED_POOL
REDO LOG CACHE - make sure it is big enough.
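
One way to check the redo log buffer is to watch these statistics; if they keep climbing during the load, LOG_BUFFER is too small:

```sql
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo log space requests',
                'redo buffer allocation retries');

SHOW PARAMETER log_buffer
```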
fargoAuthor Commented:
hi schwertner, the DB has not been performing well since we changed the filesystem from ext2 to ReiserFS. Could this be the reason? If yes, why? I can open a new question if you think it's that long a discussion.
schwertnerCommented:
Hi Fargo, I am in Germany now. Everything in my small project works fine so I can help you.

The filesystem cannot cause delays in the DB.
But the DBA should regularly collect and store statistics about the DB.
This is a procedure that should be scheduled to run,
e.g., once a week.
If you need directions, I can post the script.

Also (I know you use SuSE Linux), on the Linux machine
do not install application servers of any kind, especially the SAP Application Server.
An AS will load the machine heavily... We had a bad experience making this mistake.

fargoAuthor Commented:
Thanks for your reply. Where in Germany? I am not a DBA, but I am trying to become one. It would be good if you could post the script. I am not sure what is going wrong with the machine; the DB setup seems to be fine. I am completely stumped at the moment.
schwertnerCommented:
I am currently in Walldorf, Baden-Württemberg.
The SAP headquarters is 100 meters from our company.

As user SYS:

CREATE OR REPLACE PROCEDURE compute_statistics IS
BEGIN
   -- Gather optimizer statistics for the whole database,
   -- including indexes (cascade => true)
   dbms_stats.gather_database_stats(cascade => true);
END compute_statistics;
/

-- Run it once immediately
execute compute_statistics;

-- Schedule the job to run weekly

set serveroutput on
set linesize 10000
variable x number;
begin
   DBMS_OUTPUT.enable(100000);
   -- Start at midnight today, repeat every 7 days
   dbms_job.submit(:x, 'compute_statistics;', trunc(sysdate), 'trunc(sysdate + 7)');
   commit;
   -- Print the assigned job number
   dbms_output.put_line(TO_CHAR(:x));
end;
/
fargoAuthor Commented:
Many thanks. I live in Karlsruhe, also in Baden-Württemberg.

I will check the stats.
schwertnerCommented:
Interesting!
We also use SuSE Linux and Oracle.
Perhaps we can stay in touch.
My email address is in my profile.
Regards!