Cache Advantage and disadvantage for big result set

Hi friends,

I have a SQL query that returns a very large result set. It reads a single table with more than 10 million rows. I want to try the CACHE mechanism on that table. Will it degrade the performance of the database? More than 100 users at a time are running this SQL. Please give me a solution for this. Also, please explain the advantages of the LOGGING and NOLOGGING concepts.

Thank you and good luck.

Regards, Jim
jimaricat072100 Asked:

hberndt Commented:
Hi,

The main difference between LOGGING and NOLOGGING when creating a table is that with NOLOGGING no redo is written for the operation that creates the table (for example CREATE TABLE ... AS SELECT). With LOGGING, the table can be re-created during media recovery from the redo logs. With NOLOGGING, the operation is unrecoverable, so an extra backup afterwards is necessary.
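A minimal sketch of the difference, assuming an Oracle 8i-style environment; the table and column names here are invented for illustration, not taken from the thread:

```sql
-- NOLOGGING: the CREATE TABLE ... AS SELECT generates (almost) no redo,
-- so it runs faster, but the new table cannot be rebuilt by media recovery.
CREATE TABLE word_index_copy NOLOGGING
AS SELECT word, record_id FROM word_index;

-- Take a fresh backup now, or switch the attribute back so that
-- subsequent DML is logged normally:
ALTER TABLE word_index_copy LOGGING;
```

After any NOLOGGING operation, a fresh backup is the only protection for that data, because the redo needed to recover it was never written.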

NOCACHE specifies that blocks retrieved by a full table scan are placed at the least recently used end of the LRU list in the buffer cache.

It sounds like a full table scan in your case. I have never noticed a big degradation in my databases from this. The LRU list tells Oracle which blocks were least recently used (as the name says :-) ); those blocks are overwritten first. Using CACHE, the blocks stay longer in the buffer cache.

You do have to make sure there is enough memory for the buffer cache.

IMHO:
Check whether it is possible to reduce the number of retrieved rows. Often users do not need that much data, and this will really improve performance.
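For example, a sketch of returning only the first N rows (an inline view with ORDER BY requires Oracle 8i or later; the table and column names are invented for illustration):

```sql
-- Return only the first 100 matches instead of the whole result set.
-- ROWNUM is applied after the inner ORDER BY, so these are the
-- "top 100" rows by record_id, not 100 arbitrary rows.
SELECT word, record_id
FROM (SELECT word, record_id
        FROM word_index
       WHERE word = 'oracle'
       ORDER BY record_id)
WHERE ROWNUM <= 100;
```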

Rgds,
Holger
schwertner Commented:
The table (as you describe it) is too big to be cached in RAM.
In most cases the users do not use all rows, or even all columns, of the table. If that is the case, try to investigate the users' needs and create some frequently used views over this table. The views should reduce the amount of selected data, and the users should use the views instead of the whole table. Some views are not updatable, so if you use views for inserts and updates you will face some trouble, but we know how to overcome that.
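A sketch of such a view; all object names and the filter predicate are invented for illustration and stand in for whatever a real user group actually needs:

```sql
-- A narrow view over the big table for one group of users;
-- the WHERE clause should reflect that group's real access pattern.
CREATE OR REPLACE VIEW recent_words_v AS
SELECT word, record_id
  FROM word_index
 WHERE record_id > 9000000;

-- Users then query the view instead of the 10-million-row table:
-- SELECT record_id FROM recent_words_v WHERE word = 'oracle';
```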
jimaricat072100 Author Commented:
Thank you for the feedback and suggestions.

This table is updated daily, and I did not find any frequently used WHERE conditions.
The table has two columns: one for the word, the other for a record id. It is basically a search-engine table: the user can search for any word, and all the records for that word are brought up.

I am using a bitmap index, but for big result sets it is very slow. I also thought of creating views as schwertner suggested, but for big result sets the views take time too.

I increased SORT_AREA_SIZE to 5 MB, but the performance is the same. The main problem with this SQL is that it sorts on one field. If I eliminate the sort it is better, but still not fast enough. And the time gained by eliminating the sort is lost again when sorting programmatically in Java, so the overall speed is the same. I am really unsure whether to increase the memory or the processor. Please advise.

thanx again
Jim
schwertner Commented:
Try to increase the SGA.

To see how full the buffer cache gets, query x$bh:
SELECT state, COUNT(*)
FROM   x$bh
GROUP BY state;

     STATE   COUNT(*)
---------- ----------
         0        371
         1        429

In the result above:
Total DB_BLOCK_BUFFERS = 800
Buffers that have been used = 429
Buffers that have NOT been used (state 0, free) = 371

Try to increase DB_BLOCK_BUFFERS so that Oracle works in RAM as much as possible instead of doing disk I/O.
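A sketch of the change, assuming an Oracle 8i-style init.ora (the value is illustrative only; in Oracle 9i and later, DB_CACHE_SIZE replaces this parameter):

```
# init.ora fragment -- the value below is illustrative, not a recommendation.
# DB_BLOCK_BUFFERS is a count of buffers, so the cache size in bytes is
# DB_BLOCK_BUFFERS * DB_BLOCK_SIZE; the instance must be restarted to apply it.
db_block_buffers = 16000
```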



Keep the table in RAM longer by marking it CACHE (note this is not a true pin: it only places full-scan blocks at the most recently used end of the LRU list):

ALTER TABLE test_tab CACHE;

Another approach is to partition this table based on a criterion known only to you, since you know how the table is organized.
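A hedged sketch of range partitioning on the word column (Oracle 8 syntax; all names, column types, and partition boundaries are invented, and the real partitioning key should come from your own knowledge of the data):

```sql
CREATE TABLE word_index_part (
  word      VARCHAR2(100),
  record_id NUMBER
)
PARTITION BY RANGE (word) (
  PARTITION p_a_to_h VALUES LESS THAN ('I'),
  PARTITION p_i_to_p VALUES LESS THAN ('Q'),
  PARTITION p_q_to_z VALUES LESS THAN (MAXVALUE)
);

-- A single-word search then scans only one partition (partition pruning):
-- SELECT record_id FROM word_index_part WHERE word = 'oracle';
```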

jimaricat072100 Author Commented:
Partitioning made it a big success. Thank you, schwertner!