Why are my Oracle queries sometimes awfully slow?

Hi all,

I have a table with 3 million rows: an id column plus 16 fp columns, all of type INTEGER. All columns are indexed.

I perform the following query:

select count(*) from MYTABLE where BITAND(fp1,2144163094) = 2144163094 AND BITAND(fp2,1689182963) = 1689182963 AND BITAND(fp3,180991820) = 180991820 AND ... ... ...  AND BITAND(fp,-2023498656) = -2023498656;

So I use BITAND on all of the fp (INTEGER) columns.

This query takes 17 seconds to run. Is that normal? Is Oracle's BITAND function slow, or is it the INTEGER type? Should I use the RAW type instead?

I have a P4 2.4 GHz, 2 GB RAM, Win XP, Oracle 9.2.0.1.

Anyway, a plain 'select count(*) from MYTABLE' takes 17 seconds as well. Maybe some Oracle tuning would help?
 
thanks,
fifty_

grim_toasterCommented:
Have you tried to run an explain plan on your query?  What was the outcome?
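If you haven't: something along these lines in SQL*Plus would show the plan (a sketch, cut down to the first column and mask from your query; DBMS_XPLAN ships with 9.2):

EXPLAIN PLAN FOR
SELECT COUNT(*) FROM mytable
WHERE BITAND(fp1, 2144163094) = 2144163094;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);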
seazodiacCommented:
ok, try these steps:

1. Make sure your query is very selective (i.e., the result set should be less than 5% of the total 3 million rows).
2. There is another caveat with the BITAND function. BITAND returns a BINARY_INTEGER, so to use it in SQL you need to convert it to a number.
That means that when you use the BITAND function in a WHERE clause like this:
BITAND(fp2,1689182963) = 1689182963, Oracle has to do an implicit data conversion (from BINARY_INTEGER to a number). To make it more efficient, convert explicitly in your WHERE condition, like this: BITAND(fp2,1689182963) + 0 = 1689182963.

So to speed up the query, you should build a FUNCTION-BASED index on each of the fp columns, like this (note that BITAND takes two arguments, so the index expression must use the same mask as the query):

CREATE INDEX fp1_mytable_ix ON mytable (BITAND(fp1, 2144163094) + 0);

Of course, in case you don't know already, you need to make sure the following are in place before you can use function-based indexes:

You must have the QUERY REWRITE system privilege to create function-based indexes on tables in your own schema.
You must have the GLOBAL QUERY REWRITE system privilege to create function-based indexes on tables in other schemas.
For the optimizer to use function-based indexes, the following session or system parameters must be set:
QUERY_REWRITE_ENABLED=TRUE
QUERY_REWRITE_INTEGRITY=TRUSTED
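
Putting it all together, the setup would look something like this (a sketch; your_user is a placeholder for the actual schema, and the GRANT has to be run by a DBA):

GRANT QUERY REWRITE TO your_user;  -- run as a DBA

ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

-- one index per fp column, each with the mask used in the query:
CREATE INDEX fp1_mytable_ix ON mytable (BITAND(fp1, 2144163094) + 0);
-- ...and likewise for fp2 through fp16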


andrewstCommented:
Seazodiac, I don't understand what you just said: If BITAND returns a binary_integer which Oracle will implicitly convert, how does changing to BITAND(...)+0 make any difference?  Surely Oracle will still do the implicit conversion either before or after it adds the 0?  In any case, why would an explicit conversion (e.g. using TO_NUMBER) be faster than an implicit one?

seazodiacCommented:
Andrew:
I think you do get better speed when you do things explicitly, especially since here the explicit conversion is embedded in the index creation. That's why Oracle encourages you to do things explicitly.

One example:

Oracle does recommend "select * from emp where hiredate < to_date('10-OCT-03', 'DD-MON-YY')", NOT "select * from emp where hiredate < '10-OCT-03'".
0
andrewstCommented:
Yes, but that is because you are more likely to get errors if you are not explicit; to_date('10-OCT-03') without a format mask would be just as bad. Whether implicit or explicit, that conversion IS going to be done and will take however long it takes.

OK, there is possibly a small PARSE penalty for implicit conversions: Oracle has to decide how to treat the values.  But parsing overhead is insignificant unless you are actually parsing 1000s of times.

I feel a benchmark test coming on.  Maybe tomorrow...
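Something along these lines, probably (SQL*Plus, cut down to one column and mask from the question; run each a few times so caching affects both forms equally):

SET TIMING ON
-- implicit conversion:
SELECT COUNT(*) FROM mytable WHERE BITAND(fp1, 2144163094) = 2144163094;
-- explicit "+ 0" form:
SELECT COUNT(*) FROM mytable WHERE BITAND(fp1, 2144163094) + 0 = 2144163094;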
seazodiacCommented:

Andrew, I can confirm that, because I meant to explain exactly the same thing (the penalty in the parse stage) last time, but I didn't want to turn it into a long-winded story.

DrJekyllCommented:
I do not think implicit vs. explicit conversion is the issue here, though it makes for a nice diversion. As stated, you need to look at the selectivity of the data; a full table scan (FTS) is probably more efficient. Is this query going to be run often? If not, I would not add all those indexes. You also have to consider the DML implications of adding these indexes to a heavy-DML table. Consider your requirements and the implications thereof.

DJ
grim_toasterCommented:
As a quick point on the parsing issue, if you're worried about it, wouldn't it be more efficient to convert only the comparison value to an INTEGER? As in:
BITAND(fp1, 1689182963) = CAST(1689182963 AS INTEGER)

Also, do you really need the BITAND (I may be completely mistaken here)? Couldn't you just use
fp1 = 1689182963
...
and then put a single composite index on all of the columns? That would give a single index scan instead of what would appear to be a FTS (calling a function on a column without a function-based index forces a full table scan).
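For example (the index name is just illustrative, and the column list would need to cover all 16 fp columns):

CREATE INDEX mytable_fp_ix ON mytable (fp1, fp2, fp3 /* ... through fp16 */);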
fifty_ (Author) Commented:
Maybe the BITAND function is a little bit slow.

Is it possible to use more cache for this SELECT statement? It would be much faster if the fp columns were in the cache.
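For example, would something like this help (just guessing here; as I understand it, it tells Oracle to favour keeping the table's blocks in the buffer cache on full scans)?

ALTER TABLE mytable CACHE;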
andrewstCommented:
I think this is the most significant point:

"Anyway the 'select count(*) from MYTABLE' takes 17 second as well"

So we know that the query with all the BITANDs is doing a full table scan, but at least it is not taking any longer due to the BITANDs.  So:

How many of the 3 million rows does the BITAND version return? If it is a very small proportion, an index would probably help. This could be on just one of the columns or on several. It would need to be a function-based index because of the BITANDs - unless, as grim_toaster suggests, you don't actually need the BITANDs.

On the other hand, if it returns a higher proportion of the rows there may be no benefit to using an index.  In that case you are left with the fact that you must use a full table scan, and that currently takes 17 seconds.  Can that be reduced?  Not by query tuning, for sure.  The first thing to check would be whether the data is stored efficiently.  For example, if at one time there were 100 million rows and 97 million had been deleted, the full table scan would read all those empty blocks as well as the blocks that contain the 3 million rows.  
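A rough way to check that (a sketch; assumes the table is in your own schema, so USER_SEGMENTS applies):

-- how much space is allocated to the table? After mass deletes the
-- segment can be far bigger than the live rows need, and a full scan
-- still reads everything below the high-water mark.
SELECT blocks, bytes/1024/1024 AS mb
FROM user_segments
WHERE segment_name = 'MYTABLE';

-- if that is far more than 3 million rows justify, rebuild the table
-- (note: this invalidates its indexes, which then need rebuilding):
ALTER TABLE mytable MOVE;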