ttuyen32

asked on

How to limit the result set size of a query

Dear experts,
I am new to Oracle, and I would really appreciate your responses. I am developing an application that uses Java for the programming and Oracle for storing the data. Users can query the data through my application.

To conserve resources, the size of the result set (the number of rows) must be smaller than a specified number. If it is larger, an exception should be returned instead of the actual result.

I know that the JDBC Statement class provides the method setMaxRows(int max). However, it just silently drops the excess rows without any notification. I want to ask if there is any way in Oracle to check the size of a query before actually executing it.

I tried using select count(*) to get the size first, but that requires an additional scan of the database, so it is not efficient.

Is there any mechanism I can configure in the database so that it notifies me when the size of a query's result is larger than expected? Please help! Thank you very much!
ASKER CERTIFIED SOLUTION
Msquare

ttuyen32

ASKER

Thanks, Msquare!
I want Oracle to return an error or exception instead of the actual result set if the result size is greater than a specific value. With your suggestion, we just limit the size of the result set, and the remaining rows are silently dropped :D
Yes ... but at that point you have, in your client, all the information you need to decide what to do. I presumed you would then throw your own exception, or do some other error/alternate handling, once you know you've fetched 100 rows (and you only wanted 99). As far as doing "needless" work in Oracle goes, you haven't wasted much: only one row. I can think of some very clumsy/difficult joins which would cause Oracle to do a lot of work (simplified: work out 10000 rows before returning 100), but in most cases Oracle will pipeline the row-retrieval/join process, and the ROWNUM filter limit will abort all the "inner" searches. Your queries may have to be tuned by including the /*+ FIRST_ROWS(100) */ hint.

I "don't do Java" (can't know everything), but I presume setMaxRows will do the ROWNUM trick "behind your back". You can still use the same approach (i.e. ask for one more row than you want to allow the user) and the same error handling as I suggest.
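A minimal sketch of that "ask for one extra row" guard on the Java side. The class and method names (RowLimitGuard, TooManyRowsException) and the emp table are my own inventions for illustration; the ROWNUM wrapping is the technique described above:

```java
import java.util.List;

// Sketch of the "ask for one extra row" guard. Names here
// (RowLimitGuard, TooManyRowsException, the emp table) are
// illustrative, not part of any standard API.
public class RowLimitGuard {

    public static class TooManyRowsException extends RuntimeException {
        public TooManyRowsException(int max) {
            super("Query returned more than " + max + " rows");
        }
    }

    // Wrap an arbitrary query so Oracle stops after maxRows + 1 rows.
    public static String wrapWithRowLimit(String sql, int maxRows) {
        return "SELECT * FROM (" + sql + ") WHERE ROWNUM <= " + (maxRows + 1);
    }

    // If we got the extra row, the original result was too large.
    public static <T> List<T> enforceLimit(List<T> rows, int maxRows) {
        if (rows.size() > maxRows) {
            throw new TooManyRowsException(maxRows);
        }
        return rows;
    }
}
```

With JDBC you would run wrapWithRowLimit(userQuery, 100) through a PreparedStatement, collect the rows into a list, and pass that list through enforceLimit: the exception then replaces the oversized result, while Oracle itself never fetches more than 101 rows.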

There is one way to get Oracle not even to start a query if the query is too large: using the Resource Manager. This only becomes effective for "large" queries, i.e. it can throw out queries which would require more than x CPU seconds. It is usually used in data-warehouse-type systems, where you would work your way through billions of rows to get some summary output, and you want to stop users from asking overly complicated queries online (and allow those in batch mode only).

You can also use Profiles to limit a user to so many CPU seconds or disk I/O calls per SQL query. This will start the query, burn the allotted amount of CPU/disk, and then abort (giving you your error), but we will still have made the attempt (just like the ROWNUM limit).
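On the Java side, a profile-aborted call surfaces as a SQLException, so the application needs to recognize it. A hedged sketch: the ORA error numbers below (2393 for CPU_PER_CALL, 2395 for LOGICAL_READS_PER_CALL) are what I believe Oracle raises for per-call resource limits, but verify them against your Oracle release before relying on them:

```java
// Sketch: recognize a "resource limit exceeded" abort coming back
// from Oracle when a PROFILE limit (CPU_PER_CALL, LOGICAL_READS_PER_CALL)
// kills the query. The error numbers are assumptions -- check them
// against your Oracle release before relying on them.
public class ResourceLimitErrors {

    // ORA-02393: exceeded call limit on CPU usage (assumed)
    // ORA-02395: exceeded call limit on IO usage  (assumed)
    public static boolean isResourceLimitError(int oraErrorCode) {
        return oraErrorCode == 2393 || oraErrorCode == 2395;
    }
}
```

In the JDBC catch block you would test isResourceLimitError(e.getErrorCode()) and translate it into the application's own "result too large" error instead of a generic database failure.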

I can think of a third approach. It involves a convoluted step at your end, and will probably cost more than it saves (though that depends on the quantities of data blocks, rows, queries, etc. you have). You can use EXPLAIN PLAN on your query: the cost-based optimizer records how many rows it expects the query to return, and you then query the PLAN_TABLE for that number. BEWARE! The CBO estimate is only an estimate and can be very wrong. Also, you must have up-to-date statistics on all the tables involved.
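A sketch of that third approach, assuming the default PLAN_TABLE is available to the session. The statement ID and helper names are invented for illustration; the CARDINALITY column on the top plan line (ID = 0) holds the optimizer's row estimate for the whole query:

```java
// Sketch of the EXPLAIN PLAN pre-check. The statement id and helper
// names are illustrative; CARDINALITY on the plan line with ID = 0
// is the optimizer's estimated row count for the whole query.
// Real code should not build SQL by plain concatenation of user input.
public class PlanEstimate {

    // Step 1: populate PLAN_TABLE without running the query.
    public static String explainSql(String statementId, String sql) {
        return "EXPLAIN PLAN SET STATEMENT_ID = '" + statementId
                + "' FOR " + sql;
    }

    // Step 2: read back the optimizer's row estimate.
    public static String estimateSql(String statementId) {
        return "SELECT cardinality FROM plan_table"
                + " WHERE statement_id = '" + statementId + "' AND id = 0";
    }
}
```

You would execute the first statement, read the single number returned by the second, and only run the real query if the estimate is under your limit, remembering that the estimate can be badly wrong when statistics are stale.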

And that is about as close as you're going to get to the function you want.