k3n51mm asked:
Returning smaller chunks of large result sets in SQL Server (verbose)


We have a web application that connects to SQL Server 2000.
Platform: ASP.NET/VB on Windows 2000 Server.

We have a large, highly normalized database that has indexes where needed
and handles most queries in good form. However, we have recently come across
some limitations we did not anticipate in our Custom Query module.

Some of our queries are required to be executed using dynamic SQL, where the
query string is constructed based on approximately 10-15 parameters, chosen
by the user in the UI of the application. These parameters are passed to our
sprocs, which then construct the query strings and execute them.
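
For reference, here is roughly what one of these sprocs does, with simplified,
hypothetical table and parameter names (the real ones take 10-15 parameters):

CREATE PROCEDURE dbo.usp_CustomQuery
    @CustomerID int = NULL,
    @FromDate   datetime = NULL,
    @ToDate     datetime = NULL
AS
BEGIN
    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE 1 = 1'

    -- append a predicate only for the parameters the user actually chose
    IF @CustomerID IS NOT NULL SET @sql = @sql + N' AND CustomerID = @CustomerID'
    IF @FromDate   IS NOT NULL SET @sql = @sql + N' AND OrderDate >= @FromDate'
    IF @ToDate     IS NOT NULL SET @sql = @sql + N' AND OrderDate <= @ToDate'

    -- sp_executesql keeps the parameters typed and lets the plan be reused
    EXEC sp_executesql @sql,
        N'@CustomerID int, @FromDate datetime, @ToDate datetime',
        @CustomerID, @FromDate, @ToDate
END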

At any rate, when our queries returned very large result sets, it was taking
the browser way too long to render the resultant datagrid. So, I implemented a
paging system in the UI that has been working fine until now, displaying 100
records at a time from a temporary table, with page navigation links near the
datagrid.
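
The paging itself is nothing fancy. Simplified, with the same hypothetical names
(the actual work table is created per query with a unique name so it survives
between page requests, unlike a #temp table):

-- number the result set once via an IDENTITY column
CREATE TABLE dbo.Work_12345 (
    RowNum    int IDENTITY(1,1) PRIMARY KEY,
    OrderID   int,
    OrderDate datetime
)
INSERT INTO dbo.Work_12345 (OrderID, OrderDate)
SELECT OrderID, OrderDate FROM dbo.Orders ORDER BY OrderDate

-- each page request then reads only its 100-row slice
DECLARE @PageNum int
SET @PageNum = 1
SELECT OrderID, OrderDate
FROM dbo.Work_12345
WHERE RowNum BETWEEN (@PageNum - 1) * 100 + 1 AND @PageNum * 100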

However, we have a client who has recently uploaded a rather large amount of
archival data from past years, and some of the result sets can reach 90,000
or so records.

What we are now experiencing is that the webserver is passing the parameters to
the SQL server, and it's taking somewhere on the order of 8 minutes for the SQL
server to respond.

I have an idea of how we might be able to get around it, and I wanted to ask what
you all think of it, and I invite any better ideas - we're running out of them.

Here's my idea:

1) If the result set will have a record count of more than say 4000 records, we go
ahead and respond to the UI with the usual output parameters for implementing the
paging system: temp table name, record count, et cetera. We do this when we reach
4000 records in the temp table, to keep the response time low. So, the first response
from SQL server will always be a record count of 4000 if the row count is greater.
(the user will see the first 100 records in a datagrid, "Page 1 of 40".)

2) Then we keep filling the temp table with the rest of the results.

3) As the user clicks on the paging links near the datagrid, I update the page count
to reflect the correct number of pages, based on a new set of output parameters from
SQL. (maybe on their second click it would now display, "Page 2 of 71".)


This amounts to a sort of paging system on the SQL side as well, I guess. A rough sketch of what I mean is below.
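
Sketched out, again with the hypothetical names from above (and assuming the
work table dbo.Work_12345 from the earlier sketch already exists):

-- phase 1: fill only the first 4000 rows, so the first response is fast
SET ROWCOUNT 4000
INSERT INTO dbo.Work_12345 (OrderID, OrderDate)
SELECT OrderID, OrderDate FROM dbo.Orders
WHERE OrderDate >= '20000101'        -- same filters as the dynamic query
ORDER BY OrderDate
SET ROWCOUNT 0

DECLARE @RecordCount int
SELECT @RecordCount = COUNT(*) FROM dbo.Work_12345  -- 4000 on the first response

-- phase 2: append the remainder, so later responses report the true count
INSERT INTO dbo.Work_12345 (OrderID, OrderDate)
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders o
WHERE o.OrderDate >= '20000101'
  AND NOT EXISTS (SELECT 1 FROM dbo.Work_12345 w WHERE w.OrderID = o.OrderID)

SELECT @RecordCount = COUNT(*) FROM dbo.Work_12345  -- e.g. 7012 on the second click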


QUESTIONS:

1) 88,000 records doesn't seem like very much to me, but I lack the experience to make
a judgement. Should we simply tell the user to narrow down their query to return
a smaller number of rows, or should we go forward with trying to make this work?

2) Is this do-able, or is there another better way to handle it?

3) Is it possible to use a trigger or function in SQL to make it respond after 4000 records
are reached in the result set's temp table? I guess we can't get a complete record count
until the entire result set has been copied to the temp table, which is what we're trying
to avoid already.

Thanks, this is a hard one, for us anyway. But if Google can do it, then so can we.
ASKER CERTIFIED SOLUTION
jdlambert1:
A few things to consider.
You can limit rows with set rowcount.

set rowcount 4000

select * from mytable (index = ix_col_to_order_by)  -- the hint takes an index name, not a column name
where p1 = @p1
and p2 = @p2
...
-- do not include an ORDER BY clause

set rowcount can also accept a variable as its argument. Adding the index hint prevents the creation of a worktable and the sorting of results, because rows come back in index order, but forcing the index can also make the query plan much worse. You can try it and find out.
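
For example, the variable form, plus the reset that should follow it (otherwise every later statement in the session is capped too):

declare @limit int
set @limit = 4000
set rowcount @limit

select * from mytable (index = ix_col_to_order_by)
where p1 = @p1

set rowcount 0  -- back to unlimited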
BillAn1:

If it is taking 8 minutes to pull the records, chances are it will still be very slow no matter what restriction you put on the number of rows returned, e.g. using rowcount or a temp table. The delay is almost assuredly in running the original SQL to find the records, not in presenting them across the network. Unless your table is absolutely huge (terabytes) or your query is extremely complex (joining lots of tables in strange ways), 8 minutes sounds extremely long to return 90,000 rows.
I think you should focus on improving the query speed by examining what indexes you have in place on your tables.
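
Two quick starting points, using a hypothetical table name:

-- list the indexes already on a table
EXEC sp_helpindex 'dbo.Orders'

-- show the estimated plan without actually running the query
SET SHOWPLAN_TEXT ON
GO
SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate >= '20000101'
GO
SET SHOWPLAN_TEXT OFF
GO

Table Scan operators on your big tables in the plan output are the usual culprits.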