Dataset vs Datareader, and is there a way to make a Datareader skip to a position within a result set

I have to create a page that browses through a very large MSSQL database.  The development environment is C# and ASP.NET.

There seem to be two choices: use a DataSet with paging, or use a DataReader.  All the documentation I can find implies that all the records in a DataSet are held in memory on the server.  Using a DataSet for a large database therefore seems likely to cause performance problems.

But if I use a DataReader, is there some way to make it skip the first X records in the result set?  Otherwise, every time the user pages forward, I am going to have to read the whole result set up to that point again.

Alternatively, is there a SQL way of asking for "the 100th through 150th rows that match such-and-such a WHERE clause"?

The goal is actually very simple.  It is just the list function in an add, change, delete, and list application for a large database.  But I am at a loss as to the best way to code it for a real-world, large database.

Thank you.

razo Commented:
This is a query that will get you what you want using a DataReader:

   If DataGrid1.VirtualItemCount - startIndex > DataGrid1.PageSize Then
       i = DataGrid1.PageSize
   Else
       i = DataGrid1.VirtualItemCount - startIndex
   End If
   comm.CommandText = "SELECT * FROM (SELECT TOP " & i & " * FROM (SELECT TOP " & startIndex + DataGrid1.PageSize & " * FROM tablename ORDER BY field) DERIVEDTBL ORDER BY field DESC) DERIVEDTBL ORDER BY field"

The idea is to take the first 150 rows, reverse the order, keep the top 50, and then restore the original order.
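To make the reverse-order trick concrete, here is a small self-contained sketch using SQLite's LIMIT in place of T-SQL's TOP; the table name, column name, and the in-memory test data are made up purely for illustration:

```python
import sqlite3

# In-memory table with 10 rows to page through (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (field INTEGER)")
conn.executemany("INSERT INTO tablename (field) VALUES (?)",
                 [(n,) for n in range(1, 11)])

start_index, page_size = 4, 3  # skip 4 rows, fetch rows 5..7

# Same shape as the nested TOP query: take the first start+page rows,
# reverse them and keep only the page, then restore the original order.
rows = conn.execute("""
    SELECT field FROM (
        SELECT field FROM (
            SELECT field FROM tablename ORDER BY field
            LIMIT ?                       -- TOP startIndex + PageSize
        ) ORDER BY field DESC
        LIMIT ?                           -- TOP i (the page itself)
    ) ORDER BY field
""", (start_index + page_size, page_size)).fetchall()

print([r[0] for r in rows])  # rows 5, 6 and 7
```

The middle query is the key step: once the candidate rows are reversed, the desired page sits at the top where TOP (or LIMIT) can reach it.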
The DataReader is a forward-only stream of data and thus is very fast.  In the modern world, serving up 500 records from an MSSQL server on any reasonable hardware will hardly make a dent in your performance.  I do agree that using the TOP function of T-SQL is the way to go, and I applaud you for thinking about the overhead; but the fact is that horsepower is way cheaper than brainpower nowadays, and less of a consideration.


DataReader has a Read() method that advances it to the next record.

You can use a loop to skip records.  But, as Aeros said, the reader is forward-only, so you can't go back to the rows you skipped.
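The skip-by-reading approach looks like this; SQLite stands in for the DataReader here, and the table is hypothetical. Note that the cost grows linearly with the starting offset, because every skipped row is still fetched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(n,) for n in range(100)])

cur = conn.execute("SELECT id FROM t ORDER BY id")  # forward-only stream
for _ in range(90):                 # burn through the first 90 rows...
    cur.fetchone()
page = [row[0] for row in cur.fetchmany(10)]  # ...then read the page

print(page)  # ids 90..99
```

With a 4-million-row table and a deep starting offset, those throwaway reads dominate, which is why pushing the paging into the query itself is preferable.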

kevinmackenzieAuthor Commented:
Hi folks,

I think I need to update my email address in Experts Exchange, as I did not receive the usual notifications of these responses.  Will take a look at them this morning.

kevinmackenzieAuthor Commented:

What are the performance implications of the method you suggest for a really large table?  I.e., if startIndex were 2 million, would this cause the server to create two temporary tables with 2 million + pagesize rows in them?  Or will the query optimizer figure it out and retrieve only the rows needed?


My problem is that I need to dive into a 4-million-record database and page through it a screen at a time, starting anywhere in the database.  I also do not have a unique column value in each row, so I need a way to get the Xth through Yth rows.  Reading the whole 4M rows into a DataSet is not practical, even with today's servers.


I think it would take forever to make 2 million Read() calls to get to the middle of the database, so that won't work.

Thanks folks.  Razo, I look forward to your response, sorry for the delay.

kevinmackenzieAuthor Commented:

I tried to use your syntax, but if you want the 100th through 150th rows from a table that is X rows long, I think you need to know X and write the statement a little differently.  I don't know whether counting the total number of rows with COUNT(*) is expensive in terms of performance or not...

kevinmackenzieAuthor Commented:
I have solved this problem.  I am awarding the points to razo because his answer was the most useful, even if it did not get me all the way to a solution.

The solution I found is at:

and is referred to as "Jeff's solution".  Here is the pseudo code:

-- for each column in your sort, you need a variable to hold
-- the "starting values". In our case, we need two:
declare @startingDate datetime;
declare @startingID int;

-- again, we want to return results from row @a to row @b:
declare @a int;
declare @b int;

set @a = 200 -- start at row 200
set @b = 250 -- end at row 250

-- get the starting date and starting ID to return results:
set rowcount @a
select @StartingDate = DateCol, @startingID = ID
from yourtable
order by DateCol ASC,ID ASC

-- find out how many rows to return, and set the rowcount:
set @b = @b - @a
set rowcount @b

-- now return the results:
select * from yourtable
where DateCol > @startingDate OR
      (DateCol = @startingDate AND ID > @startingID)
order by DateCol ASC, ID ASC

-- clean up:
set rowcount 0
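The same two-step "seek" idea can be sketched outside of SQL Server; in this illustration SQLite's LIMIT/OFFSET stands in for the SET ROWCOUNT trick, and the table contents are invented test data (two IDs per date, so the tie-breaking ID column actually matters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (DateCol TEXT, ID INTEGER)")
conn.executemany("INSERT INTO yourtable VALUES (?, ?)",
                 [(f"2004-01-{d:02d}", i) for d in range(1, 11) for i in (1, 2)])

a, b = 4, 8  # return rows 5 through 8 (1-based), as with @a/@b above

# Step 1: read the sort-key values of row @a (LIMIT 1 OFFSET a-1 here
# plays the role of SET ROWCOUNT @a plus the variable-assigning SELECT).
starting_date, starting_id = conn.execute(
    "SELECT DateCol, ID FROM yourtable ORDER BY DateCol, ID LIMIT 1 OFFSET ?",
    (a - 1,)).fetchone()

# Step 2: seek past that key and take the next @b - @a rows.
rows = conn.execute("""
    SELECT DateCol, ID FROM yourtable
    WHERE DateCol > ? OR (DateCol = ? AND ID > ?)
    ORDER BY DateCol, ID
    LIMIT ?
""", (starting_date, starting_date, starting_id, b - a)).fetchall()

print(rows)
```

Because step 2 filters on the sort key rather than scanning past N rows, an index on (DateCol, ID) lets the server jump straight to the page, which is what makes this approach practical for very large tables.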