What are the performance implications of using derived tables?
Posted on 2004-10-09
In the ASP.NET section of Experts Exchange, I asked a question about how to retrieve (for example) the 200th through 250th rows of a result set from an MSSQL database. The actual table is very large and has duplicate key values, so I am struggling with how to page through it.
I believe that temporary tables (creating a table with an identity column) carry too much overhead, because the base table has 4 million records.
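For concreteness, the temp-table approach I have in mind is something like the following sketch (tablename and field are placeholders for the real table and sort key, and the column list is simplified):

```sql
-- Sketch of the identity-based paging approach I am worried about.
-- tablename / field are placeholders, not my real schema.
CREATE TABLE #paged
(
    rownum INT IDENTITY(1,1) PRIMARY KEY,
    field  INT  -- plus whatever other columns the page needs
);

-- Copying the ordered rows materializes the whole result set,
-- which is the overhead that concerns me on a 4M-row table.
INSERT INTO #paged (field)
SELECT field FROM tablename ORDER BY field;

-- Rows 201 through 250 of the ordered set.
SELECT * FROM #paged WHERE rownum BETWEEN 201 AND 250;

DROP TABLE #paged;
```

Even if this works, it seems like a lot of I/O just to fetch 50 rows.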
The following query has been suggested. I am not familiar with the DERIVED TABLE syntax, and I am not sure what the performance implications would be.
Can you please comment on how efficient this query would be on a 4M row table, and suggest a better method if you know of one?
Here's the query:
SELECT * FROM
    (SELECT TOP 50 * FROM
        (SELECT TOP 250 * FROM tablename ORDER BY field) AS d1
     ORDER BY field DESC) AS d2
ORDER BY field
The actual query would include WHERE clauses to retrieve only rows that match a particular key or key range, but there could still be thousands of matching rows. I am trying to avoid pulling a several-thousand-row result set back to the client for performance reasons.
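For example, with a key-range filter the nested form would look something like this (keyfield and the @lo/@hi bounds are placeholders for my real filter; as far as I can tell the filter only needs to appear in the innermost query, since the outer levels just reorder and trim its output):

```sql
SELECT * FROM
    (SELECT TOP 50 * FROM
        (SELECT TOP 250 * FROM tablename
         WHERE keyfield BETWEEN @lo AND @hi   -- placeholder filter
         ORDER BY field) AS d1
     ORDER BY field DESC) AS d2
ORDER BY field
```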
Will the query optimizer figure out what this means and do the right thing, or will it materialize two temporary tables, and if so, should I worry about the overhead of doing that?