derived vs temp table performance

select col1, col2, col3 etc.. into #temp from table1 where x = 1 and y = 2
select * from #temp, table2 where #temp.x = table2.x and etc..
drop table #temp
vs

select * from
(select col1, col2, col3 etc.. from table1 where x = 1 and y = 2) dt, table2
where dt.x = table2.x and etc..

Let's say both tables have a large number of rows.

1) Which is better performance-wise, and which is best practice?

2) Assume we put an index on table2.x and #temp.x. Will performance change, given that you can't put an index on a derived table? I guess the real question is: creating an index and then selecting on the indexed column vs. selecting on a non-indexed column, for large tables.

3) What if, instead of just two tables, you have 10 tables to join?
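For reference, the indexed temp-table variant from question 2 would look roughly like this (a sketch; table and column names follow the example above, and the index name is invented):

```sql
-- Materialize the filtered rows into a session-scoped temp table.
select col1, col2, col3
into #temp
from table1
where x = 1 and y = 2

-- Index the join column so the optimizer can avoid scanning #temp
-- for every qualifying row of table2.
create index ix_temp_x on #temp (x)

-- Refresh statistics so the optimizer costs the new index correctly.
update statistics #temp

select *
from #temp, table2
where #temp.x = table2.x

drop table #temp
```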

thanks
gagaliya asked:
alpmoon commented (accepted solution):
If you have proper indexes, derived tables should give the same performance without creating temp tables. ASE creates internal work tables instead of explicit temp tables, and they are more efficient.

Does the legacy code create any indexes?

If the volume is that big, you should check whether parallelism is enabled and tuned properly.
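A quick way to inspect the parallel-query configuration (a sketch assuming Sybase ASE; the exact parameter names can vary by version):

```sql
-- Show the current parallel-query settings via the ASE
-- system procedure sp_configure, one per batch.
sp_configure "number of worker processes"
go
sp_configure "max parallel degree"
go
sp_configure "max scan parallel degree"
go
```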
 
wilcoxon commented:
Where I work, we hardly ever use derived tables. I'm pretty sure nobody has revisited that in a long time, so the practice probably dates back to when Sybase had performance issues or bugs with derived tables (or possibly didn't support them at all).

With temp tables used the way you show above, on the occasions I've compared performance I've never noticed a difference from adding an index. That said, I almost always add one anyway; it takes very little time to add an index on a temp table.
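One way to check whether an index on the temp table actually changes anything is to compare query plans with and without it (a sketch using ASE's showplan session option and the #temp table from the question):

```sql
-- Show the optimizer's plan for each statement in this session.
set showplan on
go

-- Run the join once before and once after
--   create index ix_temp_x on #temp (x)
-- and compare the two plans.
select * from #temp, table2 where #temp.x = table2.x
go

set showplan off
go
```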
 
alpmoon commented:
I think many factors may affect overall performance in the end. It is better to test both and see which way gives better performance. The ASE Performance and Tuning Guide suggests using derived tables, as quoted below. However, a temp table with an index might still improve overall performance.

It also depends on how large your tables are. If you have tens of millions of rows, you need to explore more options, such as parallelism, and do more testing.

"Use of SQL derived tables

"Queries expressed as a single SQL statement exploit the query processor better than queries expressed in two or more SQL statements. SQL-derived tables enable you to express, in a single step, what might otherwise require several SQL statements and temporary tables, especially where intermediate aggregate results must be stored. For example:

select dt_1.* from
   (select sum(total_sales)
        from titles_west group by total_sales)
               dt_1(sales_sum),
   (select sum(total_sales)
         from titles_east group by total_sales)
               dt_2(sales_sum)
where dt_1.sales_sum = dt_2.sales_sum

"Here, aggregate results are obtained from the SQL derived tables dt_1 and dt_2, and a join is computed between the two SQL derived tables. Everything is accomplished in a single SQL statement.

"For more information, see Chapter 9, “SQL Derived Tables,” in the Transact-SQL User's Guide."
 
gagaliya (author) commented:
We have over 300 tables; many of them have over 10 million rows, with the largest at 700 million+.

The project I am working on needs to dynamically generate SQL that joins an arbitrary number of tables based on user selection, so it's very difficult to benchmark performance since the table set is dynamic in nature.

I'm just trying to figure out the best way to approach such a problem. The legacy code uses temp tables, but I am trying to explore better options and best practice.
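As a rough illustration of building the derived-table form dynamically (a hypothetical sketch: the table and column names are invented, ASE's execute-immediate syntax is assumed, and string length limits depend on ASE version and page size):

```sql
declare @sql varchar(1024)

-- In real code the table list and join predicates would be assembled
-- from the user's selection; they are hard-coded here for illustration.
select @sql =
    'select * from ' +
    '(select col1, col2, col3 from table1 where x = 1 and y = 2) dt, table2 ' +
    'where dt.x = table2.x'

-- Execute the generated statement.
exec (@sql)
```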
 
gagaliya (author) commented:
Thank you!