I am working with a large invoice-line-level table that will eventually be used for revenue reporting in a 9iR2 Oracle database. The table has 95 columns with an avg_row_len of 329; it is 3.6 GB in size and contains 9,248,977 rows. Query performance against this table alone (no joins to other tables) is poor: it takes about 2 minutes to sum revenue for a given day. For comparison, I built a second table with the same number of rows but fewer columns: 10 columns with an avg_row_len of 59, 650 MB in size, also 9,248,977 rows. Against that table, the same sum takes about 12 seconds.

I am trying to understand why there is such a significant performance difference between these tables. As I understand it, Oracle initially reads the data from disk and then performs the query in memory, and what is read from disk consists of entire blocks of full rows, regardless of how few columns the query actually returns. That should explain some of the initial performance difference. However, I would expect the difference to shrink on subsequent executions, because the blocks should then be served from the buffer cache in the SGA, yet I am not seeing a significant change.
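For reference, this is roughly the shape of what I am doing; the table and column names below are placeholders, not my actual schema:

    -- Narrow copy of the wide table, built as a straight CTAS
    CREATE TABLE invoice_lines_slim AS
      SELECT invoice_id, invoice_date, revenue_amt  -- ... 10 columns total
        FROM invoice_lines;

    -- The daily revenue sum, run against both tables
    SELECT SUM(revenue_amt)
      FROM invoice_lines             -- ~2 minutes (95 cols, 3.6 GB)
     WHERE invoice_date = TO_DATE('2004-01-15', 'YYYY-MM-DD');

    SELECT SUM(revenue_amt)
      FROM invoice_lines_slim        -- ~12 seconds (10 cols, 650 MB)
     WHERE invoice_date = TO_DATE('2004-01-15', 'YYYY-MM-DD');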
Can someone point me to documentation, or provide a detailed explanation, of what Oracle is doing over the life cycle of a query that pulls data from a large table with many columns?
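In case it is relevant to the answer: I have been assuming I can check the caching behavior with SQL*Plus autotrace, along these lines (again using the placeholder names from above):

    SET AUTOTRACE TRACEONLY STATISTICS

    -- Run the same query back to back and compare the "physical reads"
    -- and "consistent gets" statistics between the two executions. If
    -- the second run were fully served from the buffer cache, physical
    -- reads should drop toward zero on the repeat execution.
    SELECT SUM(revenue_amt)
      FROM invoice_lines
     WHERE invoice_date = TO_DATE('2004-01-15', 'YYYY-MM-DD');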