My application collects data from 20 different applications. The extracts arrive as txt files and are loaded into a temp (working) database; there is no referential integrity there and the data is not indexed. The data is then validated and cleaned, and rows that fail validation are held in suspense.

The validated data is posted to a main table in the principal database (the disktrans table) and indexed. The batch is run against this table and the data is posted into other tables, mostly as updates and inserts. The rows processed are then flagged as processed, but they continue to reside in this table. When the next batch occurs, new rows are appended to this table flagged as unprocessed, and all unprocessed rows are taken for processing.

The concern is that this table now has 40 million rows. Another table, disttransdt, has 96 million rows. These are the two main tables against which all queries are run.

Is there any other way to make this better? Any suggested solutions?
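To make the workflow concrete, here is a minimal SQL sketch of the flag-based cycle described above. Apart from the disktrans table name, everything (the column names, the processed_flag column, and the staging_validated source table) is an assumption for illustration, since the actual schema is not shown.

```sql
-- A minimal sketch of the flag-based incremental batch, under assumed names.
-- Hypothetical columns: trans_id, trans_date, amount, processed_flag.
-- Hypothetical source: staging_validated (the cleaned rows from the temp database).

-- 1. Append newly validated rows to the main table, flagged as unprocessed.
INSERT INTO disktrans (trans_id, trans_date, amount, processed_flag)
SELECT trans_id, trans_date, amount, 'N'
FROM   staging_validated;

-- 2. The batch picks up everything still unprocessed.
SELECT trans_id, trans_date, amount
FROM   disktrans
WHERE  processed_flag = 'N';

-- 3. After posting to the downstream tables, mark the rows as processed.
--    The rows themselves stay in disktrans, so the table keeps growing.
UPDATE disktrans
SET    processed_flag = 'Y'
WHERE  processed_flag = 'N';
```

The sketch shows why the row counts matter: every cycle inserts into, scans, and updates the same ever-growing table, which is where the 40-million-row concern comes from.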